gushi: (freebsd logo)

The problem:

  1. FreeBSD comes with Sendmail. Sendmail, in order to verify SSL certs, requires a CApath (i.e. a bunch of split files, one cert per file, with a hash symlink pointing at each).

  2. The sendmail docs specifically warn against using too large a cafile, which is why you should use the path (arguably a hack, but what are you gonna do).

  3. FreeBSD's ports only contain ca_root_nss, which installs only a single, monolithic .crt file. (i.e. probably too large)

  4. I can't find a good script which will split this file apart (I mean, sure, I can write one) and generate those hashes.

  5. The tool that comes with openssl that does this is called c_rehash -- FreeBSD rips it out of their base OpenSSL install (probably because it depends on perl, which is no longer in base). I think the real solution here is that the ca_root_nss port needs a port-readme that gives you these commands.

The solution:

  1. Install ca-root-nss from pkg.

  2. cd into the /usr/local/share/certs directory

  3. Split up the certs into their requisite files, using the split command: split -d -p 'Certificate:' -a 3 ca-root-nss.crt foo

  4. Remove the first chunk (it's the header text before the first cert, not a cert itself): rm foo000

  5. Use a quick for-loop to generate the hashes: for file in foo*; do ln -s "$file" "$(openssl x509 -hash -noout -in "$file")".0; done

Quickly

pkg install ca-root-nss

cd /usr/local/share/certs

split -d -p 'Certificate:' -a 3 ca-root-nss.crt foo

rm foo000

for file in foo*; do ln -s "$file" "$(openssl x509 -hash -noout -in "$file")".0; done
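
Before restarting sendmail, you can sanity-check the directory by asking openssl to verify one of the split certs against the hashed path itself; a self-signed root should come back OK if its hash link is in place (foo001 here is just whichever chunk you grab):

openssl verify -CApath /usr/local/share/certs foo001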

Testing it

I did the above, restarted sendmail, and noticed that now, when I connect to gmail, I get:

Aug 28 22:54:20 <mail.info> prime sm-mta[76815]: STARTTLS=client, relay=gmail-smtp-in.l.google.com., version=TLSv1.2, verify=OK, cipher=ECDHE-RSA-AES128-GCM-SHA256, bits=128/128

Which is great.

One thing we're always going to hit: SSL validation will still fail here:

Aug 28 22:54:18 <mail.info> prime sendmail[76809]: STARTTLS=client, relay=[127.0.0.1], version=TLSv1.2, verify=FAIL, cipher=ECDHE-RSA-AES256-GCM-SHA384, bits=256/256

because our cert doesn't have "127.0.0.1" or "localhost" as a common name. No real fix for that, sadly.

gushi: (Default)

Hrmm, it seems that the not-maintained-since-2006 livejournal client I used to use doesn't work with Dreamwidth.

I've asked on a couple of communities what, if anything, has changed. I've seen a list that claims the client works, but at the same time, searching for "Dreamwidth flat interface" turns up vague mumblings about that interface being deprecated.

Hopefully someone can help me out.

In the mean time, I'm just posting via web form, which is so blasé.

gushi: (Default)

A lot of the people I host are not coders. They don't understand things like php, globally scoped variables, deprecation warnings, database authentication plugins, insecure hash types, or the like.

They know only that they have code that worked fine for a decade, and then Some Jerk Ferrit did something that made their site not work.

Most of this is because PHP, as a language, is a Shit Show. The only reason PHP scripts are no longer responsible for most of the botnet activity on the internet is that someone decided to make smart light bulbs with globally routable IPv6 addresses.

Coding in php is like trying to sculpt something in clay, except that people keep dumping ingredients in the clay that change its consistency: sand, water, cement, cheerios.

For an admin, php is a security nightmare: you have 300 users, whose code can all alter each other's files. Oh, and on most webservers? Users can't alter the files PHP created. They're owned by the "www" user.

Shit. Show.

So, because vague reasons, the people who make the PHP language decide that a particular function is not workable in the particular coding style that they feel people should be using at that time. So, somewhere in a README file that nobody actually reads, they say "hey, you should stop using this function, it may go away in the next version".

I hosted several hundred websites at one time -- nobody knew about that README file, which, as far as they knew, was on display in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying "Beware of the Leopard."

So, a long long time ago, I killed two birds with one stone. I installed a program called "suPHP". What suPHP brilliantly does is sacrifice some of the speed normally present in PHP by running everyone's PHP scripts as them. It does this by decoupling PHP from the webserver, and winding up a tiny little PHP process to run your scripts.

The unexpected side effect here, is that it can run different versions of PHP for different users.

Now, as far as the operating system is concerned, you can only install packages for one version of PHP at a time, and at the time of this writing, that's PHP56, with a bunch of removed functions and deprecation warnings.

I've been building PHP from scratch for years, tho, and I know how to install a tiny little shadow copy of an older version of PHP where the webserver can get at it.

So, if you were to go look at this page, you'll see a phpinfo page that reports version 5.5. If you look at this default one, you'll see that it in turn is running PHP 5.6.

In fact, I even have a separate copy of apache running with the mod_php going on, for my webmail, where I can use the speed.

Best part? You can control it.

If you were to look at this htaccess file, you can see how easy it is to signal to the interpreter that you want 5.5. (Normally, apache won't serve .htaccess files out to the world; this one is special.) Basically three lines of code:

<FilesMatch "\.php$">
  SetHandler application/x-httpd-php55
</FilesMatch>
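
For the curious, the other half of this lives in suphp.conf, where each handler name maps to a per-version CGI binary. Something like the below -- the paths are illustrative, not my actual layout:

[handlers]
x-httpd-php="php:/usr/local/bin/php-cgi"
x-httpd-php55="php:/usr/local/php55/bin/php-cgi"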

In a former life, I let people use this to switch between php4 and php5. Right now the only handlers are php5 and php55. I could maybe add php54 as well.

That said -- if you possibly can, I advise using upgraded code that supports the latest thing. So if you're running something like Wordpress, please do update. If on the other hand, you have an old copy of Gallery, and it's not being hacked or hammered, and it suddenly broke, the above will fix it.

gushi: (Default)

Gmail outright rejects mail from my server delivered via ipv6, but allows it via ipv4.

What this means is that I'm going to have to simply maintain a list of gmail MX AAAA's and pump them into an ipfw reset rule like:

reset tcp from me to 2a00:1450:400c:c02::/64 dst-port 25
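
Generating that list is at least scriptable against the MX records. A rough sketch (not the actual script I'll end up cronning):

for mx in `host -t mx gmail.com | awk '{print $NF}'`; do host -t aaaa $mx; done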


On the same note, I am getting continually added to various google groups that send me a bunch of Indian CVs for people seeking employment. Google apparently will let anyone be added to a google group without confirmation.

I've been maintaining a procmail rule that looks like this:

# scored conditions: any one List-ID match is enough to fire the recipe
:0
* 1^0 ^List-ID:.*shaikhgroups\.net
* 1^0 ^List-ID:.*zain-22\.zaryabi\.info
* 1^0 ^List-ID:.*hadi-20\.hadebad\.info
# ...more goes here
| /home/danm/spamcopquick.pl

I should probably write a SpamAssassin module that detects this crap, and once it does, rather than filtering the body, detects the list-header and reports as appropriate. (I don't want to reject at SMTP transaction time, because I want the lists to get onto google's radar as a problem that's not simply a delivery issue)
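
As a dumb first cut, a static rule in local.cf would look something like this (the rule name and score are mine; the hypothetical module would learn the List-IDs on the fly instead of hardcoding them):

header   GUSHI_CV_SPAM  List-ID =~ /(shaikhgroups\.net|zaryabi\.info|hadebad\.info)/i
score    GUSHI_CV_SPAM  5.0
describe GUSHI_CV_SPAM  CV-spam google groups I never signed up for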

Note: Looks like this journal theme doesn't show the markdown "code" properly. Dammit.

gushi: (Nevar Button)

Mitigating a Mail Server Attack

Analysis

A few days ago, I noticed an unusual number of bounceback messages from one specific user directed at yahoo.com email addresses. When I looked inside the mail queue, I noticed that each message had dozens of recipients.

The troubling thing is, looking at the mail logs, I saw the dreaded line

maillog.7.bz2:Aug 18 21:33:08 <mail.info> prime sm-mta[80096]: AUTH=server, relay=[84.32.158.42], authid=vivian@littlestdomain.com, mech=PLAIN, bits=0

The AUTH=server bit tells me that rather than a rogue script running here (a not-uncommon thing that happens when you let users run PHP scripts), this was an actual password that got leaked, and was being used to send mail just as a regular user would.

I quickly concluded "compromised account", changed the user's password, and contacted them out-of-band with a new password. Life seemed good.

...then I noticed a second account doing the same thing. Okay, that's weird. Maybe my users have started falling for a really effective phishing scam?

When it happened a third time, a few days later, I recalled the old Ian Fleming quote:

"Once is happenstance. Twice is coincidence. Three times, it's enemy action."

Combat Perl

So, faced with the task of seeing if there were any other users who were affected, I wrote a short little bit of perl code to analyze my mail logs, and spit out each login, as well as the number of times each IP had logged in.

The code looks like this. No, there's no fancy "use strict" or anything like that. I used YAML as an output format because writing "Dump" is easier than writing a foreach loop to iterate over the hashes.

#!/usr/bin/perl

use YAML;

# pull every successful SMTP AUTH event out of the rotated log
open FOO, "/usr/bin/bzgrep -i \"auth=server\" /var/log/maillog.0.bz2|";
my @lines = <FOO>;
close FOO;
my %thing;
foreach my $line (@lines) {
  chomp $line;
  # grab the connecting IP (in square brackets) and the authid= value
  if ($line =~ /\[(\d+\.\d+\.\d+\.\d+)\].*authid=(\S+),/) {
    print "ip address $1 authid $2 found in $line\n";
    $thing{$2}{$1}++;    # count logins per authid, per source IP
  }
}

print Dump %thing;

Combat perl output

The code above produced output like what I have here. Note that I've altered all the logins and none of the below actually exist on my system. The IP addresses and counts, however, are real.

--- jim
---
209.85.219.47: 1
216.15.173.10: 4
--- bob
---
107.144.99.110: 1
--- moe
---
107.144.99.110: 10
--- curly@curly.net
---
163.23.34.45: 9
177.19.195.242: 10
177.55.38.11: 3
177.6.83.93: 1
179.182.193.136: 2
179.182.198.161: 3
186.222.71.36: 2
187.110.215.243: 6
187.35.157.21: 1
188.11.56.33: 4
189.75.25.147: 9
190.220.7.227: 2
193.152.213.171: 7
193.32.77.45: 4
200.63.165.85: 2
201.15.121.108: 3
201.216.221.245: 3
201.231.237.7: 4
201.41.166.52: 5
201.7.227.106: 1
201.77.115.15: 2
212.174.233.143: 4
212.40.242.27: 6
213.98.140.39: 4
213.98.78.213: 2
217.149.97.78: 5
217.216.67.202: 2
217.7.241.62: 1
219.92.29.90: 10
31.145.8.126: 3
37.159.194.66: 8
46.25.232.183: 1
50.79.128.153: 4
62.42.1.181: 1
66.17.46.53: 4
72.67.57.38: 3
77.229.13.35: 7
77.231.97.130: 7
78.186.71.8: 7
78.188.184.173: 5
78.188.19.231: 6
78.189.53.17: 2
80.35.0.235: 3
81.133.208.133: 4
81.142.195.149: 3
81.149.106.151: 2
81.214.85.206: 7
81.215.226.231: 13
81.218.104.229: 4
81.45.181.255: 4
82.108.126.242: 1
82.127.51.45: 2
82.153.165.168: 8
82.85.115.185: 5
82.91.75.35: 12
83.19.111.210: 3
83.56.26.36: 8
85.105.139.205: 7
85.105.85.72: 8
85.152.27.1: 2
85.251.7.78: 1
85.71.229.247: 3
87.139.118.153: 3
87.22.242.43: 2
87.23.197.245: 4
88.103.141.154: 6
88.12.31.52: 1
88.2.171.129: 4
88.247.171.72: 5
88.247.78.60: 5
88.248.23.202: 4
88.249.248.163: 4
88.250.240.101: 3
88.87.205.230: 3
88.9.119.148: 5
89.119.131.171: 4
92.56.76.179: 7
93.207.41.7: 1
95.236.181.166: 7
95.243.214.226: 1
95.60.124.88: 8
--- thing
---
67.68.160.180: 1
--- stuff
---
207.161.109.87: 2
--- steve
---
209.85.215.49: 1
209.85.217.180: 1
--- joe
---
70.117.105.120: 1

So, one of these things is not like the others. It's understandable that a person may have two or three IPs in a given period: their IP changes, or they're logging on from multiple computers.

Remember that these are ONLY the IP addresses pulled from the sendmail logs -- only connections where a piece of mail was sent, using SMTP auth.

So, that curly@curly.net entry? Yeah, that is what security researchers call a "snowshoe" attack -- not one server sending hundreds of mails (which would be easy to block). Instead it's spread out, and even though I now have a list of IPs I could block, what we're looking at here is a botnet of otherwise-compromised machines on a dozen or more ISPs.

The other thing to note about the curly@curly.net entry is that it's a full user@domain entry. Put another way, it's an email address -- one where the LHS (left hand side) just so happens to match the user's actual login.

What was going on here is that the way I (and most people) do SMTP auth in sendmail, there can be a concept of multiple "realms" defined -- for example, to log in against different authentication databases. As I'm not using this feature, the realm and everything after it is ignored (but still logged).

As I normally instruct my users to only use the barename to log in, any login using a full realm must be a compromised account.

Notifying the User

So, there's a problem here. While I can easily change the user's password and send them mail, this effectively locks them out of their account and keeps them from getting anything done until we touch base.

What I wanted to do was find a way to block the users who were using the "bad" format, while letting good users go on. I wanted a quick, guilt-free way to block the sending of mail, without breaking the communication link.

What I discovered was a ten year old post in the old usenet group comp.mail.sendmail, here.

With a little bit of tweaking, I had applied that same config to my own sendmail, and had configured a line in the access database to block a test user. The account still worked, but it wouldn't let them send mail. Perfect. I could block "curly@curly.net" without blocking "curly". (And yes, this relies on a little bit of obscurity -- but it's a botnet, not monkeys at typewriters; it's only going to try what it knows.)
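
From memory, the general shape of it (this is a sketch, not the exact config from that post, and the AuthBlock: tag is just a label I picked) is a local ruleset in the .mc that looks the authenticated login up in the access map. Note the whitespace between pattern and action must be tabs:

LOCAL_RULESETS
SLocal_check_mail
# grab the SMTP AUTH login; an empty workspace means no AUTH, so pass
R$*		$: $&{auth_authen}
R$@		$@ OK
# look the login up in the access map under our private AuthBlock: tag
R$+		$: $(access AuthBlock:$1 $: OK $)
RREJECT		$#error $@ 5.7.1 $: "550 Sending disabled for this login"

...plus one line per bad login in the access database:

AuthBlock:curly@curly.net	REJECT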

Identifying the source

So, three accounts with relatively secure passwords compromised in a week. What was the common thread? Could these people have all used the same insecure passwordless wifi networks? Is there some newfangled router exploit that mails your traffic all off to the highest bidder?

I spoke to all the users. None of them had fallen for any phishing emails. They were running different OSes, so a password-stealing virus was out. And then it hit me. Like a ton of bricks.

I've recently seen a surge of spam to addresses like macromedia-at-gushi.org, adobe-at-gushi.org, adobe2012-at-gushi.org.
The reason?

Well, it's all because adobe sucks at securing your data.
Sometime last year, people were able to download 150 million usernames and passwords from adobe's backend servers. And, as the article I just linked will tell you, those passwords were encrypted weakly, in a way that gave every user with a given password the same encrypted string.

While I'm not 100 percent sure this was the attack vector -- there have been several other leaks (last.fm, LinkedIn, eHarmony) -- I'm about 90 percent sure this is the likely cause, even if I don't know which site it was that ultimately spilled my users' beans.

gushi: (Default)

MySQL, as everyone knows, is a database server with two main storage engines.

One engine, MyISAM, is the default, and is reasonably fast and well-optimized.

The other engine, InnoDB, is a more "professional grade" engine, supporting things like transactions, row-level locking, and the rest. This engine is supported by a commercial entity, InnoDB, in the same way that PHP is supported by Zend: in that sort of "you can do more useful things if you're willing to pay for it" sort of way.

The other big, big annoyance is that out-of-the-box, InnoDB keeps all data, for all tables in all databases, in one huge fucking blob, that can automatically grow at a rate you specify, but that never shrinks. Not even if you drop every database. Right now, my little InnoDB blob is five gigs, and that's for a remarkably small number of applications using it.

And there we have the rub. There are a few applications that use the functions of InnoDB, and that default their table types to be InnoDB tables. Among these are MediaWiki, and gallery2.

Looking at that blob, there's also no easy way to tell which databases are using it. MySQL's "show table status" (which can tell you the engine) doesn't work globally; you have to select a database first (of which I have many).

The solution to this is an option that should have been on-by-default the whole time, the innodb_file_per_table option. This tells mysql to treat innodb blobs much like it would treat MyISAM tables: they go into the database-specific directories, so you can easily (with tools like du) tell which databases are bloating (because, for example, some user installed phpBB and then forgot about it).

After turning that option on, there's still a problem: it doesn't cause the SQL server to migrate your data for you. It only affects newly created tables. It's easy enough to dump-then-restore each database during a maintenance window, but wouldn't it be nice if there were some way to spot the databases which needed it? (Remember, InnoDB is not the default.)
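
For reference, turning it on is one line in my.cnf, and once it's on, a no-op ALTER is enough to rebuild an existing table out of the blob and into its own .ibd file (the blob, as noted, still never shrinks). The database and table names below are made up:

[mysqld]
innodb_file_per_table = 1

mysql somedb -e "ALTER TABLE sometable ENGINE=InnoDB"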

As it happens, the following short shell script can do this for you:

    
#!/bin/sh
# InnoDB finder $Id: findinnodb.sh,v 1.1.1.1 2009/12/27 12:48:33 danm Exp $
# Dan Mahoney, danm@prime.gushi.org
# ISC License applies

cd /var/db/mysql
# Change to your DB dir
# every table has a .frm file; strip the extension to get ./dbname/tablename
for i in `find . -regex '.*\.frm' | cut -d '.' -f 1-2`
do
  dbname=`echo $i | cut -d "/" -f 2`
  tablename=`echo $i | cut -d "/" -f 3`
  if [ -e "$i.MYI" ]
  then
    # MyISAM tables carry a .MYI index file alongside the .frm
    echo "$i is MyISAM"
  else
    if [ -e "$i.ibd" ]
    then
      # a per-table tablespace means innodb_file_per_table did its job
      echo "$i is InnoDB, but self-contained"
    else
      # no filesystem clue (blob-dwelling InnoDB and MEMORY tables look
      # alike from here), so ask mysql; -E prints vertically, one
      # "Field: value" per line, hence the grep/cut. $engtype keeps its
      # leading space, which is why the echo below says "is$engtype".
      engtype=`mysql -E $dbname -e "show table status like '$tablename'" | grep -i engine | cut -d ":" -f 2`
      # add a --password=xxx option above if you don't have one in .my.cnf or whatnot
      echo "$i is$engtype (from MySQL DB)"
    fi
  fi
done

Note that it goes by filesystem clues instead of eating time connecting to mysql; it only connects to mysql if it can't figure it out. For example, without asking mysql, a memory-only table looks identical to an InnoDB one.

I was rather surprised to find that nobody on the MySQL pages had suggested this; after all, "find the piggy" is a big part of detecting abuse and resource problems.

gushi: (Default)

Hello there! Do you care about the underlying protocols that drive the internet and make hostnames in general possible? Would you like to know how to take debug output of the most prevalent DNS software in use anywhere, and help the internet to run better? Are you just an outright NERD? Then perhaps you should read... ( A howto about DNS logging. )

gushi: (Nevar Button)

So apparently there was a real nasty worm out and about called Bagle. We all remember it, right?

So I was noticing in clearing out my error logs that I have a ton of hits to prime.gushi.org/777.gif (it's a 404 and to the best of my knowledge always has been).

So here's where it gets scary: I go to the reputable trend micro site seen here, looking for info on 777.gif:

http://www.trendmicro.com/vinfo/virusencyclo/default5.asp?VName=WORM_BAGLE.FN&VSect=T

And then I see it, slightly obscured...

This worm downloads possibly malicious files from the following URLs:

[...]

http://pr{BLOCKED}ushi.org/777.gif

Yup. Apparently prime's listed as one of the distribution sites for one of the worst virii out there.

Now, near as I can tell, that file's NEVER been in that webspace. In fact http://prime.gushi.org returns a 403 and points at a null directory.

This is not the first time I've been targeted like this, nor will it be the last. But Hrmmmmmm...How can I use this?

Well, I now have the means to VERY EASILY compile a list of every infected user out there and tie it into an email script.

In fact, I just realized, I could write up a very quick virus removal script, run it through bat2exe, and drop it right in place there -- except people who have done similar things have already been sued in this ridiculous world of ours.

I just emailed one antivirus vendor (and will email others) with the following:


Hello,

I am the owner of prime.gushi.org -- I recently discovered that I am rather popular with the bagle worm (I am listed 
as a download site for the malware) -- a file called 777.gif.  As far as I can tell, I have never hosted this file 
(that's my hostname, but leads to a "null" site).

I would like to do a bit more research to clean this up, as there's a file commonly distributed with movabletype 
that's also titled 777.gif.

I'd like to know if you could give me the file's info (the size, md5, sha1, strings output, etc).

I suppose I can email all the ISPs out there and have them fix their users -- or for that matter report those users to the various blacklists. Or I can just mod_rewrite this crap out of my logs.
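
As for that last option, it's really more of a mod_setenvif job than mod_rewrite. Untested, and the log path and format are whatever yours already are, but something like this would 410 the worm and keep its probes out of the access log:

SetEnvIf Request_URI "^/777\.gif$" worm_probe
CustomLog /var/log/httpd-access.log combined env=!worm_probe
Redirect gone /777.gif
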
gushi: (Default)

We've got a customer, actually a decent friend of mine too, who is getting Plesk (one of those all-in-one hosting management tools). Except that because he uses FrontPage, he needs an older version of Plesk.

I get to installing it, and the installer complains "you have perl 5.008006 installed, you need perl 5.008007".

Okay, um...

I try looking for perl 5.8.7 -- the current version is 5.8.8, and the one that came with the OS is 5.8.6. For a VERY SHORT while there was a 5.8.7 available. The installer doesn't like 5.8.8 (grr!!).

I try modifying the installer -- to tell it "you want 5.8.8". No good. The installer has an embedded checksum -- and it breaks my head as to how to even CREATE a file that has an embedded checksum, since changing the checksum CHANGES THE FILE, and thus CHANGES THE CHECKSUM. (The usual trick, for the record, is to compute the checksum with the checksum field zeroed out, then drop the result in.)

I look on the FreeBSD archive mirror (where old releases are kept in their entirety (linux could learn something from this)) -- and I find a perl 5.8.7 tarball here:

ftp://ftp-archive.freebsd.org/pub/FreeBSD-Archive/old-releases/i386/6.0-RELEASE/packages/lang/perl-5.8.7.tbz

Except that when I install it, it dies a horrible death because it's a 6.0 package, and doesn't have the right library links.

Okay, fine. Cracks Knuckles

I go cruising through the FreeBSD cvsweb repository, find the exact date that things got upped from 5.8.7 to 5.8.8, then I unleash this:


cvs co -D '12/23/2005' ports/lang/perl5.8
cvs server: Updating ports/lang/perl5.8
U ports/lang/perl5.8/Makefile
U ports/lang/perl5.8/Makefile.man
U ports/lang/perl5.8/distinfo
U ports/lang/perl5.8/pkg-descr
U ports/lang/perl5.8/pkg-message
U ports/lang/perl5.8/pkg-plist
cvs server: Updating ports/lang/perl5.8/files
U ports/lang/perl5.8/files/patch-MM_Unix.pm
U ports/lang/perl5.8/files/patch-SDBM-errno-fix
U ports/lang/perl5.8/files/patch-freebsd.sh
U ports/lang/perl5.8/files/patch-perl.c
U ports/lang/perl5.8/files/perl-after-upgrade
U ports/lang/perl5.8/files/use.perl

Oh, hello, Newman. I cd into the directory, build, kill off any previous versions, and install.

I take some pride in the fact that nobody else, period, could have done this. We've got one other tech with possibly the skill, but there's more BSD-centric knowledge required.

gushi: (Default)

At the colo I wanted to build an "emergency netboot recovery machine", so that any user whose machine had crashed could remotely reboot it into a network-booted recovery CD.

The colo has only three real major OSes we use:

Redhat/Fedora (mainly on the cobalts)
Debian Linux
FreeBSD

On a whim, I decided to research the Debian option first.

( Mostly URLs for my own reference )

Hrmmm, this shows serious promise.

What the world REALLY NEEDS, though...is a system that can just PXE-load ISO images. Or one that can mount an ISO, but also look at the boot blocks and create a similar boot-config file. The PXE loader would have to handle all the network throughput, and would need to handle "Emulating" a cdrom, since we can't assume the OS is smart enough to handle the network code.

Also, the same thing would need to have enough brains to redirect output to the serial ports/listening SSH server.
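
The closest existing piece I know of is syslinux's memdisk, which fakes a BIOS disk out of a floppy or hard-drive image fetched over TFTP; I don't believe it eats ISOs, which is exactly the gap. A pxelinux.cfg sketch (the paths and image name are made up), with the SERIAL line covering the no-console case at least for the loader itself:

SERIAL 0 9600
DEFAULT rescue

LABEL rescue
  KERNEL memdisk
  INITRD rescue.img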

The problem here is that a lot of "netboot" applications are for different reasons:

1) No bootable media (like my laptop) -- of course, you HAVE a keyboard and mouse in this case.

2) No hard drive (like my HP Envizex) -- in this case you have a console as well.

3) No console hardware. I've used this in cases where the screen was shattered, and in situations where I was "remote", via our terminal server. It actually requires getting SEVERAL things to work right all at once (netboot support in the kernel and install media, serial support at all stages of the bootloader, and in some cases, serial support at the application level.)

It's going to be a bitch to get this working.

gushi: (Default)
Guys,

phpBB 2.0.16 is out. Mass upgrades will be ensuing tonight. If you don't want this, upgrade on your own.

-Dan
gushi: (Default)
Some people ask what I do about system security on prime. I'm interested in sharing.

I've seen a lot of posts that say "don't give out ssh access". I think that's bullshit. Anyone who wants to can upload a CGI/PHP script that will allow them the equivalent of shell access almost instantly. Granted, there is a class of users on the system who can do NOTHING but email, and they have no SSH/CGI/FTP access. Similarly, setting someone's shell to /bin/date (which will allow FTP, but no shell) won't stop them from uploading a script.

Security is a layered thing. I certainly don't know everything about it, and I don't believe anyone can. I know what I need to, and always try to learn.

I run Webmin. I run it behind SSL, and I run it on a non-standard port. In the event of a compromise, lockout, or fat-fingered root password, webmin is a convenient back door. Additionally, it's proven an invaluable tool for certain things, like MySQL. I exchange about one email a month with the author about possible improvements.

I run AIDE. AIDE basically takes a checksum of important binaries on your system (in my case, anything in *bin: /bin, /sbin, /usr/bin, /usr/sbin, /usr/local/sbin, /usr/local/bin) and checks everything nightly. The checksum database resides on (get this) a write-protected floppy sitting in the floppy drive. Good luck hacking that.

I have no qualms about running webmin, although there have been holes discovered in the past, because I run it someplace different from usual. How do I know people won't find it on a portscan? Simple. My open ports list is like a minefield. If you connect to any of 60 commonly-exploited ports, prime will defend itself and firewall itself against you. Permanently. You won't be able to connect to it at all. The ports list is scattered enough that it's hard to hit by accident.
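
The blocking half of that minefield is just an ipfw table; the sketch below is the shape of it, with a made-up table number and an illustrative port list (the piece that watches the logs and does the "table add" is a script I'm not reproducing here):

ipfw add 100 deny ip from table\(13\) to me
ipfw add 500 deny log tcp from any to me 135,139,445,1433 setup

ipfw table 13 add 203.0.113.45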

I have a logfile parser that runs once an hour, that goes through all my logfiles and emails me if it finds anything unusual or out of the blue (failed logins, possible attacks, etc).

Additionally, there's also a system in place that keeps track of when people FIRST log in, as well as when they log in from an unusual suffix, cross-checked against a list of country codes. (i.e. if Joe logs in from Venezuela).

I run MRTG, which normally is used to graph traffic, but I use it to graph things like system load, the number of logged in (and unique) users, and the number of active processes.

This is all stuff to protect the server. Part 2 will be the stuff I do to protect the user.
gushi: (Default)
So I was tapped a few minutes ago to head into the city to replace a piece of equipment at one of the NYC telecom hotels. Writing this on the train now, will post later I guess since this laptop can't talk to the phone.

Well, that's a misnomer actually -- the phone itself can make LJ posts, and it has an SSH client. And since both speak IR, I could beam this whole post over to the phone and then upload it.

But that's overkill, ne?

Anyway, I've looked into the SSH key thing I mentioned before. And I've decided it's absolutely stupid.

Basically, publishing the keys in NORMAL dns isn't enough. You have to be using the DNSSEC secured DNS extensions. What this means is every time I changed the gushi.org zone, I would have to generate a signature (nothing inherently hard about that, everything's scripted anyway). I would have to publish that signature in my DNS. And then, what's worse, is that everyone else, has to be using a DNS client that UNDERSTANDS the security enhancements, and passes on the "yes, this is secure" data to the end user. Worse still, each of those DNS servers has to accept, and TRUST my DNS public key. So if you're a user on a dynamic comcast IP (and presumably using the comcast DNS servers), Comcast would have to accept my key and include it into their system. AND, they think you should be running some encrypted protocol between yourself and comcast's DNS servers, like IPSEC.

Now, why the hell can't I just take my GeoTrust certificate that says "yes, we've certified that this person runs gushi.org", and stuff THAT into my zonefile (this is how Sendmail, Apache, ProFTPd, Webmin, and Usermin ALL work)? Then comcast would say "we believe in GeoTrust, and they say to trust you, therefore everything seems to be in order".

Of course, the system outlined above seems to be a replacement for caching the keys locally, which is even more stupid. All I'd like to see is something like this.

shell#ssh danm@prime.gushi.org
checking dns for prime.gushi.org...
key found in DNS...

the ssh key coming from prime.gushi.org, id aa.aa.aa.aa.aa.aa.aa is not known,
HOWEVER, it *does* match the key found in DNS, as retrieved from ns2.gushi.org

Would you like to continue connecting? (y/n)



From there, it would be business as usual. The key caches would still be used, instead of relying purely on DNS. SSH would still check the key cache, and would still bitch heavily if the connecting public key didn't match the one in the cache. Period. This would only serve as a method of distributing the key that makes more sense than "just type yes".
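
Amusingly, half the plumbing already exists: ssh-keygen will spit out SSHFP records ready to paste into a zonefile, and if memory serves, newer OpenSSH has a VerifyHostKeyDNS option that prompts roughly like my mockup above. (The fingerprints below are fake.)

$ ssh-keygen -r prime.gushi.org
prime.gushi.org IN SSHFP 1 1 dd465c09cfa51fb45b2d0d3a6a5cf5d664c8e8f2
prime.gushi.org IN SSHFP 2 1 4f1c6eab7a2059e16f7d1b9e0118a5c3d2b44f01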

I suppose, optionally, that this kind of thing could be checked *every time*...but the cache would still be preferred.

Now, it's assumed that someone with enough brains to spoof a man-in-the-middle attack would also be able to spoof the DNS query that grabs my key (that's why the guys were talking about DNSSEC).

The other thing I'd love to see, as an optional "comment" field in the key, is a how-to-verify field.

The ssh key aa.aa.aa.aa.aa.aa.aa.aa is not known, however, the creator of this key has stated that this key may be verified in any or all of the following way(s):

NOTE: You should personally check as many of the following as you feel are necessary to verify that this id is authentic.

"see url: http://www.gushi.org/keyinfo.txt"
"if in doubt, call Gushi at 1-866-LI-GUSHI, dial option 12"
"Check http://www.livejournal.com/userinfo.bml?user=gushisystems"
"Key fingerprint should be sent out in the footer of signup e-mails"
"fingerprint is printed on the back of Gushi's business card"

Now, of course, those methods are easily compromisable too...but security is a layered thing, and it's assumed that if someone is running a man-in-the-middle attack against prime.gushi.org, they won't be able to gafutz with ALL those methods.

Doing the first bit, the DNS lookup, could be done with only a patch to the SSH code...right now, the spec states:

2.4 Authentication

   A public key verified using this method MUST NOT be trusted if the
   SSHFP resource record (RR) used for verification was not
   authenticated by a trusted SIG RR.

   Clients that do validate the DNSSEC signatures themselves SHOULD use
   standard DNSSEC validation procedures.

   Clients that do not validate the DNSSEC signatures themselves MUST
   use a secure transport, e.g. TSIG [9], SIG(0) [10] or IPsec [8],
   between themselves and the entity performing the signature
   validation.


Of course, the spec (http://www.snailbook.com/docs/dns-fingerprints.txt) also states "Expires March 5, 2004" so I'm not sure how real this is. I think I could make a serious motion toward getting this made real.

The sourceforge SSH servers got whacked a while ago, and a lot of people wound up revealing their sourceforge ssh passwords to the thing. The hackers were then able to log into the sourceforge shell accounts, and use the STORED KEYS that people had there to jump to other places. People actually VERIFYING KEYS would help this a lot.

As for the second part, the key "extensions", those would probably lead to widespread breakage, and we'd probably have to wait for the widespread adoption of ssh3 (which I'm not even sure is a draft yet).
