gushi: (Default)

In trying to get php4 (which I've admin'd for many years) running last night -- it occurred to me: why not php3? I mean, that's very, very, very dead, right?

Still, the source code was available. I looked at the output of ./configure --help, and came up with something reasonable:

./configure --with-mysql=/usr/local --with-mcrypt=/usr/local --with-mhash=/usr/local --with-ftp --with-gettext=/usr/local --with-zlib=/usr

And then the trouble started. It wouldn't build.

Outright Build Failure

There were a ton of warnings (this always happens with older code), but the show-stopper was:

functions/crypt.c:133:12: error: 'PHP3_MAX_SALT_LEN' undeclared (first use in this function)

In looking through the source code, I found this:

#define PHP3_MAX_SALT_LEN 2
#define PHP3_MAX_SALT_LEN 9
#define PHP3_MAX_SALT_LEN 12

Stupidly, there was no default being set. So, I had to go figure out why.

gcc on a modern system

Most people who have used configure to build a unix program know it as this roomba-like script that goes and magically discovers how your system works. To save reinventing the wheel, there's now a tool called GNU Autoconf that generates most of this for you. But in the old days, the way configure worked was basically by trying to trick the C compiler into building a small test program.

For example, to figure out if the crypto functions were working, the user would see something like:

checking for standard DES crypt... no
checking for extended DES crypt...

But that "no" didn't come up immediately. Instead, the little c program that configure ran was segfaulting.

#line 4184 "configure" 
#include "confdefs.h"

#include <crypt.h>

main() {
exit (strcmp((char *)crypt("rasmuslerdorf","rl"),"rl.3StKT.4T8M"));
}

You can go ahead and run that program if you like with a modern GCC. It'll complain about "exit" not being properly defined, it'll complain that crypt isn't being included in the right places. Even if it compiles, it'll crash, segfault, and dump core if you try to run it.

So, after manually patching configure to include modern libraries and build the test programs right, it magically started detecting enough crypto functions to let the build work.

That little c program now looked like:

#line 4184 "configure"
#include "confdefs.h"
#include <crypt.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
int main() {
  exit (strcmp((char *)crypt("rasmuslerdorf","rl"),"rl.3StKT.4T8M"));
}

But there were more failures

If you look at my configure line above, I pulled in a bunch of libraries: things like zlib, and mysql, and mcrypt. All of those have been changing over time, and no longer export exactly the same functions they did in 1998.

So, I relented: I simply decided to try to get the core functions running as a proof-of-concept.


It built, and I copied the new "php" binary to /usr/local/bin/php3-cgi

When I tried to run a script with it, however, it simply complained at me:

No input file specified

Hacking around the parser

It became clear to me that in the early days of PHP, there was a supplied CGI binary, but it still required some tight wiring into the webserver to work. So, rather than having suphp load it like other versions of PHP, I wrote a tiny wrapper CGI script, that would just call the "php info" functions:

open FILE, "/usr/local/bin/php3-cgi /home/danm/public_html/phptest/3/php.php|" or die "Cannot open PHP";
while (<FILE>) {
  print;
}

It still cryptically complained that there was no input file specified, and then it occurred to me that something was telling it that it was running as a CGI, but it wasn't picking up on the path. Thus, I cleared the %ENV hash to make it think it was running on the command line. (It still puts out a header and speaks HTML, tho).


What was the point of this? Well, for starters, one of my long-time claims to fame/shame has been: I am not a C programmer. I've wanted to learn for a decade, and this is a sysadmin who runs critical internet infrastructure talking here. It bothers me a lot. So saying "nope, doesn't work" wasn't acceptable to me. The cost of an afternoon to know I could tackle this was worth it. And it was a nice excuse to get back to blogging.

Having the interpreter lying around isn't actually super harmful, but it's also not super useful. But I learned about C, and about myself. This silly project got me into the zone.

Proof of life

There You Go


A lot of the people I host are not coders. They don't understand things like php, globally scoped variables, deprecation warnings, database authentication plugins, insecure hash types, or the like.

They know only that they have code that worked fine for a decade, and then Some Jerk Ferrit did something that made their site not work.

Most of this is because PHP, as a language, is a Shit Show. The only reason PHP scripts are not still majorly responsible for most of the botnet activity on the internet is because someone decided to make smart light bulbs with globally routable ipv6 addresses.

Coding in php is like trying to sculpt something in clay, except that people keep dumping ingredients in the clay that change its consistency: sand, water, cement, cheerios.

For an admin, php is a security nightmare: you have 300 users, whose code can all alter each others' files. Oh, and on most webservers? Users can't alter the files PHP created. They're owned by the "www" user.

Shit. Show.

So, because vague reasons, the people who make the PHP language decide that a particular function is not workable in the particular coding style that they feel people should be using at that time. So, somewhere in a README file that nobody actually reads, they say "hey, you should stop using this function, it may go away in the next version".

I hosted several hundred websites at one time -- nobody knew about that README file, which, as far as they knew, was on display in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying 'Beware of the Leopard.'

So, a long long time ago, I killed two birds with one stone. I installed a program called "suPHP". What suPHP brilliantly does is sacrifice some of the speed normally present in PHP by running everyone's PHP scripts as them. It does this by decoupling PHP from the webserver, and spinning up a tiny little PHP process to run your files.

The unexpected side effect here, is that it can run different versions of PHP for different users.

Now, as far as the operating system is concerned, you can only install packages for one version of PHP at a time, and right now, at the time of this writing, that's PHP56, with a bunch of removed functions and deprecation warnings.

I've been building PHP from scratch for years, tho, and I know how to install a tiny little shadow copy of an older version of PHP where the webserver can get at it.

So, if you were to go look at this page, you'd see a php info page that reports PHP version 5.5. If you look at this default one, you'll see that it in turn is running PHP 5.6.

In fact, I even have a separate copy of apache running with the mod_php going on, for my webmail, where I can use the speed.

Best part? You can control it.

If you were to look at this htaccess file, you can see how easy it is to signal to the interpreter that you want 5.5. (Normally, apache won't serve .htaccess files out to the world; this one is special). Basically three lines of code:

<FilesMatch "\.php$">
  SetHandler application/x-httpd-php55
</FilesMatch>

In a former life, I let people use this to switch between php4 and php5. Right now the only handlers are php5 and php55. I could maybe add php54 as well.

That said -- if you possibly can, I advise using upgraded code that supports the latest thing. So if you're running something like Wordpress, please do update. If on the other hand, you have an old copy of Gallery, and it's not being hacked or hammered, and it suddenly broke, the above will fix it.


TL;DR: If you have a thing that's broken for you, contact me and we'll figure out a fix. If you have a DB-based thing or a PHP-based thing, this is likely.


Upgrades last night went well, but a few things are being weird.

BSD Stupidity

  • For some reason, pkg upgrade didn't reinstall proftpd. Easily enough fixed, but if it missed that, it may have missed other things.

  • Mysql didn't get upgraded from 5.5 to 5.6, but all the php stuff was linked against 5.6, so I manually upgraded mysql-server to 5.6 and ran a bunch of upgrade scripts.

  • Stupidly, the FreeBSD installer removed named.conf because BIND is no longer part of the base tree. DUMB. Like, there's no other reason a person would want that file? (Luckily, I had backed it up).

  • Also stupidly, trying to install bind9.11 tries to uninstall zkt. WTaF?

  • Freebsd-update wanting to overwrite my (not MC, CF) was just plain dumb. Same with my ntp.conf. I think I'm just going to globally call a /usr/local/etc/ntp.conf in rc.conf, and let it stop complaining about any local changes.

  • Something tickles the password file that causes pkg's user-manipulations to fail, somehow getting the DB and the textfile out of sync.

  • People had warned me about my disk devices changing names, but as this is a VM with scsi-based vdisks this didn't affect me.

PHP Stupidity

  • PHP no longer likes mysql's built-in "old style" passwords. If you have a site that's DB-based and you've been hosted by me for like a LONG time, I'll need to do some tweaking on the backend for you.

  • PHP's session dir got weird again. I may need to define a startup script to fix perms on that. (Come to think of it, I should define a crontab to do cleanup on that anyway).

  • As usual, there's a number of deprecated and "removed" PHP functions. I'm vaguely contemplating building static versions of older versions of PHP from scratch to try and resolve these. Because I use suPHP, it lets me determine the PHP interpreter at a per-site or even per-file level. In a past life, this let me run php4 and php5 at the same time.

(Yes, an unstable version of php5.4 sticking around is arguably bad, but if it's a thing I only turned on for a given site that was otherwise broken and that site runs only as that user, I consider this fairly low risk).

Future Work

  • I've accepted that there's always going to be a couple of packages I need to build myself. That said, I should act like a proper port maintainer, and maintain "diff" files for them that are easily applied. I might even reach out to the official package maintainers on some of this stuff and see if they can be included.

  • Because this system started life using ports and pkg-classic, my packages have no idea which packages are "automatic" (i.e. were not explicitly installed, but merely installed as dependencies), so pkg autoremove may not work so well for me. At some point, I'll manually audit the dependency tree.

  • Squirrelmail's cert is marked as insecure because it's SHA1. I've put in for a reissue, but Geotrust is taking their sweet-ass time on it.

  • Now that I can support current state-of-the-art crypto, I'll likely do some cert tweaking for those things that use SSL. (Webmin, proftpd, Squirrelmail).

  • At some point, I really want to do a proof-of-concept that lets you accept weaker SSL settings, but redirect to a framed warning page. Because the default behavior of this (connection failed) just sucks.


Mitigating a Mail Server Attack


A few days ago, I noticed an unusual number of bounceback messages from one specific user directed at email addresses. When I looked inside the mail queue, I noticed that each message had dozens of recipients.

The troubling thing is, looking at the mail logs, I saw the dreaded line

maillog.7.bz2:Aug 18 21:33:08 <> prime sm-mta[80096]: AUTH=server, relay=[],, mech=PLAIN, bits=0

The AUTH=server bit tells me that rather than a rogue script running here (a not-uncommon thing that happens when you let users run PHP scripts), this was an actual password that got leaked, and was being used to send mail just as a regular user would.

I quickly concluded "compromised account", changed the user's password, and contacted them out-of-band with a new password. Life seemed good.

...then I noticed a second account doing the same thing. Okay, that's weird. Maybe my users have started falling for a really effective phishing scam?

When it happened a third time, a few days later, I recalled the old Ian Fleming quote:

"Once is happenstance. Twice is coincidence. Three times, it's enemy action."

Combat Perl

So, faced with the task of seeing if there were any other users who were affected, I wrote a short little bit of perl code to analyze my mail logs, and spit out each login, as well as the number of times each IP had logged in.

The code looks like this. No, there's no fancy "use strict" or anything like that. I used YAML as an output format because writing "Dump" is easier than writing a foreach loop to iterate over the hashes.


use YAML;
open FOO, "/usr/bin/bzgrep -i \"auth=server\" /var/log/maillog.0.bz2|";
my @lines = <FOO>;
my %thing;
foreach my $line (@lines) {
  chomp $line;
  if ($line =~ /\[(\d+\.\d+\.\d+\.\d+)\].*authid=(\S+),/) {
        print "ip address $1 authid $2 found in $line\n";
        $thing{$2}{$1}++;
  }
}

print Dump %thing;

Combat perl output

The code above produced output like what I have here. Note that I've altered all the logins and none of the below actually exist on my system. The IP addresses and counts, however, are real.

--- jim
--- 1 4
--- bob
--- 1
--- moe
--- 10
--- 9 10 3 1 2 3 2 6 1 4 9 2 7 4 2 3 3 4 5 1 2 4 6 4 2 5 2 1 10 3 8 1 4 1 4 3 7 7 7 5 6 2 3 4 3 2 7 13 4 4 1 2 8 5 12 3 8 7 8 2 1 3 3 2 4 6 1 4 5 5 4 4 3 3 5 4 7 1 7 1 8
--- thing
--- 1
--- stuff
--- 2
--- steve
--- 1 1
--- joe
--- 1

So, one of these things is not like the others. It's understandable that a person may have two or three IPs in a given period: their IPs change, or they're logging on from multiple computers.

Remember that these are ONLY the IP addresses pulled from the sendmail logs -- only connections where a piece of mail was sent, using SMTP auth.

So, that entry? Yeah, that's what security researchers call a "snowshoe" attack -- not one server sending hundreds of mails (which would be easy to block), but the sending spread thin. Even though I now have a list of IPs I could block, what we're looking at here is a botnet of otherwise-compromised machines on a dozen or more ISPs.

The other thing to note is that the login is a full user@domain entry. Put another way, it's an email address -- one where the LHS (left-hand side) just so happens to match the user's actual login.

What was going on here: in the way I (and most people) do SMTP auth in sendmail, there can be multiple "realms" defined -- for example, to log in against different authentication databases. As I'm not using this feature, the realm and everything after the @ is ignored (but still logged).

As I normally instruct my users to only use the barename to log in, any login using a full realm must be a compromised account.

Notifying the User

So, there's a problem here. While I can easily change the user's password and send them mail, this effectively locks them out of their account and keeps them from getting anything done until we touch base.

What I wanted to do was find a way to block the users who were using the "bad" format, while letting good users go on. I wanted a quick, guilt-free way to block the sending of mail, without breaking the communication link.

What I discovered was a ten-year-old post in the old usenet group comp.mail.sendmail, here.

With a little bit of tweaking, I had applied that same config to my own sendmail, and had configured a line in the access database to block a test user. The account still worked, but it wouldn't let them send mail. Perfect. I could block the full "curly@domain" form without blocking plain "curly". (And yes, this relies on a little bit of obscurity -- but it's a botnet, not monkeys at typewriters; it's only going to try what it knows).

Identifying the source

So, three accounts with relatively secure passwords compromised in a week. What was the common thread? Could these people have all used the same insecure passwordless wifi networks? Is there some newfangled router exploit that mails your traffic all off to the highest bidder?

I spoke to all the users. None of them had fallen for any phishing emails. They were running different OSes, so a password-stealing virus was out. And then it hit me. Like a ton of bricks.

I've recently seen a surge of spam to a few very specific addresses of mine.
The reason?

Well, it's all because Adobe sucks at securing your data.
Sometime last year, people were able to download 150 million usernames and passwords from Adobe's backend servers. And, as the article I just linked will tell you, those passwords were encrypted weakly, and in a way that gave every user with a given password the same encrypted password string.

While I'm not 100 percent sure this was the attack vector -- there have been several other leaks (LinkedIn, eHarmony, among others) -- I'm about 90 percent sure a breach like this is the cause, even if I don't know which site it was that ultimately spilled my users' beans.

Page generated Jul. 26th, 2017 02:48 am
Powered by Dreamwidth Studios