php open locking daemon

Don’t you hate that… when it’s 2:00am, and you really should be in bed, but your mind has hold of a problem and won’t let it go? I have a project where it would be really handy for a process to be able to lock (arbitrary string identifier) and for another process to be able to check whether (arbitrary string identifier) is still locked. And the processes that do the locking can die… so the lock really needs to expire when they do. I could use MySQL’s GET_LOCK, but I’m already abusing the hell out of that for more distributed things in the originating processes (and since you cannot hold more than one MySQL named lock at a time per connection, I don’t think it would work here anyway). Besides, these locks are machine-wide, not network-wide…

I don’t like flock because you have to actually create a file to try and lock it, leaving race conditions and the possibility of orphaned files on the file system, which just sucks… I thought about memcached, but I really need something that can be held open for long periods of inactivity and released if the client dies, which rules out both the infinite and the timed flavors of memcached value storage…

After some searching I found old, the Open Lock Daemon, which looked like a super good fit… until I dug into the communication protocol. What a nightmare just to lock a string… srsly. So, not being able to find anything (and apparently not being able to sleep until I had a satisfactory answer), I decided to write one. In PHP, naturally. Weighing in at 180 lines, I think it’s a pretty acceptable/workable first pass.
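
Before you go look at the code, here is roughly the sort of interaction I wanted to end up with. To be clear, this is just a hypothetical sketch: the port number and the "lock"/"check" wire commands are invented for illustration, not the daemon's real protocol.

// NOT the daemon's actual protocol -- the port and the commands below are
// made up purely to show the interface I was after.  The important property:
// the lock lives only as long as the client's connection, so a process that
// dies releases its locks automatically.

// Process A: acquire the lock and hold the connection open while working.
$daemon = stream_socket_client('tcp://127.0.0.1:12345', $errno, $errstr, 5);
fwrite($daemon, "lock my-arbitrary-string\n");
$reply = trim(fgets($daemon));            // e.g. "ok" or "busy"
// ... do work; closing $daemon (or dying) releases the lock ...

// Process B: just ask whether that identifier is still held by someone.
$checker = stream_socket_client('tcp://127.0.0.1:12345', $errno, $errstr, 5);
fwrite($checker, "check my-arbitrary-string\n");
$status = trim(fgets($checker));          // e.g. "locked" or "free"
fclose($checker);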

[ edit: code available here ]


Bigger Aint Always Better

Recently I was troubleshooting some inefficiencies in the jobs system’s locking and fetching queries at work. Like a good little boy, I originally came up with one index which satisfied all the queries that I needed to run against this particular critical table:

 (`completed`,`heartbeat`,`priority`,`datacenter`,`worker`) 
-- datetime,datetime,tinyint,varchar(16),varchar(255)

The query looked like this:

SELECT `id`,GET_LOCK(CONCAT('foo_',`id`),0) as mylock FROM `foo_jobs` 
WHERE `completed` = '0000-00-00 00:00:00' AND `worker` = '' AND `dirty` = 0 
AND IS_FREE_LOCK(CONCAT('foo_',`id`)) = 1 AND `priority` = '0' LIMIT 1

This worked really well as long as the number of jobs in the table remained small. But there was an event horizon of sorts, where after a certain number of rows (say 10k-25k) the query above took so long to find rows that the system could no longer catch up and would invariably fall behind… Apparently, the problem was that there was too much data in the key. Because the requirements for these queries had changed slightly (due to other scaling improvements), I was able to simplify and break the index into parts which vastly outperform the previous one.

KEY `get_a_job` (`priority`,`workerpid`)
-- tinyint, int
KEY `janitor` (`completed`,`heartbeat`,`worker`,`dirty`)
-- datetime,datetime,varchar(255),tinyint

This works because workerpid is enough to tell us whether a job definition is in process (or was, and has not been cleaned up yet), and it makes the following query run against a super small index…

SELECT `id`,GET_LOCK(CONCAT('foo_',`id`),0) as mylock FROM `foo_jobs`
WHERE `priority` = '0' AND `workerpid` IS NULL 
AND IS_FREE_LOCK(CONCAT('foo_',`id`)) = 1 LIMIT 1
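
For context, here is a hypothetical sketch of how a worker might use that query; this is not the actual jobs-system code, and the wpdb-style $db helper and the workerpid update are assumptions made for illustration.

// GET_LOCK(..., 0) returns 1 only to the first connection that grabs the
// name, so mylock tells us whether we actually won the row we selected.
$row = $db->get_row(
    "SELECT `id`, GET_LOCK(CONCAT('foo_', `id`), 0) AS mylock FROM `foo_jobs`
     WHERE `priority` = '0' AND `workerpid` IS NULL
     AND IS_FREE_LOCK(CONCAT('foo_', `id`)) = 1 LIMIT 1"
);
if ( $row && $row->mylock == 1 ) {
    // we hold the named lock; claim the row so the narrow
    // (`priority`,`workerpid`) index stops offering it to other workers
    $db->query("UPDATE `foo_jobs` SET `workerpid` = " . getmypid() .
               " WHERE `id` = " . (int) $row->id);
    // ... run the job ...
}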

If you’re interested in what dark monstrosity could possibly require queries like these (and are a brave PHP developer), I invite you to check out the jobs system.

Just what you need to know to write a CouchDB reduce function

Let’s say you have the CouchDB classes (located here) all compiled together and included into your test.php script. Let’s also say that you have created a database called “testing” with the built-in web UI. Finally, let us say that your test.php has the following code in it, which adds a record to the db every time it is run. (I know that the data in the document serves no useful purpose… but really I just want to figure out this map/reduce thing so that I can make awesome views… so this suffices sufficiently.)

require_once dirname( dirname( __FILE__ ) ) . '/includes/couchdb.php';
$couchdb = new CouchDB('testing', 'localhost', 5984);
$key = microtime();
$result = $couchdb->send(
    '/'.md5($key),
    'put',
    json_encode(
        array(
            "_id" => md5($key),
            "time" => $key,
            'md5' => md5($key),
            'sha1' => sha1($key),
            'crc' => crc32($key)
        )
    )
);
print_r($result->getBody(true));

After running the code a bunch of times you would end up with a bunch of documents which look more or less like this:

[picture-1: screenshot of one of the resulting documents]

Now let’s say you want to write a view that tells you what the first character of the _id is and how many documents share that first character. This is analogous to the following in MySQL:

SELECT LEFT(md5, 1) AS `lchar`, count(md5) FROM `md5table` GROUP BY `lchar`

Your map function is easy, because you don’t have any selection criteria; we simply process all rows:

function(doc){ emit(doc._id,doc); }

The reduce function is where the actual programming comes in… and it seems there aren’t many well-explained examples of exactly how to do this (I just brute-forced it by trial and error).

function(key, values, rereduce) {
    var output = {};
    if ( rereduce ) {
        // key is null, and values are values returned by previous calls
        //
        // see http://wiki.apache.org/couchdb/Introduction_to_CouchDB_views
        //
        // essentially we are taking the previously reduced view, and the
        // reduced view for new records, and we are reducing those two things
        // together.  Summarizing two summaries, essentially
        for ( var i in values ) {
            // here we have multiple prebuilt output objects and we're simply combining them
            // just like below we have an array with a numeric id and an output object
            //
            // retrieve a summary
            var vals = values[i];
            for ( var key in vals ) {
                // debugging
                // log(key);
                //
                // store in or increment our new output object
                if ( output[key] == undefined )
                    output[key] = vals[key];
                else
                    output[key] = output[key] + vals[key];
            }
        }
    } else {
        // key is an array, which we don't care about, and values are the
        // values returned by the map
        //
        // see http://wiki.apache.org/couchdb/Introduction_to_CouchDB_views
        //
        // we are taking each document and processing that, reducing it down
        // to a summary object (output) for each of the rows passed
        for ( var i in values ) {
            // we have an array, values, with numeric ids and document objects
            //
            // retrieve a document
            var doc = values[i];
            // get what we want from it, the first char of the md5
            var key = doc._id.substr(0, 1);
            // debugging
            // log( key + " :: " + doc._id );
            //
            // store or increment the output object
            if ( output[key] !== undefined )
                output[key] = output[key] + 1;
            else
                output[key] = 1;
        }
    }
    // done
    return output;
}

And in code, using a temporary view (if you used this view all the time you would want to make it permanent… but this is about how to lay out a reduce function, nothing more), the request code looks like this:

$view = array(
    'map' => 'function(doc){ emit(doc._id,doc); }',
    'reduce' => '
        function(key, values, rereduce) { 
            var output = {};
            if ( rereduce ) { 
                // key is null, and values are values returned by previous calls
                for ( var i in values ) {
                    var vals = values[i];
                    for ( var key in vals ) {
                        // log(key);
                        if ( output[key] == undefined )
                            output[key] = vals[key];
                        else
                            output[key] = output[key] + vals[key];
                    }
                }
            } else {
                // key is an array, which we don't care about, and values are the values returned by the map
                for ( var i in values ) {
                    var doc = values[i];
                    var key = doc._id.substr(0, 1);
                    // log( key + " :: " + doc._id );
                    if ( output[key] !== undefined )
                        output[key] = output[key] + 1;
                    else
                        output[key] = 1;
                }
            }
            return output;
        }
    '
    );
$result = $couchdb->send('/_temp_view', 'POST', json_encode($view) );
print_r($result->getBody(true));

Running that would give you output that looks like this:

stdClass Object
(
    [rows] => Array
        (
            [0] => stdClass Object
                (
                    [key] => 
                    [value] => stdClass Object
                        (
                            [0] => 15
                            [1] => 17
                            [2] => 16
                            [3] => 13
                            [4] => 27
                            [5] => 18
                            [6] => 26
                            [7] => 15
                            [8] => 18
                            [9] => 21
                            [a] => 12
                            [b] => 23
                            [c] => 20
                            [d] => 27
                            [e] => 28
                            [f] => 26
                        )

                )

        )

)
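
If you did use this view all the time, you would store it as a design document instead of POSTing to _temp_view on every request. Here is a rough sketch of that, reusing the same $view array and wrapper class from above; the design document and view names are made up, and I’m assuming the send() helper is happy to issue a GET with no body.

// save the map/reduce pair from $view above as a permanent view
// ("_design/md5stats" and "by_first_char" are names I just made up)
$ddoc = array(
    '_id'      => '_design/md5stats',
    'language' => 'javascript',
    'views'    => array(
        'by_first_char' => array(
            'map'    => $view['map'],
            'reduce' => $view['reduce'],
        ),
    ),
);
$result = $couchdb->send('/_design/md5stats', 'put', json_encode($ddoc));
print_r($result->getBody(true));

// from then on, query the stored view by name instead of /_temp_view
// (assuming the wrapper allows a GET with no body)
$result = $couchdb->send('/_design/md5stats/_view/by_first_char', 'get');
print_r($result->getBody(true));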

I hope this helps somebody out.

a dumbed down version of wpdb for sqlite

I’ve been working, gradually, on a project using an sqlite3 database (for its convenience) and found myself missing the clean elegance of wpdb… so I implemented it. It was actually really easy to do, and I figured I would throw it up here for anyone else wishing to use it. The functionality that I built this around is obtainable here: http://php-sqlite3.sourceforge.net/pmwiki/pmwiki.php (don’t freak… it’s in apt…)

With this I can focus on the SQL, which is different enough, and not fumble over function names and such:

$db = new sqlite_wpdb($dbfile, 3);
var_dump($db->get_results("SELECT * FROM `mytable` LIMIT 5"));

The code is below… and hopefully not too mangled…

<?php

class sqlite_wpdb {

        var $version = null;
        var $db = null;
        var $result = null;
        var $error = null;

        function sqwpdb($file, $version=3) { 
                return $this->__construct($file, $version); 
        }

        function __construct($file, $version=3) {
                $function = "sqlite{$version}_open";
                if ( !function_exists($function) )
                        return false;
                if ( !file_exists($file) )
                        return false;
                if ( !$this->db = @$function($file) )
                        return false;
                $this->version = $version;
                $this->fquery = "sqlite{$this->version}_query";
                $this->ferror = "sqlite{$this->version}_error";
                $this->farray = "sqlite{$this->version}_fetch_array";
                return $this;
        }

        function escape($string) {
                return str_replace("'", "''", $string);
        }

        function query($query) {
                if ( $this->result = call_user_func($this->fquery, $this->db, $query) )
                        return $this->result;
                $this->error = call_user_func($this->ferror, $this->db);
                return false;
        }

        function array_to_object($array) {
                if ( ! is_array($array) )
                        return $array;

                $object = new stdClass();
                foreach ( $array as $idx => $val ) {
                        $object->$idx = $val;
                }
                return $object;
        }

        function get_results($query) {
                if ( !$this->query($query) )
                        return false;
                $rval = array();
                while ( $row = $this->array_to_object(call_user_func($this->farray, $this->result)) ) {
                        $rval[] = $row;
                }
                return $rval;
        }

        function get_row($query) {
                if ( ! $results = $this->get_results($query) )
                        return false;
                return array_shift($results);
        }

        function get_var($query) {
                return $this->get_val($query);
        }

        function get_val($query) {
                if ( !$row = $this->get_row($query) )
                        return false;
                $row = get_object_vars($row);
                if ( !count($row) )
                        return false;
                return array_shift($row);
        }

        function get_col($query) {
                if ( !$results = $this->get_results($query) )
                        return false;
                $column = array();
                foreach ( $results as $row ) {
                        $row = get_object_vars($row);
                        if ( !count($row) )
                                continue;
                        $column[] = array_shift($row);
                }
                return $column;
        }

}

?>
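
And for completeness, a quick made-up example of the wpdb-style calls this gives you; the file, table, and column names here are hypothetical.

<?php
require_once 'sqlite_wpdb.php';   // wherever you keep the class above

$db = new sqlite_wpdb('/path/to/data.sqlite3', 3);

// a whole result set as an array of row objects
$rows  = $db->get_results("SELECT * FROM `mytable` ORDER BY `id` DESC LIMIT 5");

// a single row, a single column, and a single value
$row   = $db->get_row("SELECT * FROM `mytable` WHERE `id` = 1");
$names = $db->get_col("SELECT `name` FROM `mytable`");
$count = $db->get_var("SELECT COUNT(*) FROM `mytable`");

// escaping user input before interpolating it into a query
$safe    = $db->escape($_GET['name']);
$matches = $db->get_results("SELECT * FROM `mytable` WHERE `name` = '{$safe}'");
?>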

Writing your own shell in php

I’ve always wanted to write my own simple shell in PHP. Call me a glutton for punishment, but it seems like something that a lot of people could use to be able to do. If your web app had a command line interface for various things… like looking up stats, or users, or suspending naughty accounts, or whatever… wouldn’t that be cool and useful? Talk about geek porn. Anyway, this morning I got around to tinkering with the idea, and here is what I came up with. It’s rough, and empty, but it’s REALLY easy to extend and plug into any PHP application.

apokalyptik:~/phpshell$ ./shell.php

/home/apokalyptik/phpshell > hello

hi there

/home/apokalyptik/phpshell > hello world

hi there world

/home/apokalyptik/phpshell > cd ..

/home/apokalyptik/ > cd phpshell

/home/apokalyptik/phpshell > ls

shell.php

/home/apokalyptik/phpshell > exit

apokalyptik:~/phpshell$ ./shell.php

See the source here: shell.phps
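
If you just want the flavor of it, here is a minimal sketch of the same idea. This is not the linked shell.phps, just an illustration of the read/dispatch loop; the command names are arbitrary.

#!/usr/bin/env php
<?php
// Minimal sketch: commands are plain PHP functions named cmd_<word>,
// so adding a command to the shell means adding a function.

function cmd_hello($args) {
    echo rtrim('hi there ' . implode(' ', $args)) . "\n";
}

function cmd_cd($args) {
    if ( empty($args) || !@chdir($args[0]) )
        echo "no such directory\n";
}

function cmd_ls($args) {
    foreach ( scandir(getcwd()) as $entry ) {
        if ( $entry[0] != '.' )
            echo $entry . "\n";
    }
}

while ( true ) {
    echo getcwd() . ' > ';
    $line = fgets(STDIN);
    if ( $line === false )      // ctrl-d
        break;
    $line = trim($line);
    if ( $line === '' )
        continue;
    if ( $line == 'exit' )
        break;
    $parts   = preg_split('/\s+/', $line);
    $command = 'cmd_' . array_shift($parts);
    if ( function_exists($command) )
        $command($parts);
    else
        echo "unknown command\n";
}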

so-you-wanna-see-an-image

We’ve been asked how we manage serving files from Amazon’s very cool S3 service at WordPress.com… This is how. (This covers a requested image already stored on S3, not the upload-to-S3 process.)

A request comes into Pound for a file. Pound hashes the hostname (via a custom patch which we have not, but may, release) to determine which of several backend servers the request should hit, and forwards the request to that server. This, of course, means that a given blog always serves from the same backend server. The only exception to the aforementioned rule is if that server is, for some reason, unavailable, in which case it picks another server to serve that hostname from temporarily.

The request then comes into varnishd on the backend server. The varnishd daemon checks its 300GB file cache and (for the sake of this example) finds nothing (hey, new images are uploaded all the time!). Varnishd then checks with the web server (running on the same machine, just bound to a different IP/port) and that request is handled by a custom script.

So, an http daemon on the same backend server runs the file request. The custom script checks the DB to gather information on the file (specifically which DCs it is in, its size, mod time, and whether it’s deleted or not); all this info is saved in memcached for 5 minutes. The script then increments and checks the “hawtness” (term courtesy of Barry) of the file in memcached. If the file has been accessed more than a certain number of times it is deemed “hawt”, and a special header is sent with the response telling varnishd to put the file into its cache; when that happens the request is served directly by varnishd at the step described in the previous paragraph, and never hits the httpd or this script again (or at least not until the cache entry expires). At this point, assuming the file should exist (deleted = 0 in the files db), we fetch the file from a backend source.
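
To make the hawtness idea concrete, here is a rough sketch of that counting logic in PHP. This is not the actual WordPress.com code; the key name, the threshold, and the response header are all made up for illustration, and I’m assuming the pecl Memcache extension.

// Rough sketch of the hit-counting idea -- not the real script.  The key
// name, the 50-hit threshold, and the X-Cache-This header are invented.
$memcache = new Memcache();
$memcache->connect('127.0.0.1', 11211);

$request_path = $_SERVER['REQUEST_URI'];
$counter_key  = 'hawtness:' . md5($request_path);

$hits = $memcache->increment($counter_key);
if ( $hits === false ) {
    // first hit we've seen for this file: start a counter that expires
    // on its own, so cold files don't accumulate in memcached
    $memcache->set($counter_key, 1, 0, 300);
    $hits = 1;
}

if ( $hits > 50 ) {
    // the file is hawt: tell varnishd (via a header it has been configured
    // to look for) that this response is worth keeping in its cache
    header('X-Cache-This: yes');
}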

Which backend source depends on where the file is available. The order of preference is as follows: always fetch from Amazon S3 if the file lives there (no matter what; the following preferences only ever come into play if, for some reason, s3 = 0 in the files db), and if that fails, fetch from the one file server we still have (which has larger, slower disks and is used for archiving purposes and fault tolerance only).

After fetching the file from the backend, the custom script hands the data and programmatically generated headers to the http daemon, which hands the data to varnishd, varnishd hands the data to Pound, Pound hands the data to the requesting client, and the image appears in the web browser.

And there was much rejoicing (yay.)

For the visual people among us who like visuals and stuff… (I like visuals…) here goes…

PHP CLI Status Indicator

Most times when people write command line scripts they just let the output flow down the screen as a status indicator, or just figure “it’s done when it’s done.” But sometimes it would be nice to have a simple, clean status indicator, allowing you to monitor progress and gauge time-to-completion. This is actually very easy to accomplish: simply use \r (a carriage return) instead of \n (a newline) in your output, so each update overwrites the previous one. Obviously the example below is very simplified, and this can be applied in a much more sophisticated fashion. But it works.

$row_count = get_total_rows_for_processing();
$limit = 10000;
echo "\r\n[  0%]";
for ( $i = 0; $i <= $row_count; $i = $i + $limit ) {
  $query = "SELECT * FROM table LIMIT {$limit} OFFSET {$i}";
  // do whatever
  // work out how far along we are, then pad the number so the indicator
  // keeps a fixed width as it gets overwritten in place
  $pct = round((($i + $limit) / $row_count) * 100);
  if ( $pct < 10 ) {
    echo "\r[  $pct%]";
  } else {
    if ( $pct < 100 ) {
      echo "\r[ $pct%]";
    } else {
      echo "\r[$pct%]";
    }
  }
}
echo "\r[100%]\r\n";