I couldn’t have put it better myself

Yesterday's Penny Arcade hit it on the nose. OS X IS more convenient. It IS worth changing your opinion and telling everybody that all that Mac trash talking was about OS 9, but that OS X radically altered the very fabric of the universe, and that you now have to take it all back: the Mac is now a truly reformed beast, using its nefarious powers only for the purposes of good and justice (and the occasional profit margin)! Think Ghost Rider meets BSD :).

All geeky references aside: I was once a Mac hater. Now I'm a Mac lover. And you know what? I'm okay with that!

MacFUSE: no, it's not an Apple venture into the hardware aisle!

The details are here: the MacFUSE Google Code project page.
And it looks like a good thing indeed! Being able to use Linux FUSE file systems on OS X will make for a wild ride, and really open a lot of doors.

Here are just a few FUSE-based file systems:

Needless to say, the number of things that have been (and can be) done with FUSE makes its adoption on the Mac a very exciting idea.

Amazon EC2 Cookbook: Startup Flexibility

Disclaimer: these code segments have not really been "tested" verbatim. I assume anyone who is successfully bundling EC2 images (that run) will know enough about copying shell scripts off blogs to test for typos, etc.! Oh, and sorry for the lack of indentation on these… I'm just not taking the time 🙂

I've been searching for a way of using EC2 in a production environment: something that keeps things as simple as possible, but also eliminates the need for unnecessary (and extremely tedious) image rebuilding both during and after the development process (development of the AMI, not the service). This is what I've come up with.

Step 1: our repository
Create a Subversion repository which is web-accessible and password-protected (of course), laid out like so:

  • ami/trunk/init.sh
  • ami/trunk/files/
  • ami/tags/bootstrap.sh
  • ami/tags/
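The "web accessible and password protected" part could be handled by Apache with mod_dav_svn; a hypothetical fragment (the paths, location, and realm name here are invented for illustration) that matches the `--username`/`--password` fetch used by bootstrap.sh:

```
# Hypothetical Apache + mod_dav_svn config serving the repository over
# HTTP with basic auth. Adjust paths to your own layout.
<Location /svn/ami>
    DAV svn
    SVNPath /var/svn/ami
    AuthType Basic
    AuthName "AMI Bootstrap"
    AuthUserFile /etc/httpd/svn-auth-file
    Require valid-user
</Location>
```

The password file itself would be created with `htpasswd -c /etc/httpd/svn-auth-file username`.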

ami/tags/bootstrap.sh would read:



## Prepare the bootstrap directory
echo -en "\tPreparing... "
if [ -d /mnt/ami ]; then
    rm -rf /mnt/ami
fi
mkdir -p /mnt/ami/
rc=$?
if [ $rc -ne 0 ]; then exit $rc; else echo "OK"; fi

## Populate the bootstrap directory
echo -en "\tPopulating... "
svn export --force \
    --username $BootUser \
    --password $BootPass \
    $BootProtocol://$BootHost/$BootLocation/ \
    /mnt/ami/ 1>/dev/null 2>/dev/null
rc=$?
if [ $rc -ne 0 ]; then exit $rc; else echo "OK"; fi
chmod a+x /mnt/ami/init.sh

## Hand off to the init script
echo -e "\tHanding off to init script..."
/mnt/ami/init.sh
exit $?
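One subtlety worth calling out in scripts like this that check `$?` after each step: `$?` is reset by every command, including the `[ ]` test itself, so reading it again inside the `then` branch gives you the test's exit status, not the failed command's. A tiny standalone demo of the pitfall and the capture-it-first fix:

```shell
#!/bin/bash
# Demo of the $? pitfall: $? is reset by every command, including the
# [ ] test itself.
false                     # sets $? to 1
if [ $? -ne 0 ]; then
    lost=$?               # [ 1 -ne 0 ] succeeded, so this captures 0!
fi
echo "lost=$lost"

false                     # sets $? to 1 again
rc=$?                     # capture immediately, before anything else runs
if [ $rc -ne 0 ]; then
    kept=$rc
fi
echo "kept=$kept"
```

This prints `lost=0` but `kept=1`, which is why saving the status into a variable right after the command is the safer habit.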

ami/trunk/init.sh would read something like:

## Filesystem Additions/Changes
echo -en "\t\tSynchronizing System Files... "
cd /mnt/ami/files/ || exit 1
for i in $(find . -type d); do
    mkdir -p "/$i"
    echo -en "d"
done
for i in $(find . -type f); do
    cp -f "$i" "/$i"
    echo -en "f"
done
echo " OK"
## Any Commands Go Here
## All Done!
exit 0
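The two find loops amount to a crude overlay copy: recreate the staging directory tree on the target, then copy the files over it. A safe way to see the same logic in action is to point it at a scratch directory instead of /:

```shell
#!/bin/bash
# Overlay-copy sketch: mirror a staging tree into a target tree, the same
# way init.sh mirrors /mnt/ami/files/ onto /. Temp dirs make it harmless
# to run. (Note: the word-splitting pattern breaks on paths containing
# whitespace, just as in the original.)
src=$(mktemp -d)
dst=$(mktemp -d)
mkdir -p "$src/etc/myapp"
echo "key=value" > "$src/etc/myapp/app.conf"

cd "$src" || exit 1
for i in $(find . -type d); do mkdir -p "$dst/$i"; done
for i in $(find . -type f); do cp -f "$i" "$dst/$i"; done

cat "$dst/etc/myapp/app.conf"   # the file now exists in the target tree
```

Because find emits paths like `./etc/myapp`, prefixing them with the target root reproduces the tree exactly.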

Step 2: configure your AMI

  • create /etc/init.d/servicename
  • chkconfig --add servicename
  • chkconfig --level 345 servicename on
  • /etc/init.d/servicename should look something like:

    #!/bin/sh
    # chkconfig: - 85 15
    # description: EC2 Bootstrapping Process
    case "$1" in
    start)
        /usr/bin/wget \
            -o /dev/null -O /mnt/bootstrap.sh \
            http://user:[email protected]/ami/tags/bootstrap.sh
        /bin/bash /mnt/bootstrap.sh
        exit 0
        ;;
    stop)
        # nothing to stop; bootstrapping is a one-shot process
        ;;
    restart)
        $0 start
        ;;
    *)
        echo "Usage: $0 {start|stop|restart}"
        exit 1
        ;;
    esac
    exit 0

And now when the AMI boots itself up, we hit 85 during runlevel 3 bootup (well after network initialization), servicename starts, and the bootstrapping begins. We're then able, with our shell scripts, to make a great deal of changes to the system after the fact. These changes might be bugfixes, or they might be setup processes to reconstitute a database and download the latest code from a source control repository located elsewhere… They might be registration via a DNS API… anything at all.

The point is that some flexibility is needed, and this is one way to build that in!

I Don't Know Whether To Be Amused, Enthralled, Or Very Very Frightened


So I'll let you judge for yourselves.

Now, to be fair to MySQL:

You could use the MySQL binary logs, stored on an infinidisk, to accomplish much the same thing. However, the fact that the PostgreSQL WALs are copied automatically by the database server, with no nasty hacks needed, makes PostgreSQL a much cleaner first choice IMHO. Of course, I've not tested this… yet.
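For reference, the binary-log route would start with something like this my.cnf fragment (the S3-backed mount point is hypothetical), though you'd still need the "nasty hacks" to rotate, ship, and replay the logs yourself:

```
# my.cnf fragment: write binary logs onto an S3-backed FUSE mount.
# /mnt/s3 is a hypothetical infinidisk-style mount point.
[mysqld]
log-bin = /mnt/s3/binlog/mysql-bin
expire_logs_days = 7
```

Restoring then means replaying the logs with `mysqlbinlog` by hand, which is exactly the manual step PostgreSQL's archiver avoids.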

EC2 S3 PGSQL WAL PITR Infinidisk: The backend stack that just might change web services forever!

I have written mostly about MySQL here in the past. The reason for this is simple: MySQL is what I know. I have always been a die-hard "everything in its place and a place for everything" fanatic. I'll bash Microsoft with the best of them, but I still recognize their place in the market. And now it's time for me to examine the idea of PostgreSQL, and this blog entry about Amazon Web Services is the reason. I don't claim to agree with everything said there… as a matter of fact, I tend to disagree with a lot of it… but I saw "PS: Postgresql seems to win hands down over MySQL in this respect; WAL is trivial to implement with Postgresql" and thought to myself: "hmm, what's that?" I found the answer in the PostgreSQL documentation on Write Ahead Logging (WAL), and it all made sense! The specific end goal here is Continuous Archiving and Point-In-Time Recovery (PITR). This plus the S3 infinidisk certainly makes for an interesting concept, and one that I am eager to try out! I imagine that the community version of infinidisk would suffice, since we're not depending on random access… that ought to make for some chewy goodness!
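For the curious, continuous archiving is mostly configuration on the PostgreSQL side; a minimal sketch, assuming an infinidisk-style S3 mount at the hypothetical path /mnt/s3:

```
# postgresql.conf fragment: ship each completed WAL segment to the
# S3-backed mount. %p is the segment's path, %f its file name.
archive_command = 'cp %p /mnt/s3/wal_archive/%f'

# recovery.conf fragment used at restore time to pull segments back:
# restore_command = 'cp /mnt/s3/wal_archive/%f %p'
```

Since the server invokes archive_command itself as each WAL segment fills, the copies to S3 happen continuously with no cron jobs or dump windows.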


News: Aug 24, 2007: Michael T. provided a patch to fix some date issues he was having with Amazon AWS. I have not verified it yet, but seeing as I'm not precisely sure when I will be able to, I figured I would put his code up here for you to download if you're experiencing authentication issues like he was! Get it here: Patched Storage3 Class

Current Version: 1.0.1


• 1.0.0
  • Initial Public Release
• 1.0.1
  • Added function fileExists($s3bucket, $s3file)
  • Added support for listing bucket files for buckets with over 1,000 files
  • Added contributed function setACL($s3bucket, $s3file, $shorthand='public-read')
  • Added support for setting an object's ACL during the upload process
  • Added grabbing of response headers (which contain a LOT of useful information)
  • s3test.php includes an example of using headers to verify the integrity of a file stored in S3 via md5 hash

This is a revised version of the file posted here: http://blog.apokalyptik.com/storage3.phps and includes:

• A local, modified version of the PEAR HTTP/Request package
• A local copy of all other required PEAR packages
• Documentation (under development… docs aren't my strong suit)
• An example application (s3test.php)


I'm always excited when we see something new for Amazon Web Services.


This certainly looks very interesting! I can't help but wonder whether the memory caching in the enterprise version is enough to run small MySQL instances on. At the very least, being able to mysqldump regularly to a file directly on S3 would be useful, as opposed to dumping to a file, splitting it into chunks, and copying the chunks off to S3.
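The chunking chore being avoided looks roughly like this, simulated here with a dummy file standing in for real mysqldump output so it's safe to run:

```shell
#!/bin/bash
# The dump -> split -> upload -> rejoin routine, simulated with a dummy
# file instead of a real mysqldump.
work=$(mktemp -d)
seq 1 10000 > "$work/dump.sql"                       # stand-in for mysqldump
split -b 8192 "$work/dump.sql" "$work/dump.part."    # chunk for upload
# ...each $work/dump.part.* would be uploaded to S3 here...
cat "$work"/dump.part.* > "$work/dump.rejoined.sql"  # reassemble on restore
cmp -s "$work/dump.sql" "$work/dump.rejoined.sql" && echo "intact"
```

With a writable S3-backed mount, all of that collapses to a single `mysqldump ... > /mnt/s3/backup.sql` style command, which is the appeal.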

Perhaps I'll contact them next week and see if they'll let me take it for a test drive?!