Bash wizardry: Command Line Switches

If you’re like me (and God help you if you are), you write a lot of bash scripts. When something comes up, bash is a VERY handy language to use because it’s a) portable (between almost all *nixes), b) lightweight, and c) flexible (thanks to the plethora of Unix commands which can be piped together). One large reason people prefer Perl (or some other language) is that it’s more flexible, and one of those cases is processing command line switches. Commonly, bash scripts are coded in a way which requires each switch to be given as a specific positional argument to the script. This makes the script brittle: you CANNOT leave out switch $2 if you plan to use switch $3. Allow me to help you get around this rather nasty little inconvenience! (Note: this deals with switches ONLY, *NOT* switches with arguments!)
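For contrast, here’s a minimal sketch of the brittle positional style I’m arguing against. The script, flag names, and host are made up for illustration:

```shell
#!/bin/bash
# Brittle positional parsing: the -v flag MUST be argument 2 and the -d
# flag MUST be argument 3, so "-d" alone in slot 2 is silently ignored.
parse_positional() {
  host="$1"
  if [ "$2" = "-v" ]; then verbose=1; else verbose=0; fi
  if [ "$3" = "-d" ]; then dry_run=1; else dry_run=0; fi
}

parse_positional 127.0.0.1 -d              # user only wants a dry run...
echo "verbose=$verbose dry_run=$dry_run"   # prints verbose=0 dry_run=0
```

Note how the `-d` the user asked for is lost entirely because it landed in the slot reserved for `-v`.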

check_c_arg() {
  # $1 is the switch to look for; the remaining arguments are the
  # caller's "$@". Returns 1 if the switch is present, 0 if not.
  switch="$1"
  shift
  for i in "$@"
    do
      if [ "$i" = "$switch" ]
        then
          return 1
      fi
  done
  return 0
}

This beautiful little bit of code will allow you to take switches in ANY order. Simply set up a script like this:

#!/bin/bash
host="$1"

check_c_arg() {
  # $1 is the switch to look for; the remaining arguments are the
  # caller's "$@". Returns 1 if the switch is present, 0 if not.
  switch="$1"
  shift
  for i in "$@"
    do
      if [ "$i" = "$switch" ]
        then
          return 1
      fi
  done
  return 0
}

check_c_arg "-v" "$@"
cfg_verbose=$?
check_c_arg "-d" "$@"
cfg_dry_run=$?
check_c_arg "-h" "$@"
cfg_help=$?


if [ $cfg_help -eq 1 ]
  then
    echo -e "Usage: $0 host [-v] [-d] [-h]"
    echo -e "\t-v\tVerbose Mode"
    echo -e "\t-d\tDry run (echo the command, do not run it)"
    echo -e "\t-h\tPrint this help message"
    exit 1
fi

if [ $cfg_dry_run -eq 1 ]
  then
    echo "ping -c 4 $host"
  else
    if [ $cfg_verbose -eq 1 ]
      then
        ping -c 4 "$host"
      else
        ping -c 4 "$host" >/dev/null 2>&1
    fi
fi

In the above all of the following are valid:

  • 127.0.0.1 -v -d
  • 127.0.0.1 -d -v
  • 127.0.0.1 -v
  • 127.0.0.1 -d
  • 127.0.0.1 -h
  • 127.0.0.1 -h -v -d
  • 127.0.0.1 -h -d -v
  • 127.0.0.1 -h -v
  • 127.0.0.1 -h -d
  • 127.0.0.1 -v -h -d
  • 127.0.0.1 -d -h -v
  • 127.0.0.1 -v -h
  • 127.0.0.1 -d -h
  • 127.0.0.1 -v -d -h
  • 127.0.0.1 -d -v -h
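To see what the script is keying off of, here’s a self-contained sketch: the function again, plus two one-liner checks and the status each returns. The example host and flags are made up:

```shell
#!/bin/bash
# check_c_arg returns 1 if the switch ($1) appears anywhere among the
# remaining arguments, 0 if it does not.
check_c_arg() {
  switch="$1"
  shift
  for i in "$@"
    do
      if [ "$i" = "$switch" ]
        then
          return 1
      fi
  done
  return 0
}

check_c_arg "-v" 127.0.0.1 -d -v; echo "-v present? $?"   # prints -v present? 1
check_c_arg "-x" 127.0.0.1 -d -v; echo "-x present? $?"   # prints -x present? 0
```

Because the function scans the whole argument list, the position of the switch never matters.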

I hope this helps inspire people to take the easy (and oftentimes more correct) path when faced with a problem which requires a solution, but not necessarily a terribly complex one.

Cheers!
DK

Google Version Control

http://code.google.com/

Not at all a surprise offering from Stein 😀 I’m sure it will be top notch. (Remember, it’s still beta right now; of course it isn’t feature complete!)

What this could give Google is a distinct inroad to the emerging generation of web and application developers. What better way to know who to hire for a programming position than to have their entire development history available at a moment’s notice? Will this be the beginning of Google knocking on the doors of candidates that THEY want, rather than candidates seeking Google out?

hmm.

The Failure of Modern Package Management

Package management, in all its necessary infamy/glory, is a joke. The problem with designing a package management system is the various conflicting needs of different groups of people.

Group 1: “I write documents, use e-mail, and surf the web.”

This group wants stability! They have no desire to be on the bleeding edge, because the bleeding edge is plagued with blood, and blood frightens this group. “We just want it to work like it always has” is their motto, and the only time they want to install something is when they can’t view a particular web page. For this group, spending hours and hours managing and updating packages is a frustrating ordeal! Let’s face it: unless it’s going to cause my computer to bleed silicon out the floppy drive, I don’t really care what updates it needs… A lot of the time we techies try to address these people with a “protect them from themselves” methodology. We write up our “foolproof” do-this-once-a-week task list. But this fails the first time a person forgets to do it (leading to forgetting the next time, and so on) or, worse, when something in the process changes and they can no longer follow the directions.

Usually, besides a basic firewall and a virus scanner, what they need protection from is their son/daughter/niece/nephew… If I had a dime for every time I heard “my {insert relative or friend here} came over and installed a bunch of stuff and now it doesn’t work. {person} is really good with computers, and I’m not.” The rub is that their very own statement is self-contradictory… the person who is good with computers… broke yours? Yea… uh huh… stop letting little Jimmy use the damn thing.

So this group is plagued by a constant stream of changes to their routines, which leaves them frustrated, vulnerable, and afraid (though not many of them are sure enough of themselves to admit it… they’re afraid) that the next time they update their packages they’ll have to spend months figuring out how to work around the new changes just to do the things they’ve been doing for years… in the same way they’ve been doing them…

Group 2: “Have you played {insert new video game title here}?”

This group’s motto is something along the lines of “hold on, I need to reboot again.” These folk are constantly in search of bigger, better, newer, and faster. They’re the ones paying $700 for a video card preorder months in advance. This group (because of their 0-day hardware/software nature) is constantly plagued, and has grown accustomed to saying things like “when my video card manufacturer releases the next version of the driver, it’ll fix that incompatibility between the $60 game I just bought and the $700 video card I just bought… They say it should only be 2 weeks… I’m stoked, can’t wait. Until then I’ll be in {some other game} getting my frag fix there.”

Ironically, this is the group most at ease with modern package management… It’s been their way of life so long… they just accept it. These adrenaline junkies put up with enough crap in one month of using their computer to make Group 1 want to throw theirs off a twelve story building. But the fact that they can no longer see the forest for the trees doesn’t mean they aren’t being stepped all over. I’m not sure when it became OK to use your most loyal (and profitable) customers as guinea pigs (instead of, you know, real grown-up, big-kid quality control), but that’s what’s happened. It’s suddenly a good thing to ship a product before it’s ready… because someone else will tell you what’s wrong with it… nice.

Group 3: “One hundred and ten more workstations to go”

This is the group who probably most loathes the daily, weekly, monthly required updates to products… Oh, they understand that there’s a new security hole and it’s got to be fixed… that’s a given in corporate {insert country here}… What gets their goat is that they can’t just upgrade what’s broken… nope… lots of new features, functionality, and dependencies means every time they fix a flaw they leave another feature for Lucy and Bob in accounting to start playing with (which will naturally leave a gigantic hole in their filesystem… or worse). You’d be amazed how one small change to the word processing program… or spreadsheet program… can wreak havoc across an entire business… but the little changes which cause the occasional all-nighter with the tape backup unit are nothing compared to those dreaded words “end of life”… Those words cause months of lost sleep due to relentless nightmares… The fact that corporate {country} actively weighs the pros and cons of sticking with a set of software which is no longer supported by anyone tells you, without any room for misinterpretation, that something is desperately wrong with package (and OS) management in this day and age…

Group 4: “The environment I need to run this is pretty custom”

This is the group whose motto is more or less “I have to get things done, and sometimes I have to go to extremes to do it.” They have some pretty odd requirements to operate effectively. Maybe they’re maintaining legacy systems, or perhaps they’re developing their own and have found that the only way to do X is with a special-case Y. Whatever it is, “upgrade everything to upgrade anything” is simply NOT an option. And realistically it shouldn’t have to be! If you, for some reason, simply have to use an older version of, say, glibc… and the absolute latest version of some other package… you’re stuck in maintenance hell… Your only option is to put a lot of manual effort into maintaining your packages… And if you have to do this in a server farm… well… may a higher power help you!

So what is the answer?

Like so many things in life, this is the point where we have to think about two things: first, the answers might already be out there; and second, how beneficial is reinventing a particular wheel?

What’s already out there…

Binary distribution: Windows .exe files, RPM, DEB, TGZ… these work, but all have dependencies! And anyone who’s managed a Red Hat system knows what dependency hell can be like (tell me why, again, I need a printer daemon on my web server?). There are also disadvantages for the producer: if your software comes with a lot of optional functionality, or runs on multiple platforms, you have to support, maintain, and distribute multiple packages to satisfy your user base. Also, these kinds of packages tend to require an OS reboot for anything halfway important… And before you Linux people give me any grief: I know at least a few of you out there have done something stupid like a batch software update and didn’t realize something critical was replaced, which required at least some manual recovery after the first reboot (what… a month after the update?!). That is MORE dangerous than an immediate reboot… At least with an immediate reboot you have a good idea of which packages caused the problems… but if you’ve updated once per week for 8 weeks and then rebooted because you moved your PC across the room… which package was it, exactly, that is preventing it from coming back up? Now… would your grandma be able to handle this as well as you have? No…

Compile from source: Ahh yes, Gentoo, BSD, and Slackware users shine here. (And before you start hating, or those of you who know me start nagging: I *AM* a Gentoo/BSD person, so this is telling myself off a bit here too.) You, the master of your destiny, can build your OS to fit any necessary use! Of course, Slackware is also akin to having to carve your own little houses out of plastic before you can play Monopoly… BSD suffers from (compared to Windows and Linux) a lack of choice. And Gentoo suffers from the need to be bleeding edge (but… just because this ebuild says it needs the latest GCC doesn’t mean it won’t compile without it… grrr). If you’re good with a couple of scripting languages and have some patience and ingenuity, you can thrive in this environment… but… mom won’t…

Binary patching… *cough*

The answer, I think, lies in a paradigm shift brought on by server-side web applications. Long gone are the days of bulky CGI compiled from something like C or C++ source code. Web developers have realized the potential of interpreted languages, and they have set the web free. Nowadays, thanks to interpreted languages like PHP, Perl, and Python, the internet you know and love is a fast-paced, dynamic carnival of new sites and sounds! The reason that things have progressed more quickly with these languages as opposed to the compiled ones?

They must be faster, right? Nope! Compiled code usually runs quite a bit faster than a script does…

They’re more powerful? Not really… in the right hands, C/C++ or the like can absolutely dazzle you and show you that nothing is beyond the true master programmer’s reach.

So… more portable? That’s half of it! When you write a PHP, Perl, or Python application, there’s a pretty darn good chance that the code you wrote will work well no matter where you run it… you’ll get the same results all the time. Add GTK to the mix and you have a scripting language capable of being a real client-side application platform…

And… easier to develop? You guessed it! The beauty of debugging and adding features in an interpreted language is the ability to develop one piece at a time without all that nasty mucking about with compiling, linking, etc., etc.… Just run the code… tweak, run, tweak… The old model was tweak, compile, wait, wait (sometimes wait a LOT longer), run, tweak (repeat). Naturally, removing the compiling and waiting has sped up development quite a lot.
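That edit-run loop is easy to demonstrate. A minimal sketch (the filenames are made up; assumes only a POSIX shell and sed):

```shell
#!/bin/sh
# The interpreted workflow: edit, run, edit, run -- no compile/link step.
cat > hello.sh <<'EOF'
echo "hello, version 1"
EOF
sh hello.sh                                   # runs immediately, no build wait

# "Tweak" the script and run it again -- the change is live instantly.
sed 's/version 1/version 2/' hello.sh > hello2.sh
sh hello2.sh
```

With a compiled language, a build-and-link step (and its wait) would sit between every one of those tweaks.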

So it’s my humble opinion that the empty promise that is package management will necessarily slip into obscurity, and the reign of interpreted code for general consumption will become more and more commonplace. Today’s fast machines, and engine-side compiler caches, make this feasible on the end user’s machine (where it was not realistic on your 286 to compile code each time it was run, your P4 3GHz with 2GB RAM is readily capable). It will probably soon be very commonplace to have php4, php5, php6, perl5, python22, and python23 interpreters on your system, and your applications from day to day will be run and maintained much in the way of the modern web application.

A class for normalizing mixed value containers

There are times (like when dealing with SimpleXML) when you just wish you had an array you could iterate over, whatever your reasons (especially when dealing with variably structured, multidimensional, unpredictable input). This is also a good example of recursive functions, simplification of difficult specialized problems, and how one might use a class to accomplish large tasks in an encapsulated fashion.
[coolcode]
<?php

/**
 * A class for [N]ormalizing a [C]ontainer to an array
 *
 * This class will take any type of variable container and build a normalized
 * array from it. For example, it will take the output of simplexml_load_string
 * and render it all into a multidimensional array.
 */
class nc2array {

  private $original_container;
  private $normalized_container;

  /*
   * Takes a variable and builds a normalized container out of it
   */
  function __construct($container) {
    $this->original_container = $container;
    if ( is_object($container) ) {
      $this->normalized_container = $this->recursive_parse_object($container);
    } elseif ( is_array($container) ) {
      $this->normalized_container = $this->recursive_parse_array($container);
    } else {
      $this->normalized_container = array($container);
    }
  }

  /*
   * Returns the normalized array built by the constructor (a constructor's
   * return value is discarded in PHP, so we expose the result here)
   */
  function get_normalized() {
    return $this->normalized_container;
  }

  /*
   * Takes an array and parses it recursively, passing objects off to
   * recursive_parse_object
   */
  function recursive_parse_array($array) {
    $rval = array();
    foreach ( $array as $idx => $val ) {
      if ( is_array($val) ) {
        $rval[$idx] = $this->recursive_parse_array($val);
      } elseif ( is_object($val) ) {
        $rval[$idx] = $this->recursive_parse_object($val);
      } else {
        $rval[$idx] = $val;
      }
    }
    return $rval;
  }

  /*
   * Takes an object and parses it recursively, passing arrays back off to
   * recursive_parse_array
   */
  function recursive_parse_object($obj) {
    $rval = array();
    foreach ( get_object_vars($obj) as $idx => $val ) {
      if ( is_object($val) ) {
        $rval[$idx] = $this->recursive_parse_object($val);
      } elseif ( is_array($val) ) {
        $rval[$idx] = $this->recursive_parse_array($val);
      } else {
        $rval[$idx] = $val;
      }
    }
    return $rval;
  }
}

[/coolcode]

I’ve become a MySQL snob…

I find that, after keeping alive a database whose size I can’t comment on specifically, on a budget that I can’t comment on specifically (which would shock you if only you knew), I’ve become aloof to a great many people and their MySQL war stories… 300MB, 1GB, 100GB, 1TB, HAH! HAH I SAY! Here’s a quarter; call someone who’ll be impressed.

Next time you have to move your datacenter across the country (literally coast to coast), and you can’t let your application go down, and you have 14 days to plan and execute, alone; oh, and you can’t stop taking information in on the old coast until you can start taking it in on the new coast, with over 1TB of data to move, and replication to deal with… Oh, yea, and you don’t get to take a break from running the old datacenter to implement the new one, you have to do both simultaneously… yea… then come knock on my door.

Till then, go take your 2GB “one server is enough but I have a second server for backup purposes” MySQL (or any brand, really) database, and be thankful you aren’t playing with the big boys. I’ve had the potential to lose more data to a power outage than you’ve got total.

academically interesting…

I completely relate to Joe Stagner, who just wrote Trying to grok Ruby on Rails and says “Rails is what I would describe as academically interesting. […] But I kept asking myself ‘so what’?” Well, Joe, I would say that you grok it perfectly well. It’s definitely an exercise in “old hat,” as you say.

There is nothing truly new or unique about it. This is much like what we’re seeing with all web development. You’ve got a bunch of young, fresh, smart guys (and I’m serious, this is NOT a put-down of anyone’s skills) who have a genuinely good idea and say “Hey, look what I just figured out I can do!” Unfortunately, it’s like an uneducated person coming up with the theory of evolution: yes, it’s a good idea; yes, you’re very smart and probably right… but… no… it’s not new.

The monkeys may, eventually (and given enough time), pound out Shakespeare, but it was still good old William who really invented it.