I hacked together this little C program from this other little C program. It basically acts as an execution wrapper that lets you fork(), detach, and run a command in the background with a pidfile and a log file for program output. So far I haven't had any problems with it… but then I'm not a true C guy, so any input is welcomed.
CLI
-v is for verbose, damnit
So, the vast majority of everyday administrative command-line utilities for Linux use -v as the switch for verbose… when you use -v you EXPECT verbose. Well, sometimes you get that one package which just CANNOT follow the rules. Someone has to think outside the box, someone has to be a unique snowflake. That someone should not be a mass process-killing utility! Whoever thought up that the argument -v to pkill should INVERT THE MATCH should really take a long, slow look at how important being unique really is… because if you're not aware of this, and you run something like pkill -9 -v nagios… as root… it's not going to do what you expect. Instead of killing every nagios process, it kills every process EXCEPT those matching nagios. Nothing good comes of that command.
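A habit that would have saved that server: preview any pkill with pgrep first, since the two share the same matching options. This sketch just counts matches both ways to make the inversion visible ("nagios" here is only an example pattern):

```shell
# pgrep takes the same matching options as pkill, so it is a safe preview.
# -c prints a count instead of PIDs; "|| true" keeps a no-match exit code
# from aborting the script.
echo "matching:  $(pgrep -c nagios || true)"
echo "inverted:  $(pgrep -c -v nagios || true)"
```

If the "inverted" count is suspiciously close to your total process count, you have just been saved from a very bad afternoon.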
This has been a PSA
It’s good for the server. It’s good for the soul.
ack (http://petdance.com/ack/), love it (thanks nikolay)
colorizing php cli scripts
It's pretty common in most scripting languages which center around the command line (bash, perl, etc.) to find information on colorizing your shell script output, mainly because those languages are tied very tightly to command-line use. It can be difficult, however, to find information about adding this same nice feature to a PHP CLI script. The reason is simple: most people don't use PHP for CLI applications; most CLI programmers use something else. But it's not difficult to adapt the same techniques listed in most bash howtos (generally in the section reserved for colorizing your command prompt) to generate colored terminal output from PHP.
echo "\033[31mred\033[37m ";
echo "\033[32mgreen\033[37m ";
echo "\033[41;30mblack on red\033[40;37m ";
Simple, functional, useful (even if a bit complicated). I leave it to you to look up a bash prompt colorization howto and hunt down your own list of escape color codes (call it homework).
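The same escape codes work from any language that can write to the terminal. Here is a minimal bash sketch of the idea, wrapped in a small helper function (the function name is my own invention) so the codes aren't repeated inline:

```shell
# Emit text wrapped in an ANSI color code; "\033[0m" resets the terminal
# back to its default attributes instead of leaving the color set.
colorize() {
  printf '\033[%sm%s\033[0m\n' "$1" "$2"
}

colorize 31 "red"
colorize 32 "green"
colorize "41;30" "black on red"
```

Resetting with 0m at the end is a little friendlier than switching back to white (37m), since it restores whatever the user's default really was.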
Cheers
Bash Tip: Closing File Descriptors
I recently found that you can close bash file descriptors fairly easily; it goes like this:
exec 0>&- # close stdin
exec 1>&- # close stdout
exec 2>&- # close stderr
Which makes it easy to daemonize things using only bash (let's face it, there are times when you JUST don't need anything more than a simple bash script; you just need it backgrounded/daemonized). Take this example of a daemon that copies any new files created in a directory to another place on the filesystem:
#!/bin/bash
##
## Shell Daemon For: Backup /root/
## (poorly coded, quick and dirty, example)
##
PIDFILE="/var/run/rootmirror.pid"
LOGFILE="/var/log/rootmirror-%Y-%m-%d.log"
NOHUP="/usr/bin/nohup"
CRONOLOG="/usr/bin/cronolog"

case $1 in
  start)
    exec 0>&- # close stdin
    exec 1>&- # close stdout
    exec 2>&- # close stderr
    $NOHUP $0 run | $CRONOLOG $LOGFILE >> /dev/null &
    ;;
  stop)
    /bin/kill $(cat $PIDFILE)
    ;;
  run)
    pgrep -f "$0 $1" > $PIDFILE
    while [ true ]; do
      event=$(inotifywait -q -e close_write --format "%f" /root/)
      ( cp -v "/root/$event" "/var/lib/rootmirror/$event" )&
    done
    ;;
  *)
    echo "$0 [ start | stop ]"
    exit 0
    ;;
esac
One especially nice detail here is that this won't hang your SSH session when you exit after starting it up (a big pet peeve of mine).
Throttle your Threads…
Let's say you want to run some command, such as /bin/long-command, on a set of directories. And you have a lot of directories. You know it'll take forever to complete serially, so you want to cook up a way to run these commands in parallel. You know the server CAN handle more than one command at once, but you have no idea how many it can handle without keeling over, and you have thousands of commands to run. Running them all at once in the background will kill the system for sure. You COULD try to stagger them and let the delay in overlap be a natural throttle, but sometimes the command completes in one minute and sometimes in ten, so that's not a good idea either. So you decide it would be best to set a process concurrency limit. But what if you set that limit too low? Too high? Restarting in the middle would be bad… you COULD keep some sort of completion log and build into your script a skip for completed files, but why? That doesn't seem so elegant. Your car is good at handling variable speed allowances… it goes fast when you say and slow when you say… maybe we can give a simple bash script a gas pedal? That just might work!
echo '5' > /tmp/threads
for i in fileroot/*; do
  while [ $(pgrep -c long-command) -ge $(cat /tmp/threads) ]; do
    sleep 1
  done
  ( /bin/long-command $i )&
  sleep 1
done
Now you can speed it up and throttle it back by adjusting the integer value inside /tmp/threads.
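To convince yourself the gas pedal works without waiting on a real job, you can dry-run the same loop with a stand-in command. Everything here is illustrative: a `work` function replaces /bin/long-command, and `jobs -rp` replaces pgrep so the sketch only counts its own children:

```shell
#!/bin/bash
echo 3 > /tmp/threads            # the "gas pedal": edit this file while it runs
rm -f /tmp/throttle-demo.log

# Stand-in for /bin/long-command: sleep a bit, then record completion.
work() { sleep 1; echo "$1 done" >> /tmp/throttle-demo.log; }

for i in 1 2 3 4 5 6; do
  # Wait until the number of running background jobs drops below the
  # live limit read fresh from /tmp/threads on every pass.
  while [ "$(jobs -rp | wc -l)" -ge "$(cat /tmp/threads)" ]; do
    sleep 1
  done
  work "$i" &
done
wait
```

Echo a bigger number into /tmp/threads from another terminal mid-run and you'll see the loop open the floodgates immediately.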
“It was the little old server from Pasadena…”
Useful bash oneliner: List Server IP Addresses
for i in $(/sbin/ifconfig | grep addr: | cut -d':' -f2 | cut -d' ' -f1 | grep -Ev '^$'); do echo -n "$i, "; done | sed -r 's/, $//'; echo
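To see what each stage of that pipeline is doing, you can feed it canned ifconfig-style output instead of the real thing (the addresses below are made up):

```shell
# Fabricated ifconfig-style output for demonstration purposes.
sample='eth0      Link encap:Ethernet
          inet addr:10.0.0.5  Bcast:10.0.0.255  Mask:255.255.255.0
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0'

# grep keeps the addr: lines, the first cut takes the text after the
# first colon, and the second cut trims at the first space, leaving IPs.
ips=$(echo "$sample" | grep addr: | cut -d':' -f2 | cut -d' ' -f1 | grep -Ev '^$')
echo "$ips"
```

The trailing `sed`/`echo` in the oneliner then just joins those lines with commas and strips the dangling separator.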
I have to say
this is amazing: http://jan.kneschke.de/2007/10/7/wormhole-index-reads and I can't wait to try it somewhere!
Autumn Leaves Leaf #3: Commander
This leaf is capable of running a script on the local server in response to the !deploy channel command. For security you have to authenticate first. To do so you send the bot a private message with a password; it then HTTP-authenticates against a specific URL using your nickname and the message text as the password. If the fetched file matches predesignated contents, you are added to the internal ACL. Anyone in the ACL can run the !deploy command. If you leave the channel, join the channel, change nicks, or quit IRC, you are removed from the ACL and have to re-authenticate. This could be adapted to any system command for any purpose. I ended up not needing this leaf; I still wanted to put it out there since it's functional and useful.
require 'net/http'
require 'net/https'

class Commander < AutumnLeaf
  before_filter :authenticate, :only => [ :reload, :sync, :quit, :deploy ]
  $authenticated = []

  def authenticate_filter(sender, channel, command, msg, options)
    return true if $authenticated.include?(sender)
    return false
  end

  def did_receive_private_message(sender, msg)
    # assumes there is a file at
    # http://my.svnserver.com/svn/access
    # whose contents are "granted"
    Net::HTTP.start('my.svnserver.com') { |http|
      req = Net::HTTP::Get.new('/svn/access')
      req.basic_auth(sender, msg)
      response = http.request(req)
      $authenticated << sender if response.body == "granted"
    }
  end

  def someone_did_quit(sender, msg)
    $authenticated.delete(sender)
  end

  def someone_did_leave_channel(sender, channel)
    $authenticated.delete(sender)
  end

  def someone_did_join_channel(sender, channel)
    $authenticated.delete(sender)
  end

  def deploy_command(sender, channel, text)
    message "deploying..."
    system("sudo /usr/local/bin/deploy.sh 1>/dev/null 2>/dev/null")
  end
end
Autumn Leaves Leaf #2: Feeder
This handy little bot keeps track of RSS feeds and announces in the channel when one is updated. (Note: be sure to edit the path to the data files.) Each poller runs inside its own Ruby thread and can be run on its own independent schedule.
require 'thread'
require 'rss/1.0'
require 'rss/2.0'
require 'open-uri'
require 'fileutils'
require 'digest/md5'

class Feeder < AutumnLeaf
  def watch_feed(url, title, sleepfor=300)
    message "Watching (#{title}) [#{url}] every #{sleepfor} seconds"
    feedid = Digest::MD5.hexdigest(title)
    Thread.new {
      while true
        begin
          content = ""
          open(url) { |s| content = s.read }
          rss = RSS::Parser.parse(content, false)
          rss.items.each { |entry|
            digest = Digest::MD5.hexdigest(entry.title)
            if !File.exist?("/tmp/.rss.#{feedid}.#{digest}")
              FileUtils.touch("/tmp/.rss.#{feedid}.#{digest}")
              message "#{entry.title} (#{title}) #{entry.link}"
            end
            sleep(2)
          }
        rescue
          sleep(2)
        end
        sleep(sleepfor)
      end
    }
    sleep(1)
  end

  def did_start_up
    watch_feed("http://planet.wordpress.org/rss20.xml", "planet", 600)
    watch_feed("http://wordpress.com/feed/", "wpcom", 300)
  end
end