Autumn Leaves Leaf #1: Announcer

This bot is perfect for anything where you need to easily build IRC channel notifications into an existing process. It's simple, clean, and agnostic. Quite simply: you connect to a TCP port, give it one line, the port closes, and the line you gave it shows up in the channel. e.g.: echo 'hello' | nc -q 1 bothost 22122

require 'socket'
require 'thread'

class Announcer < AutumnLeaf

  # Read a single line from the client, relay it to the channel, and hang up.
  def handle_incoming(sock)
    Thread.new {
      line = sock.gets
      message line.chomp if line
      sock.close
    }
  end

  # Once the leaf is up, listen on TCP port 22122 and hand each
  # new connection off to handle_incoming.
  def did_start_up
    Thread.new {
      listener = TCPServer.new('', 22122)
      while (new_socket = listener.accept)
        handle_incoming(new_socket)
      end
    }
  end

end
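Hooking it into an existing process is then just a one-liner. For instance, any cron job or shell script could announce itself along these lines (bothost and 22122 being whatever host and port the leaf listens on):

     echo "backup of /data finished at $(date)" | nc -q 1 bothost 22122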

Bash: “I Can’t Eat Another Byte”

root@server:/dir/ # ls | wc -l

1060731

root@server:/dir/ # for i in *; do rm -v $i; done

me@home:~/ #

HUH?

Turns out that bash just couldn't eat another byte, and next time I logged in I saw this: "bash[5469]: segfault at 0000007fbf7ffff8 rip 00000000004749bf rsp 0000007fbf7fffe0 error 6"… Impressive 🙂
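Incidentally, the way to clear out a directory that size without asking the shell to expand a million filenames is to hand the names straight from find to rm. A rough sketch, assuming GNU find and xargs:

     cd /dir
     find . -maxdepth 1 -type f -print0 | xargs -0 rm -v
     # or let find do the deleting itself:
     # find . -maxdepth 1 -type f -delete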

Scripting without killing system load

Let's pretend for a moment that you have a critical system which can *just* handle the strain it's under (I'm sure all of you have workloads well under your systems' capabilities, of course; still, for the sake of argument…). You also have a job to do which will induce more load. The job has to be done, and the system has to remain responsive. The classic response to this problem is adding a fixed delay, for example:

     #!/bin/bash
     cd /foo
     # delete up to 500 day-old directories per pass; when find turns up
     # nothing, rm (via xargs) exits non-zero and the loop ends
     find ./ -maxdepth 1 -type d -daystart -ctime +1 | head -n 500 | xargs -- rm -rv
     while [ $? -eq 0 ]; do
          sleep 60
          find ./ -maxdepth 1 -type d -daystart -ctime +1 | head -n 500 | xargs -- rm -rv
     done

Of course this is a fairly simplistic example, but it illustrates my point. The problem with this solution is that the machine you're working on is likely to have a variable workload where its main use comes in surges. By defining a fixed sleep time you either have to sleep so long that the job takes forever to finish, or flirt with high loads and slow response times. Ideally you would be able to let her rip while the load is low and throttle her back while the load is high, right? Well, we can! Like so:

     #!/bin/bash
     # wait until the one-minute load average drops to $1 or below
     function waitonload() {
          loadAvg=$(cut -f1 -d'.' /proc/loadavg)
          while [ $loadAvg -gt $1 ]; do
               sleep 1
               echo -n .
               loadAvg=$(cut -f1 -d'.' /proc/loadavg)
               if [ $loadAvg -le $1 ]; then echo; fi
          done
     }

     cd /foo
     waitonload 1
     find ./ -maxdepth 1 -type d -daystart -ctime +1 | head -n 500 | xargs -- rm -rv
     while [ $? -eq 0 ]; do
          waitonload 1
          find ./ -maxdepth 1 -type d -daystart -ctime +1 | head -n 500 | xargs -- rm -rv
     done

This modification only runs the desired commands while the one-minute load average is below 2; otherwise it waits for that condition before continuing the loop. This can be very handy for very large jobs needing to be run on loaded systems, especially jobs which can be subdivided into small tasks!
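The same function drops neatly into any batch job that can be broken into chunks. A purely illustrative sketch (the paths are invented) that throttles a series of per-directory archives:

     for dir in /data/*; do
          waitonload 1
          tar czf "/backups/$(basename "$dir").tar.gz" "$dir"
     done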

All things being equal, the simplest solution tends to be the best one.

Occam’s razor strikes again!

Tonight we ran into an interesting problem. A web service – with a very simple time-elapsed check – started reporting negative elapsed times… Racking our brains and poring over the code produced nothing. It was as if the clock were jumping around randomly! No! On a whim Barry checked it and the clock was, indeed, jumping around…

# while [ 1 ]; do date; sleep 1; done
Wed May 30 04:37:52 UTC 2007
Wed May 30 04:37:53 UTC 2007
Wed May 30 04:37:54 UTC 2007
Wed May 30 04:37:55 UTC 2007
Wed May 30 04:37:56 UTC 2007
Wed May 30 04:37:57 UTC 2007
Wed May 30 04:37:58 UTC 2007
Wed May 30 04:37:59 UTC 2007
Wed May 30 04:38:00 UTC 2007
Wed May 30 04:38:01 UTC 2007
Wed May 30 04:38:02 UTC 2007
Wed May 30 04:38:19 UTC 2007
Wed May 30 04:38:21 UTC 2007
Wed May 30 04:38:22 UTC 2007
Wed May 30 04:38:23 UTC 2007
Wed May 30 04:38:24 UTC 2007
Wed May 30 04:38:08 UTC 2007
Wed May 30 04:38:09 UTC 2007
Wed May 30 04:38:10 UTC 2007
Wed May 30 04:38:28 UTC 2007
Wed May 30 04:38:12 UTC 2007
Wed May 30 04:38:30 UTC 2007
Wed May 30 04:38:31 UTC 2007
Wed May 30 04:38:32 UTC 2007
Wed May 30 04:38:33 UTC 2007
Wed May 30 04:38:34 UTC 2007
Wed May 30 04:38:35 UTC 2007
Wed May 30 04:38:19 UTC 2007
Wed May 30 04:38:20 UTC 2007
Wed May 30 04:38:21 UTC 2007
Wed May 30 04:38:22 UTC 2007
Wed May 30 04:38:40 UTC 2007
Wed May 30 04:38:41 UTC 2007
Wed May 30 04:38:42 UTC 2007
Wed May 30 04:38:43 UTC 2007
Wed May 30 04:38:44 UTC 2007

PHP CLI Status Indicator

Most times when people write command-line scripts they just let the output flow down the screen as a status indicator, or just figure "it's done when it's done." But sometimes it would be nice to have a simple, clean status indicator, allowing you to monitor progress and gauge time-to-completion. This is actually very easy to accomplish: simply use "\r" instead of "\n" in your output. Obviously the example below is very simplified, and this can be applied in a much more sophisticated fashion, but it works.

$row_count = get_total_rows_for_processing();
$limit = 10000;
echo "\r\n[  0%]";
for ( $i = 0; $i <= $row_count; $i = $i + $limit ) {
  $query = "SELECT * FROM table LIMIT {$limit} OFFSET {$i}";
  // do whatever
  // percent complete after this batch, capped at 100
  $pct = min(100, round((($i + $limit) / $row_count) * 100));
  // pad so the indicator stays a fixed width as it overwrites itself
  if ( $pct < 10 ) {
    echo "\r[  $pct%]";
  } else {
    if ( $pct < 100 ) {
      echo "\r[ $pct%]";
    } else {
      echo "\r[$pct%]";
    }
  }
}
echo "\r[100%]\r\n";
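For what it's worth, the same carriage-return trick works from a shell script as well; printf takes care of the fixed-width padding. A trivial sketch, with sleep standing in for real work:

     for pct in $(seq 0 10 100); do
          printf '\r[%3d%%]' "$pct"
          sleep 1
     done
     printf '\n'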

Backgrounding Chained Commands in Bash

Sometimes it's desirable to have a chain of commands backgrounded so that a multi-step process can be run in parallel. And oftentimes it's not desirable to have yet another script made to do a simple task that doesn't warrant the added complexity. An example of this would be running backups in parallel. The script snippet below allows up to 4 simultaneous tar backups to run at once — recording the start and stop times of each individually — and then waits for all the tar processes to finish before exiting.

max_tar_count=4
for i in 1 3 5 7 2 4 6 8
 do
 # count the tar processes currently running
 cur_tar_count=$(ps wauxxx | grep -v grep | grep -c ' tar ')
 if [ $cur_tar_count -ge $max_tar_count ]
  then
  while [ $cur_tar_count -ge $max_tar_count ]
   do
   sleep 60
   cur_tar_count=$(ps wauxxx | grep -v grep | grep -c ' tar ')
  done
 fi
 # record the start time, run the backup, record the stop time,
 # all chained with && and backgrounded as a single unit
 ( date > /backups/$i.start &&
     tar cf /backups/$i.tar  /data/$i &&
     date > /backups/$i.stop )&
done
# wait until every remaining tar finishes before the script exits
cur_tar_count=$(ps wauxxx | grep -v grep | grep -c ' tar ')
while [ $cur_tar_count -gt 0 ]
 do
 sleep 60
 cur_tar_count=$(ps wauxxx | grep -v grep | grep -c ' tar ')
done

The real magick above is the final while loop. You DO want it there, to make the script wait until all the backups are really done before exiting.
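If the only tar processes you care about are the ones this script launched itself, bash's built-in wait would be a simpler way to hold the script open; the ps-based loop above has the added benefit of also waiting on tar jobs started elsewhere on the box.

     # wait for this script's own backgrounded subshells only
     wait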

CryoPID

Now this is cool: CryoPID, a process freezer for Linux.

“CryoPID allows you to capture the state of a running process in Linux and save it to a file. This file can then be used to resume the process later on, either after a reboot or even on another machine.

CryoPID was spawned out of a discussion on the Software suspend mailing list about the complexities of suspending and resuming individual processes.

CryoPID consists of a program called freeze that captures the state of a running process and writes it into a file. The file is self-executing and self-extracting, so to resume a process, you simply run that file. See the table below for more details on what is supported.”

I find myself wondering: Could this be a new way of distributing interpreted language desktop apps as binary files without releasing the source?

Kudos to the openfount guys

I'm really very impressed with the speed at which the Openfount guys responded to my last post. I definitely give kudos to Bill for being on top of things! I'm running out the door, so I'll keep this short and sweet.

He's right, I did generalize databases into InnoDB, but that's because it's what I use. So my apologies for that.
I definitely had no intention of badmouthing the Openfount guys (if that's what it sounded like, I apologize); I was just reporting what I saw, and my impressions.

To Bill: I would have used either

  • apokalyptik
  • at
  • apokalyptik
  • dot
  • com

or

  • consult
  • at
  • apokalyptik
  • dot
  • com

or

  • demitrious
  • dot
  • kelly
  • at
  • gmail
  • dot
  • com