ruby-Delicious v0.001

Since I’ve worked out the kinks mentioned in my last blog entry (it was a problem with re-escaping already escaped data, by the way; never debug while sick and sleep deprived!), I’ve scraped things together into a class which is a client for the API itself. It’s relatively sparse right now, but good enough to use in an application, which is what the client is geared towards: specifically (and privately) tagging arbitrary data. It *can* publicly tag URLs, but that’s more or less a side effect of what del.icio.us is, not a direct intention while writing the API. You can visit the quickly thrown together ruby-Delicious page here (link also added up top).
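
For the curious, here’s roughly what I mean by tagging arbitrary data. Note that the class and method names below are illustrative guesses at the shape of the thing, not the actual ruby-Delicious interface:

    # Illustrative only: "Delicious", "add", and "posts" are hypothetical
    # names sketching how a thin client like this gets used.
    client = Delicious.new('username', 'password')

    # Privately tag an arbitrary identifier; nothing forces the "url"
    # field to be an actual URL.
    client.add('customer:1234', 'follow up on support ticket',
               %w(support urgent))

    # Later, pull everything filed under a tag.
    client.posts(:tag => 'urgent').each { |post| puts post.inspect }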

Having a strange problem with the del.icio.us API

I’m using code referenced here: http://www.bigbold.com/snippets/posts/show/2431 to access the del.* tagging API, with only limited success. I don’t think it’s the code, though, because the problem is reproducible in the browser, and everything *seems* to line up with the docs. The URL I use to create the item is:

https://api.del.icio.us/v1/posts/add?&url=la+la+la&description=foo&tags=foo

This works. I get a nice “foo” with the proper URL “la la la” and I get a pretty <result code="done"/>. Then I try to delete the item with either of these URLs:

https://api.del.icio.us/v1/posts/delete?&url=la+la+la

https://api.del.icio.us/v1/posts/delete?&url=foo

Neither of these works; I still get a pretty <result code="done"/>, but the item is never deleted…

I saw this problem referenced on the Tucows Farm blog, but the only suggestion in the comments was: “Google for “delicious-import.pl”, it deletes bookmarks upto 100 at a time. A quick little override in the code will make it delete all bookmarks. Handy when you screw up an import. Not so handy in other situations, which is why you cant do it by default. This script will read a netscape/firefox/mozilla type bookmark file. I am re-working it to do Opera for me.” Which I did; the URL built inside that script is http://user:[email protected]/api/posts/delete?&url=la+la+la, but that’s out of date. I tried it anyhow, and it redirected me to the /v1/ query above (https://api.del.icio.us/v1/posts/delete?&url=la+la+la), which still didn’t work. I can’t imagine I’m the only person who’s run into this problem.
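
For reference, here’s a minimal Ruby sketch of the calls I’m making (HTTP basic auth assumed; 'user' and 'pass' are placeholders), in case anyone wants to try reproducing this:

    require 'net/https'
    require 'cgi'

    # Hit the v1 API over SSL with HTTP basic auth.
    def delicious(path, params)
      http = Net::HTTP.new('api.del.icio.us', 443)
      http.use_ssl = true
      query = params.map { |k, v| "#{k}=#{CGI.escape(v)}" }.join('&')
      request = Net::HTTP::Get.new("/v1/#{path}?#{query}")
      request.basic_auth('user', 'pass')
      http.request(request).body
    end

    # Creating the item works...
    puts delicious('posts/add', 'url' => 'la la la',
                   'description' => 'foo', 'tags' => 'foo')

    # ...but deleting it just returns <result code="done"/> and
    # the item sticks around.
    puts delicious('posts/delete', 'url' => 'la la la')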

Tag anything, anywhere?

I’ve not been able to find anything really high profile (good Google PageRank), but is there an API which allows you to tag *anything*, anywhere? (Not just URLs, but… any piece of data?) Being able to take one arbitrary identifier, optionally a type, and add arbitrary tags to it sounds like the stuff of web 2.0, yea? But it seems people are just home-brewing their own. Now if I were able to go somewhere and hit /tags/people/demitrious, or /tags/blogs/demitrious, or /tags/*/demitrious, or /tags/urls/apokalyptik.com, or /tags/foo/bar, then we’d be getting somewhere.
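
To make the idea concrete, here’s a quick hypothetical sketch (the names and the wildcard behavior are mine, not any existing service) of the kind of store those /tags/type/identifier paths would sit in front of:

    # Hypothetical sketch of a universal tag store: one arbitrary
    # identifier, an optional type, arbitrary tags.
    class TagStore
      def initialize
        @tags = Hash.new { |hash, key| hash[key] = [] }  # [type, id] => tags
      end

      def tag(id, tags, type = '*')
        @tags[[type, id]] |= tags
      end

      # Mirrors /tags/<type>/<id>; a type of '*' matches every type.
      def lookup(type, id)
        return @tags[[type, id]] unless type == '*'
        @tags.map { |(t, i), tags| i == id ? tags : [] }.flatten.uniq
      end
    end

    store = TagStore.new
    store.tag('demitrious', %w(person blogger), 'people')
    store.tag('apokalyptik.com', %w(blog tech), 'urls')
    p store.lookup('people', 'demitrious')  # => ["person", "blogger"]
    p store.lookup('*', 'demitrious')       # => ["person", "blogger"]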

The quest for clean data

When you’re on the leading edge of things you always have a problem: dirty data. And the quest for clean data is always at the forefront of your mind. Last night, while searching for a geocoding service which didn’t suck outside the US and the major EU countries, I came upon this article, which put into words the stormy mood that had been brewing whilst I struggled in my quest: Geocoding, Data Quality and ETL. I know geocoding is outside my normal sphere of writings, but the way technology is going, some of you will have to work with geocoding at some point.

And the bottom line is this: while we now have the tools and techniques necessary for getting the job done right, it’s going to be a long time until we actually get it right. It’s just one of those things that takes a lot of time, money, and manpower to accomplish.

That being said… I wonder how difficult it would be to mash up, say, Google Earth and current cartographic services to specifically draw attention to problem areas, and to set it up as an automatic alert for new expansion (or demolition, for that matter?!). This not being my area of expertise, I’d be hard pressed to get it right, at least not without some insider help. But I’d be willing to bet that it would prove a valuable tool for the geospatial community. And make no mistake about it: the better they are at doing what they do, the easier it is for you to find the newest Starbucks.

Openfount (cont)

If there’s one thing the Openfount guys have shown me, it’s that they’re serious about the Infinidisk product. Mr. Donahue gave me a quick call this evening to chat about his product (it seems my e-mail server and his aren’t talking properly, so while I get his communications he has not received mine, which probably explains the lack of response to my pre-sales inquiry). The particular bug that I noticed was, he mentioned, fixed a while ago in a later release than the one I’d tried. In my defense, the page looks precisely like it did when I first got the product, and the release tar file has no version numbers on it… yet… so even when I did check for updates I couldn’t tell anything had changed. I found out some good info, though: they’re working on putting up a trac page for some real bidirectional community participation soon, and they’ll also be putting version numbers in their archives. Both of those things will, I think, help improve their visibility to people like me (who have very, very little time).
I’ll be re-testing the Infinidisk product the next time I customize an AMI.

CryoPID

Now this is cool: CryoPID, a process freezer for Linux.

“CryoPID allows you to capture the state of a running process in Linux and save it to a file. This file can then be used to resume the process later on, either after a reboot or even on another machine.

CryoPID was spawned out of a discussion on the Software suspend mailing list about the complexities of suspending and resuming individual processes.

CryoPID consists of a program called freeze that captures the state of a running process and writes it into a file. The file is self-executing and self-extracting, so to resume a process, you simply run that file. See the table below for more details on what is supported.”

I find myself wondering: Could this be a new way of distributing interpreted language desktop apps as binary files without releasing the source?

Kudos to the Openfount guys

I’m really very impressed with the speed at which the Openfount guys responded to my last post. I definitely give kudos to Bill for being on top of things! I’m running out the door, so I’ll keep this short and sweet.

He’s right, I did generalize databases into InnoDB, but that’s because it’s what I use, so my apologies for that.
I definitely had no intention of badmouthing the Openfount guys (if that’s what it sounded like, I apologize); I was just reporting what I saw, and my impressions.

To Bill – I would have used either

  • apokalyptik
  • at
  • apokalyptik
  • dot
  • com

or

  • consult
  • at
  • apokalyptik
  • dot
  • com

or

  • demitrious
  • dot
  • kelly
  • at
  • gmail
  • dot
  • com

Infinidisk Update

I mentioned a while back that I was going to be playing with the S3 Infinidisk product. What I found in my testing was that this product is not prime-time ready. There was a nasty bug which caused data to be lost if the mv command was used. The scripts themselves were unintuitive; they required fancy-pants nohupping or screening to use long term. Oh, and a database definitely will not work on top of this FS. It seems obvious in retrospect, but I wanted to be sure: InnoDB won’t even build its initial files, much less operate, on the FS. To top it all off, my pre-sales support question was never even so much as acknowledged.

No, I think I’ll be leaving this product alone for now and sticking with clever uses of s3sync and s3cmd, thanks.