Friday Apr 03, 2009

Automatically Change Your Twitter Background

As a follow-on to yesterday's post, the Python script has been updated to automatically send your newly generated random image to Twitter to become your new background image. That functionality is not part of the "standard" Twitter Python API, so many thanks to lucy for providing the necessary magic code.

The new version of the script is here. Save it and rename it appropriately. Before you use it, you will need to adjust the username and password lines (around line 55) to hold your valid user name and password.

Twitter is sometimes flaky in taking these images ("urllib2.HTTPError: HTTP Error 500: Internal Server Error"), but then again, I sometimes get similar problems when I try to change my background image from the Twitter Settings dialog.

The script sometimes generates images greater than the allowed size when used with the wallpaper option. I'll need to fix that.



Thursday Apr 02, 2009

Generate Random Backgrounds For Twitter Revisited

You may remember my previous attempt to do this. There was the problem of it generating a .bmp file, which then had to be converted to .jpg or .gif (with something like Gimp), before it could be uploaded to Twitter.

I've gone back and reworked my cback script so that it now uses the Python Imaging Library. Not only does this fix the .jpg problem, it has made it an order of magnitude faster. The new source code is here. The changes were minimal.
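The pattern-generation half of such a script is mostly just placing random shapes. Here is a minimal stdlib-only sketch of the circle-placement step; the function name, bounds and defaults are illustrative, not the actual cback code:

```python
import random

def random_circles(width, height, count, max_radius=60, seed=None):
    """Generate `count` random circles as (x, y, radius, (r, g, b)) tuples
    that fit entirely within a width x height canvas."""
    rng = random.Random(seed)
    circles = []
    for _ in range(count):
        radius = rng.randint(5, max_radius)
        # Keep the whole circle on the canvas.
        x = rng.randint(radius, width - radius)
        y = rng.randint(radius, height - radius)
        color = (rng.randint(0, 255), rng.randint(0, 255), rng.randint(0, 255))
        circles.append((x, y, radius, color))
    return circles
```

With the Python Imaging Library installed, each tuple can then be drawn with ImageDraw.Draw(img).ellipse(...) and the result saved straight to .jpg, .gif or .png.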

The final problem to try to solve, (in trying to totally automate this), is to see if there is a way to automatically upload the new image to Twitter. I'll leave that for another day.



Thursday Mar 26, 2009

Generate Random Backgrounds For Twitter

I had a hack attack yesterday. I converted an old C program of mine, which generated random patterned backgrounds for an X11 desktop, so that it now generates similar random patterns but saves them in a .bmp file. If you then convert them to .jpg (or .gif or .png), they can be uploaded and used as your new Twitter background.

The simple Python script that did this is here. A big thank you to Paul McGuire for writing the code that makes this so easy. You can find that file here.

You can see a sample random circle background pattern on my Twitter page.

It's version 0.1 of the code. A quick simple conversion. It needs to be tidied up and improved. It should also just generate .jpg (or .gif or .png) files directly. I need to see if there is a standard Python class that does something like that.

What would be even nicer is if it could automatically change your Twitter background, but I don't see anything in the Python API for Twitter that allows that. Pity.



Sunday Dec 21, 2008

Make Your Own Favorite Instructables CDROM

Over the past few weeks I've been browsing the Instructables web site by rating to see which projects I might like to do with my son. As I found interesting ones, I saved the PDF of the instructions in an Instructables directory on my computer. I was then going to burn a CDROM with all these saved files on it.

The problem with this approach is that it's not always obvious from the saved names what each Instructable was about.

So this morning I hacked up a simple script that looks at all the filenames in that Instructables directory, then uses the various Instructables web pages to create an index.html file showing thumbnails and titles for each PDF file. I then added that file to the Instructables directory.
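Scraping aside, the index-generation half of that idea can be sketched with just the stdlib. This version skips the thumbnails and falls back to the filename when no title is known; the function name and page layout are made up for illustration:

```python
import html
import os

def build_index(inst_dir, titles=None):
    """Write an index.html linking every PDF in `inst_dir`.

    `titles` optionally maps filename -> human-readable title; the real
    script scraped these (and thumbnails) from the Instructables pages,
    but this sketch just falls back to the filename.
    """
    titles = titles or {}
    pdfs = sorted(f for f in os.listdir(inst_dir) if f.lower().endswith(".pdf"))
    rows = []
    for name in pdfs:
        title = titles.get(name, os.path.splitext(name)[0])
        rows.append('<li><a href="%s">%s</a></li>'
                    % (html.escape(name, quote=True), html.escape(title)))
    page = "<html><body><ul>\n%s\n</ul></body></html>" % "\n".join(rows)
    with open(os.path.join(inst_dir, "index.html"), "w") as fp:
        fp.write(page)
    return pdfs
```

Because the links are relative, the generated page keeps working when the directory is burned to a CDROM.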

I then burned a CDROM with all the saved files and the index.html so that we can navigate them more easily.

Here's the script. It assumes the index.html file is in the same directory as all the PDFs.

If you'd like to use it, save the PDFs you are interested in into a specific directory, then adjust the instDir variable in the script to point to that directory. If you've been selecting them by rating and you know how many pages you've viewed, you can adjust the maxPage variable accordingly. The script will exit when it has either found all your saved files or hit the maxPage web page.

(I initially tried to use BeautifulSoup to parse the Instructables web pages, but there's something there that it doesn't like. I also tried SimpleJSON, but that failed to parse it too. In the end, I just parsed the HTML myself. It didn't find all my PDF files; I can only assume that some of the ones I'd previously saved have since been renamed or deleted.)

Here's a sample of the output it generated for the index.html file, to show what it looks like in a browser. Hover over the images to get the titles. Obviously the links won't work, as the PDFs aren't in the same directory on my blog.

The index.html that went on the CDROM has a lot more potential projects. Now we'll get to see which ones Duncan might be interested in helping to make.




Friday Sep 05, 2008

[UPDATE] Automatic Help Bookmarks For Numerous Subjects

I revisited the new version of the auto-bookmarks Python script today. I adjusted the search to return the URL of the search result that had the most keywords from the query string in its title. Note that you only get four search results per query, so sometimes this isn't a vast improvement.
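That "most keywords in the title" selection is just a scoring pass over the handful of results. A hedged sketch of the step (the result format here is illustrative, not what the Google API actually returns):

```python
def best_result(query, results):
    """Pick the URL of the (title, url) result whose title contains the
    most words from the query string; ties go to the earlier result."""
    keywords = [w.lower() for w in query.split()]
    best, best_score = None, -1
    for title, url in results:
        title_lower = title.lower()
        # Count how many query words appear somewhere in the title.
        score = sum(1 for w in keywords if w in title_lower)
        if score > best_score:
            best, best_score = url, score
    return best
```

Since only four results come back per query, the scan is cheap, and falling back to the first result (score zero) preserves the old behaviour.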

I also vastly extended the number of subjects. They now consist of computer languages, tools, areas of computer technology and various other things I'm interested in. I've also increased the number of topics per subject that it'll search for. I note that not all of them make sense for every subject.

I've made the updated version of the script available. It should be trivial to adjust it for other topics you might be interested in.

I would have embedded mine here, but the Roller blogging software still imposes a maximum blog post size that makes that impossible, so I've placed them in my resources area. You can find them here.






Monday Sep 01, 2008

Automatic Computer Language Help Bookmarks

After seeing the old post I referenced in my last blog entry, I thought I'd dig out the code to auto-generate computer language help bookmarks and update it.
Nowadays I try to use Python for all my programming tasks. A while ago I would have been able to use pyGoogle to do my Google searches, but that SOAP-based interface no longer seems to work, so we are now dependent upon the Google AJAX Search API.

There doesn't seem to be an extensive Python API to that, but looking around I did find this simple working example and used that as the basis of my script.

This script will, for 36 languages, automatically generate a web page containing the top Google search result for each of the topics 'Home Page', 'Reference', 'FAQ', 'Tutorial' and 'HOWTO'. Here are the results (with the initial and final HTML removed so that I can embed them here):

Ada: [Home Page] [Reference] [FAQ] [Tutorial] [HOWTO]

Asp: [Home Page] [Reference] [FAQ] [Tutorial] [HOWTO]

Awk: [Home Page] [Reference] [FAQ] [Tutorial] [HOWTO]

Basic: [Home Page] [Reference] [FAQ] [Tutorial] [HOWTO]

Boo: [Home Page] [Reference] [FAQ] [Tutorial] [HOWTO]

C: [Home Page] [Reference] [FAQ] [Tutorial] [HOWTO]

C++: [Home Page] [Reference] [FAQ] [Tutorial] [HOWTO]

C#: [Home Page] [Reference] [FAQ] [Tutorial] [HOWTO]

Caml: [Home Page] [Reference] [FAQ] [Tutorial] [HOWTO]

Cobol: [Home Page] [Reference] [FAQ] [Tutorial] [HOWTO]

Eiffel: [Home Page] [Reference] [FAQ] [Tutorial] [HOWTO]

Erlang: [Home Page] [Reference] [FAQ] [Tutorial] [HOWTO]

F#: [Home Page] [Reference] [FAQ] [Tutorial] [HOWTO]

Forth: [Home Page] [Reference] [FAQ] [Tutorial] [HOWTO]

Fortran: [Home Page] [Reference] [FAQ] [Tutorial] [HOWTO]

Haskell: [Home Page] [Reference] [FAQ] [Tutorial] [HOWTO]

Java: [Home Page] [Reference] [FAQ] [Tutorial] [HOWTO]

JavaScript: [Home Page] [Reference] [FAQ] [Tutorial] [HOWTO]

Lisp: [Home Page] [Reference] [FAQ] [Tutorial] [HOWTO]

Lua: [Home Page] [Reference] [FAQ] [Tutorial] [HOWTO]

Oberon: [Home Page] [Reference] [FAQ] [Tutorial] [HOWTO]

OCaml: [Home Page] [Reference] [FAQ] [Tutorial] [HOWTO]

Oz: [Home Page] [Reference] [FAQ] [Tutorial] [HOWTO]

Pascal: [Home Page] [Reference] [FAQ] [Tutorial] [HOWTO]

Perl: [Home Page] [Reference] [FAQ] [Tutorial] [HOWTO]

PHP: [Home Page] [Reference] [FAQ] [Tutorial] [HOWTO]

Prolog: [Home Page] [Reference] [FAQ] [Tutorial] [HOWTO]

Python: [Home Page] [Reference] [FAQ] [Tutorial] [HOWTO]

Rebol: [Home Page] [Reference] [FAQ] [Tutorial] [HOWTO]

Rexx: [Home Page] [Reference] [FAQ] [Tutorial] [HOWTO]

Ruby: [Home Page] [Reference] [FAQ] [Tutorial] [HOWTO]

Scala: [Home Page] [Reference] [FAQ] [Tutorial] [HOWTO]

Scheme: [Home Page] [Reference] [FAQ] [Tutorial] [HOWTO]

Scriptol: [Home Page] [Reference] [FAQ] [Tutorial] [HOWTO]

Smalltalk: [Home Page] [Reference] [FAQ] [Tutorial] [HOWTO]

Tcl: [Home Page] [Reference] [FAQ] [Tutorial] [HOWTO]
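The page above boils down to one line of links per language. A minimal sketch that emits such a line using plain Google search URLs (the actual script instead used the AJAX Search API to resolve each link to the top hit):

```python
import urllib.parse

TOPICS = ["Home Page", "Reference", "FAQ", "Tutorial", "HOWTO"]

def language_line(language):
    """Return one HTML line of [topic] search links for a language."""
    links = []
    for topic in TOPICS:
        # URL-encode the query; handles names like "C++" and "C#" too.
        query = urllib.parse.quote_plus("%s %s" % (language, topic))
        links.append('<a href="https://www.google.com/search?q=%s">[%s]</a>'
                     % (query, topic))
    return "%s: %s" % (language, " ".join(links))
```

Looping that over the 36 language names and wrapping the result in minimal HTML gives a page of the shape shown above.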

It's very simplistic. I'm not trying to use standards or get as complex as I did last time.

Some of the results are still a little bogus or, in some cases, just a bit too specific. I suspect they could be improved with better search query strings. Another possible improvement is to check all the search results returned by the Google query and use the one with the most keywords in its title (rather than always using the first result).




Monday Aug 18, 2008

Olympic Proxy


Just like last time, NBC seems to have this aversion to showing anything at the Olympics unless there is an American in it. Their web site content is just as bad.

If you are as fed up as Jason and I are, you might want to try setting up an olympic proxy.

(Thanks Hackszine).




Wednesday Apr 30, 2008

BeautifulSoup - Get A 10 Day Weather Forecast For Your Zip Code

After Matt Harrison mentioned BeautifulSoup in a comment to an old Python script post of mine, I've been looking for somewhere where I could use it.

BeautifulSoup is a Python HTML/XML parser designed for quick turnaround projects like screen-scraping.

I initially played around with it, seeing if I could use it to get listings of when new episodes of my favorite TV programs were appearing, now that Zap2It Labs is no longer making its listings available for free. The problem there (I think) is that, because of the dynamically generated content on their TV listings website, I can't find a URL that BeautifulSoup can parse.

So I picked something different to cut my teeth on.

I often go to and get a 10 day forecast for the city where I live. Easy to do, but I used this as an example of something to extract from a web page and then also email it to me so I have it handy.

This script does this. You will also need to get a copy of for it to work properly. I've simply put them both in the same directory and run it with:

  $ python ./

If others are interested in running this, then there are two variables that you will need to change in the script to meet your needs:

# Zip code to get 10 day forecast for.
zipCode = "94024"

# Email address to sent results to.
emailAddr = ""
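Building the mail itself is straightforward with the standard library. A sketch of the step that wraps the extracted forecast text into a message (the addresses and subject wording are placeholders, not the script's actual ones):

```python
from email.mime.text import MIMEText

def make_forecast_mail(forecast_text, email_addr, zip_code):
    """Wrap the scraped forecast in a plain-text email message.
    Sending it is then a single smtplib.SMTP(...).send_message() call."""
    msg = MIMEText(forecast_text)
    msg["Subject"] = "10 day forecast for %s" % zip_code
    msg["To"] = email_addr
    return msg
```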

Just like my early attempts at using XPath in some of my JavaScript scripts, I suspect that I'm not doing it the best way. I predict that there are much nicer ways of writing the extractForecast() routine.
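For comparison, the same extraction idea can also be done with nothing but the stdlib's HTML parser. This is a sketch against invented markup, not the weather site's real page structure or the script's BeautifulSoup code:

```python
from html.parser import HTMLParser

class ForecastParser(HTMLParser):
    """Collect the text of every <td class="forecast"> cell.
    The class name is made up for this sketch; a real scraper has to
    match whatever markup the weather site actually serves."""

    def __init__(self):
        super().__init__()
        self.in_cell = False
        self.cells = []

    def handle_starttag(self, tag, attrs):
        if tag == "td" and ("class", "forecast") in attrs:
            self.in_cell = True
            self.cells.append("")

    def handle_endtag(self, tag):
        if tag == "td":
            self.in_cell = False

    def handle_data(self, data):
        if self.in_cell:
            self.cells[-1] += data
```

BeautifulSoup makes the same job shorter, but the event-driven version needs no third-party install.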

Still, it works, and that's the first step in programming.




Friday Feb 22, 2008

Another Python Library Script

Something I've been meaning to do for a while.

When I go to the library, I'll first look in the "new books" section. If there is nothing there that I'm interested in, I'll look for books off one or more lists that I have. One of those lists is for the books on my Amazon Wish Lists.

My library is part of the Santa Clara County library system. Several branches work together. If the book is in the county library system, but not available at my local branch, I can put in a request, and they'll ship a copy to me as soon as one is available.

It's also possible that my local branch has a copy and it's out. What I really want to know is which books I'm interested in are available now in my local branch, so I can grab them if I immediately visit the library.

This script helps me do that. Here's how it works: for each of the books on each of the given Amazon Wish List IDs, it extracts the ISBN and uses that to query my library's online catalog.

It first checks the HTML reply for the string "Sorry, could not find anything matching". If it finds that, it goes on to the next book. If it doesn't, then the county library has at least one copy of the book, and it then looks for the string "Los Altos Library". If it finds that, it grabs the reply from that point up to the sub-string "Add Copy to MyList" and divides it into tokens, using "<" as a separator. It then looks for tokens that start with "a class" and grabs the sub-string from the ">" character to the end of each token. When that's complete for all the tokens, it checks whether the last one is "In". If it is, we've found a book that's in at my local library, and the book's details are written to standard out. The script also writes a few messages to stderr giving information on the books that are in the county library system and/or held by my local branch but not currently in.
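Written out in code, that check looks roughly like this. The marker strings are the ones described above; the function name and return convention are illustrative, and the whole thing is obviously tied to the Santa Clara County catalog's HTML of the time:

```python
def check_local_copy(reply, branch="Los Altos Library"):
    """Return True if the catalog HTML `reply` shows a copy currently
    "In" at the given branch, following the token-scanning steps above."""
    if "Sorry, could not find anything matching" in reply:
        return False          # county system has no copy at all
    start = reply.find(branch)
    if start < 0:
        return False          # county has it, but not this branch
    end = reply.find("Add Copy to MyList", start)
    chunk = reply[start:end if end >= 0 else len(reply)]
    # Split on "<", keep the text after ">" in tokens starting "a class".
    values = [tok[tok.find(">") + 1:]
              for tok in chunk.split("<") if tok.startswith("a class")]
    return bool(values) and values[-1].strip() == "In"
```

Any library whose catalog emits different markup would need different marker strings, which is exactly the rewrite described in the last bullet point below.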

Here's a partial listing of what the program output to stderr looks like as it's running:

$ python ./ >booklist.txt
Found in County Library: Watercolor: Painting Smart
Found in County Library: Chaos and Fractals: New Frontiers of Science
Los Altos Library has a copy
Found in County Library: Ships-In-Bottles: A Step-By-Step Guide to a Venerable Nautical Craft
Los Altos Library has a copy
Currently IN
Found in County Library: Complete Stories of Robert Bloch: Final Reckonings (Complete Stories of Robert Bloch)
Los Altos Library has a copy
Currently IN
Found in County Library: Looking for Jake: Stories
Los Altos Library has a copy
Found in County Library: 123 Robotics Experiments for the Evil Genius (TAB Robotics)
Los Altos Library has a copy
Found in County Library: The Art and Craft of Paper Sculpture: A Step-By-Step Guide to Creating 20 Outstanding and Original Paper Projects
Found in County Library: Bad Science: The Short Life and Weird Times of Cold Fusion

The booklist.txt entries for those two books above that are found in my local library look like:

Ships-In-Bottles: A Step-By-Step Guide to a Venerable Nautical Craft  Nonfiction Section  745.5928 HUBBARD  In
Complete Stories of Robert Bloch: Final Reckonings (Complete Stories of Robert Bloch)  Science Fiction Section  SF BLOCH ROBERT  In

As you can see, the output even tells me which section of the library to look in for each book.

For anybody who wants to use the script as the basis for doing something similar for their library, you are going to have to make small changes in four areas. Three are trivial. The fourth will take a little Python programming.

  • You'll need to replace the "KKKKKKKKKKKKKKKKKKKK" on the line:
    amazonAccessKey = "KKKKKKKKKKKKKKKKKKKK"
    with your own Amazon Access key. If you don't already have one, then click the "Sign up now" link on the right side of their Amazon Web Services web page.

  • You'll need to replace the "XXXXXXXXXXXXX", "YYYYYYYYYYYYY" on the line:
    amazonWishListIDs = [ "XXXXXXXXXXXXX", "YYYYYYYYYYYYY" ]
    with the IDs of the Amazon wish lists you are interested in. One way to find an Amazon wish list ID is to view the wish list, then click the "Edit list information" link near the top left of the web page. You'll be taken to another web page with a URL of the form:

    The "XXXXXXXXXXXXX" part is the wish list ID.

  • The URL of your library's online catalog. This old post of mine and Jon Udell's LibraryLookup Bookmarklet Generator should help with that.

  • And the last part is where you'll need to cut some Python code. The checkLibrary() routine will need to be rewritten to search for the appropriate strings on the HTML web page that your library catalog generates.

For the Python naming pedants: I've found that I simply like CamelCase variable names better than the current Python naming "standard", so you're going to have to just deal with it. Other constructive Python criticisms are appreciated.

As I was writing this, I realized I didn't take into consideration the case where my local library might have multiple copies of the same book, and one or more of them might be in, even though the first one wasn't.

But that'll be a fix for the next version.




Thursday Jan 24, 2008

Take 3 - LifeHacker Category Viewer GreaseMonkey Script Working Again

You may remember a post from last November that described a GreaseMonkey script that would display the list of categories of the LifeHacker web site and allow you to dynamically display all the posts associated with each category.

Tyler Trafford, (who gave me lots of help getting that working), emailed me today with a change I would need to make because of the new security enhancements in the latest version of the GreaseMonkey Firefox add-on.

In testing it, we discovered that the script no longer worked with the LifeHacker site, with or without the suggested change.

Before I could get to it (darn RealWork™ getting in the way again), Tyler went and worked out what other changes had to be made to the GreaseMonkey script and sent them to me (thank you!).

For you conspiracy theorists, I should let you know that when I posted the previous version back in November, I sent an email to the LifeHacker folks telling them about it. I was under the naïve impression that they might want to let their users know. Hah! Not a dickie bird. Nothing posted to their web site. No acknowledgment whatsoever.

And now we find that the old script doesn't work anymore because they've changed their website layout!

Coincidence? I think not. Let this be just our little secret this time.






Wednesday Jan 23, 2008

Your Ultimate Hacking Tools

Hack a Day have an interesting post today. It's a contest.

Here's the challenge: Given a budget of $600, put together the best hacking workbench you can. Don't include computers or the actual bench in your budget. Oh, and you have to spend it all.

This is for hardware hacking. See the comments to their post, for the replies so far. I should probably wait until they pick the five winners to see which tools I should add to my collection.

This post got me thinking about the ultimate tools for software hacking. Hacking in the nice sense of the word. If they are open source and/or freely available, then the cost would just be time not money.

So if you have any recommendations on essential tools for your hacking arsenal (especially if you code in Python), please feel free to comment. If I get a sufficient response, I'll summarise in a future post.




Wednesday Jan 16, 2008

Roly Poly Pot Redux

I saw this post on Hackszine about a flower pot that will tilt over when it needs water.

That's cute, but why stop there? One article below it shows how you can use an Arduino board for helicopter control to stabilize the roll and pitch.

Why not combine the two? When the pot tips, the Arduino detects this and waters the plant. Hopefully the pot goes back to vertical and the water is turned off.

Now that would be a neat hack!


Tuesday Jan 08, 2008

Wii Remote and Nunchuck Projects

If you can pry the Wii Remote and/or Nunchuck from your child's hands, then you could possibly use it for one of these interesting projects:

Let's hope the Wii parts all go back together again afterwards.



Friday Dec 14, 2007

ListsofBests List of Lists GreaseMonkey Script

Friday is hacking day, so here's another little GreaseMonkey hack.

Previously I'd created a GreaseMonkey script that would take one of the ListsofBests lists and turn it into a plain text list, making it easier to read.

This new script will take their list of lists, for the awards, definitive or personal lists categories, for Books, Music, Movies, Places, People or More, and turn it into a simple list of links. No having to click through numerous pages to get to something you might be interested in. No web site bling to distract you. Just the list.

If you're like me, you'll then save away a copy and bookmark it, because these lists do take a while to regenerate, especially for the personal categories.

If I get enthused, the next step is to adjust the script so that clicking on a list entry will expand that list inline, rather than going off to the actual list web page and then using my other GreaseMonkey script.

But that's for another day. Back to RealWork™.






Monday Nov 26, 2007

LifeHacker Category Viewer GreaseMonkey Script

Another GreaseMonkey script, this time to list out all the posts under all the categories at the LifeHacker web site.

If you're running Firefox and have installed GreaseMonkey and this script and have it enabled, then when you visit their archives web page, it'll do its thing.

Note that there are a lot of posts there and most of them have been cross-categorized, so this will take a long time. It generates a new web page that's over 3.8 MB when saved. It also loads a lot of extra web pages very quickly, which must be disruptive to their web servers.

If anybody can tell me how I can adjust this script to "throttle back", I'd very much appreciate it.








