Tuesday Apr 07, 2009

I'm starting to like 1and1.com for webhosting

When I first started using 1and1.com for webhosting, I got frustrated with their emphasis on advertising their own site instead of helping me. The straw that broke the camel's back was when they couldn't help me load a module into the WordPress blog software they provided. I called up sales to have them yank my account, and the guy I talked to told me how to fix my problem: install the tarball myself.

That was over a year ago, and I've been happy ever since. I like being able to solve a problem by installing something myself.

I've since used them to host 97red.com for my son's soccer team. I've had a blast rolling my own Perl scripts.

For my son's soccer club site, Blitz United Soccer Club, I decided to go with Network Solutions, which wasn't an easy choice. I did it in part because the board of directors had some positive experience with Network Solutions, and it was the quickest way to move the domain pointers (which Network Solutions already administered).

I held my ground and got the Unix offering. And I almost lost it when I realized that while the host was running Unix, I had no frigging SSH or shell access! What I have to do instead is use lftp to reverse-mirror the site from my home web server.

But you know, I now actually like the model. It keeps an instant backup ready, and it forces me to make changes and view the results locally before I push. Right now I'm not using a database or PHP on the host, which may change my mind later. By the way, my users just love the mail filtering and interface that Network Solutions offers; I've gotten a lot of leeway due to just that.
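The push itself comes down to lftp's reverse-mirror mode. This is a sketch rather than my exact command; the host name, user, and both paths are stand-ins:

```shell
# Upload the local copy of the site to the host. --reverse (same as -R)
# mirrors local -> remote; --only-newer skips files that haven't changed.
# Host, user, and paths here are made-up examples.
lftp -u webuser ftp.example.com -e "mirror --reverse --only-newer /var/www/blitz /htdocs; quit"
```

Run from cron or by hand after editing locally, this is the whole deployment step.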

Originally posted on Kool Aid Served Daily
Copyright (C) 2009, Kool Aid Served Daily

Wednesday Mar 04, 2009

I revamped my blog template/style

I totally trashed my old Roller template and went with a simpler scheme. With the experience I'd gained doing the websites for Connectathon.org, BlitzUnited.org, and 97Red.com, it was pretty simple.

The hardest issue was getting the Categories list formatted. I wanted the entries to have better class names, but in the end I looked at the resulting HTML file and just changed my style file to match.

I wish I knew how to see others' custom templates. It is pretty easy to see any style files they develop, but I'm afraid the templates themselves are stored in a database.

I'm done for now. Oh, and I'd like to figure out why the colors I used for the site banner in Gimp are coming out different from the background color I set; that's pretty consistent across browsers. The only other thing I see is that the custom borders for the content look washed out in Safari.


Tuesday Feb 03, 2009

Scripts are great, but don't reinvent the wheel

I'm in the process of moving my son's soccer club's website, and one of the steps is backing up the user-generated content on it. I don't have FTP, SSH, or even a backup program to access the remote server. Also, some of the content is not readily linked from any given web page; for example, there might be photo albums with no static links.

I started with wget, but even run recursively, it needs a set of links to follow. The current provider suggested:

There are applications available that allow you to download all pages of
any website for viewing offline which could serve as sort of a backup
for you.  Try Google searching for something like
"website downloader spider tools"

I did that, but I balked at paying for something I knew I could script, and I still needed that list of files.

I managed to get a listing of the files from the directory management web page. I then used vi to whittle it down to a list of relative file paths.

First off, wget wasn't making the directories for me when I tried fetching one of these files manually. So instead of adding the base URL in vi, I was going to write a Perl script to loop over the files, pull out the base directory, make sure to do a 'mkdir -p', retrieve the file, and store it in the correct place.
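The loop I had in mind looks something like this in shell rather than Perl. It assumes start.list holds relative paths (one per line), and http://foo.org stands in for the real host:

```shell
# For each relative path in start.list: recreate the directory tree
# locally, then fetch the file into the matching location.
while read -r path; do
    mkdir -p "$(dirname "$path")"              # e.g. photos/album1/
    wget -q -O "$path" "http://foo.org/$path"  # save under the same path
done < start.list
```

As it turned out, I never needed to write it.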

When I code such things, I skip all over the place. At the start, I was interested in how I could force wget to put the file in the correct place. And as I started to read the manpage, I realized that perhaps I didn't have to write a script at all: wget could loop over a file, it could prepend a base URL, and it could force directory creation.

From the start, I refused to buy a commercial tool because I knew wget should be able to do it for me. I felt vindicated.

In any event, here is the invocation I will end up using:

wget -x -B http://foo.org -i start.list

Here -i reads the list of files, -B prepends the base URL to each relative path, and -x forces wget to recreate the directory hierarchy locally.

In retrospect, I believe I didn't even have to strip all of the fluff out of those HTML pages I had saved; I think wget would have been able to wade through them for me and get the pages.


Wednesday Oct 08, 2008

pdf to jpg via ImageMagick

I'm the volunteer webmaster for my son's soccer club: Blitz United Soccer Club. We occasionally get logos and such from sponsors. We want jpeg images for the website and they want high quality pdf for printing. Until now, I've simply asked them for the images in a format we can handle.

I got tired of doing that and googled 'pdf to jpg'. There were a lot of hits for sites that either wanted to install something on my Windows box or wanted an email address. I added 'linux' to my search terms and found a nice hit: Batch converting PDF to JPG/JPEG using free software.

Having vaguely heard of ImageMagick in the past, and since it had many download sites, I installed it on my WinXP desktop. It didn't convert for me:

C:\Documents and Settings\thud\Desktop\Downloads\97red>convert cooper.pdf cooper.jpg
convert: `%s': %s "gswin32c.exe" -q -dQUIET -dPARANOIDSAFER -dBATCH -dNOPAUSE -d NOPROMPT -dMaxBitmap=500000000 -dEPSCrop -dAlignToPixels=0 -dGridFitTT=0 "-sDEVICE=pnmraw" -dTextAlphaBits=4 -dGraphicsAlphaBits=4 "-r72x72"  "-sOutputFile=C:/DOCUME~1/thud/LOCALS~1/Temp/magick-UtqkGDcw" "-fC:/DOCUME~1/thud/LOCALS~1/Temp/magick-MpE4YxWI" "-fC:/DOCUME~1/thud/LOCALS~1/Temp/magick-z6ByBicB".
convert: Postscript delegate failed `cooper.pdf': No such file or directory.
convert: missing an image filename `cooper.jpg'.

The Windows install was failing to find its Ghostscript delegate. Rather than chase that down, I solved it fairly quickly by switching over to my Linux box:

[thud@adept ~/tmp]> sudo yum install ImageMagick
Setting up Install Process
Parsing package install arguments
Package ImageMagick- already installed and latest version
Nothing to do
[thud@adept ~/tmp]> convert -density 600 cooper.pdf cooper.jpg

Which is probably what I should have tried in the first place.
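Since the article I found was about batch conversion, the one-file command extends naturally to a loop. A sketch, with an arbitrary density and example file names:

```shell
# Convert every PDF in the current directory to a JPG of the same name.
# 300 dpi is a made-up middle ground between print quality and file size.
for f in *.pdf; do
    [ -e "$f" ] || continue   # guard against the glob matching nothing
    convert -density 300 "$f" "${f%.pdf}.jpg"
done
```

The ${f%.pdf} expansion strips the extension so cooper.pdf becomes cooper.jpg.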

Originally posted on Kool Aid Served Daily
Copyright (C) 2008, Kool Aid Served Daily

Tuesday Aug 26, 2008

Updated connectathon.org

I just revamped connectathon.org. I added a style sheet, which let me get rid of the all-encompassing table I had been using for layout. I've got left, content, and right styles defined.

I'm still using the grand table approach over at Serialized Science Fiction. But I had just created Blitz United '97 Red for my son's soccer team, and I wanted to do away with the table. So I took the Greenline style for WordPress.org from GPS Gazette and modified it to fit my needs. (I had been using it for Behind The Scenes and liked the layout.)

When I needed to update connectathon.org for 2009, I decided to further modify the style sheet for that site.

I know I could probably use PHP or something to consistently wrap the header, left, right, and footer around the content, but I thought that was too heavy-handed. So I fell back to pulling those parts out and using some simple scripts to rebuild the final HTML files. Basically I use cat and sed to modify the files. I didn't worry about changing the titles, which is a bit more complex, but just changed the year in the different talksXX/index.html files.
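The rebuild amounts to concatenating the shared fragments around each page's content and patching the year with sed. A self-contained sketch; the fragment file names and their contents are stand-ins, not the real site files:

```shell
# Demo in a scratch directory with made-up fragments; on the real site
# these would be the header/footer pulled out of the hand-written pages.
cd "$(mktemp -d)"
printf '<html><body>\n' > header.html
printf '<p>Connectathon 2008 talks</p>\n' > content.html
printf '</body></html>\n' > footer.html

# cat wraps the shared fragments around the page content...
cat header.html content.html footer.html > index.html
# ...and sed bumps the year for the new event.
sed -i 's/2008/2009/' index.html
```

No templating engine, no database: just files and two classic Unix tools.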



