Friday Feb 11, 2005

More SMF

Inspired by my success in getting Squid into the Service Management Framework for Solaris 10, I have also now managed to hack something together for Samba.

The script used to kick off and kill the samba daemons is here. This should be copied to a suitable system location - I chose the default service methods directory: /lib/svc/method/.

The service bundle file is here. You could save this to the system services directory: /var/svc/manifest/network, but I chose just to keep it in my homedir as I am only experimenting with it. I guess if you plan to use Live Upgrade in the future, it might be worthwhile copying this file over, as I'm not sure how or whether the services database is rebuilt during Live Upgrade.

The service bundle is loaded into the SMF using the command:

# svccfg validate samba.xml
# svccfg import samba.xml

Note that I have marked it as disabled by default. To enable it, run:

# svcadm enable network/samba
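For reference, the disabled-by-default behaviour mentioned above is just the enabled attribute on the default instance in the manifest. A minimal sketch of the relevant fragment (the service name matches the FMRI used with svcadm above; the rest follows the service_bundle(4) DTD, and this is an illustration rather than the full bundle):

```xml
<service name='network/samba' type='service' version='1'>
        <!-- Single instance, disabled until explicitly enabled with svcadm -->
        <create_default_instance enabled='false'/>
</service>
```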

That was easy, wasn't it?

Tuesday Feb 08, 2005

IBM's Cell is "OS Neutral"

There's a quote from IBM in this article [BBC News] about the PlayStation 3 which comments on the design of the chip...

IBM said Cell was "OS neutral" and would support multiple operating systems simultaneously but designers would not confirm if Microsoft's Windows was among those tested with the chip.

I wonder whether they have tested Solaris 10 on it, or whether they intended it just to run AIX and/or Linux?

Come to think of it, it's not even clear what the term "OS neutral" really means. Aren't all microprocessors OS neutral in reality, given that every OS requires some level of porting to the processor's architecture?

Friday Feb 04, 2005

My first foray into Service Management Framework

Having got Solaris 10 installed and running with the minimum of fuss, I decided that I ought to learn about the Service Management Framework stuff.

So, I decided that my first task (other than reading the docs ;)) would be to set up a suitable configuration for the Squid proxy I am using.

I am using the Solaris CSWsquid package from Blastwave, which comes with its own 'init' script that was copied over when I performed the Live Upgrade a couple of days ago. This is launched as a kind of pseudo-service by the SMF because it is legacy code.

The simplest way to start seemed to be to take a dump of the entire services database and then have a look at how things had been set up for a similar service (I chose the Apache 2 web server). You can get a dump of the entire services configuration using:

svccfg export > /tmp/svcs.xml

Before anyone mentions it, I show this for example only - the sources for the system services can be found in /var/svc/manifest/* :)

With some judicious re-reading of the smf(5), service_bundle(4) and svccfg(1M) man pages, along with the existing entry for Apache, here is what I came up with as the service bundle for the Squid proxy:

<?xml version="1.0"?>
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">

<service_bundle type='manifest' name='CSWsquid:squid'>

<service name='network/httpproxy' type='service' version='1'>

	<!-- Only want one instance and I want it enabled by default -->
	<create_default_instance enabled='true'/>

	<!-- Can't do anything without a network layer -->
	<dependency name='loopback' grouping='require_all'
		    restart_on='error' type='service'>
		<service_fmri value='svc:/network/loopback:default'/>
	</dependency>

	<dependency name='physical' grouping='require_all'
		    restart_on='error' type='service'>
		<service_fmri value='svc:/network/physical:default'/>
	</dependency>

	<!-- Squid needs to be able to resolve host names. -->
	<dependency name='name-services' grouping='require_all'
		    restart_on='refresh' type='service'>
		<service_fmri value='svc:/milestone/name-services'/>
	</dependency>

	<!-- Need to ensure that /opt is mounted -->
	<dependency name='fs-local' grouping='require_all'
		    restart_on='none' type='service'>
		<service_fmri value='svc:/system/filesystem/local'/>
	</dependency>

	<exec_method type='method' name='start'
		exec='/opt/csw/lib/svc/method/squid start'
		timeout_seconds='60' />

	<exec_method type='method' name='stop'
		exec='/opt/csw/lib/svc/method/squid stop'
		timeout_seconds='5' />

	<exec_method type='method' name='refresh'
		exec='/opt/csw/lib/svc/method/squid refresh'
		timeout_seconds='5' />

	<stability value='Unstable' />

	<template>
		<common_name>
			<loctext xml:lang='C'>CSW Squid Proxy</loctext>
		</common_name>
		<documentation>
			<manpage title='squid' manpath='/opt/csw/man' section='8' />
			<doc_link name=''
				uri='' />
		</documentation>
	</template>

</service>

</service_bundle>
I used a similar naming scheme for this file as is used for the standard Solaris services. However, I stuck it under /opt/csw/var rather than /var, so the file name was /opt/csw/var/svc/manifest/network/squid.xml. I then created a method script (/opt/csw/lib/svc/method/squid), based on /etc/init.d/cswsquid:


#!/sbin/sh

. /lib/svc/share/

# Paths as installed by the CSWsquid package
SQUID=/opt/csw/sbin/squid
CONF_FILE=/opt/csw/etc/squid.conf

[ ! -f ${CONF_FILE} ] && exit $SMF_EXIT_ERR_CONFIG

case "$1" in
'start')
        echo 'Starting squid server.'
        ${SQUID} -D &
        ;;
'stop')
        echo 'Stopping squid server.'
        ${SQUID} -k shutdown
        exit 0
        ;;
'refresh')
        echo 'Refreshing squid server.'
        ${SQUID} -k reconfigure
        exit 0
        ;;
*)
        echo "Usage: $0 { start | stop | refresh }"
        exit 1
        ;;
esac
exit 0

Then, it was a simple matter to first validate the XML service configuration:

svccfg validate /opt/csw/var/svc/manifest/network/squid.xml

and then import the configuration into the SMF, which starts squid (after I'd shut down the previously running version):

svccfg import /opt/csw/var/svc/manifest/network/squid.xml

A process listing shows the Squid server running and a quick check with my browser shows that it is working fine. Job done!

I'm not 100% sure I've got the dependencies correct, though. As you'll see from the method script, I test for the existence of the Squid config file before attempting to take action. This could instead be expressed as a dependency of type 'path' rather than 'service':

    <dependency name='config-file' type='path' grouping='require_all' restart_on='none'>
        <service_fmri value='file:///opt/csw/etc/squid.conf'/>
    </dependency>

As is common in all sorts of IT development, there is more than one way to skin this particular cat.

Of course, your mileage may vary if you copy this, and don't blame me if it all goes pear shaped - I'm just learning the stuff ;)

Thursday Feb 03, 2005

Solaris 10 is for me via Live Upgrade

I am indebted to Chris for pointing me in the direction of the Solaris Live Upgrade feature to update my system rather than copying 2.3GB of DVD image and burning a DVD. The great thing about Live Upgrade is that you can retain your existing Solaris install whilst building a new copy, thus allowing you to switch between the two easily.

I'd heard of Live Upgrade before, but only in passing, so there was a bit of a learning curve required. However, it turned out to be very simple, and the only slow part was performing the actual upgrade over a 1Mb ADSL link to Sun from my home.

I followed the instructions here and have to say that they were very useful, although there was a bit of jumping about between pages to get all of the information I needed. So here, in summary, is what I did:

  1. Create a new partition (slice) on my disk to take the new OS image. In this case, it was slice 4 of my single 72GB disk, and I made it approx 10GB
  2. lucreate -m /:/dev/dsk/c0t1d0s4:ufs -n solaris10

    This copies the existing system onto the new partition and names it "solaris10"

  3. Upgrade the live upgrade packages to their respective Solaris 10 versions:
    • pkgrm SUNWluu SUNWlur
    • pkgadd -d /net/install-server/export/install/sparc/os/10/latest/Solaris_10/Product SUNWlur SUNWluu
  4. luupgrade -u -n solaris10 -s /net/install-server/export/install/sparc/os/10/latest
  5. luactivate solaris10
  6. shutdown -y -g0 -i0
Then just type 'boot' at the OK prompt (on a SPARC workstation) and the system boots the Solaris 10 image.

I'm now running on Solaris 10 JDS rather than Solaris 9 and Gnome 2.0. Some of my Gnome configuration information failed to be read by JDS, so I'm having to change preferences on the fly and add launchers back into the panels, but all in all, everything went very smoothly.

A text file showing all of the output from the upgrade process is here.

Tuesday Feb 01, 2005

Solaris 10 - not yet for me

I have a paltry 1Mb ADSL connection to the Internet, but decided to have a go at downloading S10 rather than waiting for it to be delivered to my home through my Solaris Update contract with Sun.

So I went to the download site, clicked on the DVD option, and there's the first problem - 2.3GB for the 5 chunks that make up the S10 DVD. That's 8hrs at maxed-out DSL speed!!!

I thought I'd give it a go anyway - where's the harm in trying, even if it does take a couple of days to complete. However, my aspirations were very quickly curtailed when I realised that I was only getting under 20KB/s download speed instead of around 120KB/s. That's 8hrs per segment :(

Perhaps obviously, demand is high for this ground-breaking OS, so maybe I'll give it a few days to calm down, and let the customers get at it first.

Tuesday Jan 11, 2005

IBM frees 500 software patents

IBM have announced that they plan to release 500 software patents to the Open Source community.

What this apparently means (to quote the BBC News website) is:

IBM will continue to hold the 500 patents but it has pledged to seek no royalties from the patents.

The company said it would not place any restrictions on companies, groups or individuals who use them in open-source projects.

I'm not entirely sure whether or not software patents are a good thing. Certainly the views on this are pretty strong and polarised in the software industry, but I can see both sides of the argument (especially as I have a few software patents pending ;)).

I wonder what the response of Sun (particularly) and other companies will be? I also wonder whether these released patents are actually of any use to the Open Source community anyway?

Wednesday Dec 15, 2004

Firefox 1.0 stability

Well, having built FF from scratch for myself, I can safely say that it is no more stable than the "official" Sun contributed build.

I still get the occasional weird behaviour whereby, when I click a link, the content is displayed in a popped-up undecorated window (actually, it still has the window manager decorations, but no Mozilla decoration). If I click the 'X' to get rid of it, FF goes away. If I minimize it, it disappears altogether, but some time afterward (maybe one or two link clicks later), the browser simply crashes.

I haven't noticed much of a performance improvement in the v8plusb version either. However, I am running it on a reasonably fast box, and my 512k ADSL line is probably more of a performance hit than the browser content rendering speed.

Friday Dec 10, 2004

SPARC v9 Firefox build

Well, after succeeding in building a v8plusb architecture version of Firefox 1.0, I applied myself to the challenge of building a v9 version. It started off swimmingly after I'd discovered a little bug I'd introduced in a Makefile whilst messing with trying to get a v8plusb version to work. Then I got an error from the assembler when building a SPARC assembly file. I needed to specify the arch in ASFLAGS too...

ASFLAGS=-xarch=v9b; export ASFLAGS

Got a bit further, then got some errors to do with casting anonymous pointers to unsigned ints (you naughty boys!). This works okay on a 32-bit architecture, but the 64-bit compiler rightly complains that you can't cast a 64-bit pointer to a 32-bit unsigned int. So I had to add a little tweak to nsHttpConnectionMgr::OnMsgUpdateParam(...):

#ifdef __sparcv9
    PRUint64 param2 = (PRUint64)param;
#else
    PRUint32 param2 = PRUint32(param);
#endif
    PRUint16 name  = (PRUint32(param2) & 0xFFFF0000) >> 16;
    PRUint16 value =  PRUint32(param2) & 0x0000FFFF;

The build carried on for a while from there and then fell over with another anonymous-pointer-to-int cast. This time, the culprit was Glib's GPOINTER_TO_INT macro. It took me an age to discover that the include file which contains this declaration is not in /usr/include/glib-2.0 as anticipated, but in /usr/lib/glib-2.0/include. However, I needed the sparcv9 version, which was in /usr/lib/sparcv9/glib-2.0/include. I had to make a quick hack to the Makefile to add this directory to the INCLUDES list - I'm not smart enough to figure out where in the complexity of autoconf it needs to go.

A few seconds more of build, then another failure in the same directory. This time we have nsDragService.cpp casting a GdkAtom to an int. GdkAtom is actually a pointer, so this was why it was failing. I'm not sure what the correct fix is here, but I assumed that Glib's GPOINTER_TO_INT() macro would do the trick, so I changed all of the offending casts to use that.

Restart the build, off for a cup of tea and do something else for a few minutes.

Alas, the anticipated downfall came when the build tried to glue all of the gtk code into a shared object. Unfortunately, the code references a number of GNOME libraries which don't have 64-bit counterparts on Solaris 9:


Short of downloading the source for each of these and building 64-bit versions, I'm not sure what I can do to rectify this one. I did check Blastwave, but they too only do 32-bit binaries for these libraries :(

I'm not defeated yet, but I don't expect that getting the above libraries as 64-bit objects will be the end of it.

Tuesday Nov 30, 2004

I'm disappointed with Looking Glass

Well, not Looking Glass per se, but the Open Source website.

I thought it would be nice to have a crack at getting Looking Glass to run on my Solaris (SPARC) workstation, and finding nothing within Sun that would help, I went off to the Looking Glass Development Community, registered and proceeded to have a little look around.

Well, as far as I can make out from the forums, you can download and build the Looking Glass core, but if you want it to work with native (non-Java) apps on Solaris and specifically SPARC, then you are out of luck. It appears that Looking Glass relies on some extensions in the Xorg X11 server, but this hasn't as yet (it appears) been ported to Solaris. Indeed, as of August, it apparently wasn't even a staffed project.

So, here's my beef: Sun comes up with a rather innovative new technology, yet Solaris appears to be the last platform (well, maybe excepting one ;)) on which it will run. Whilst I appreciate that gaining critical mass is probably better served by making Linux (x86) the initial development/deployment platform, it irks me that those of us without a Linux box are so badly served.

If I had to guess, I would say that Sun are probably developing the native code for Solaris in-house. If so, why not tell everyone about it rather than seeming to skulk?

Now, maybe someone reading this knows better about the state of a Solaris (SPARC) port and my understanding will yet be corrected. Fingers crossed!

Friday Nov 19, 2004

Rob Gingell leaves Sun

Although I barely knew him and he certainly didn't know me, I'm saddened by Rob's departure.

Much of where Sun is today can be traced back to Rob's work, and whilst I'm sure others will step up to the plate (as the Americans quaintly put it), not many will match his ability to marry technical innovation with practicality as well as he did.

Good luck Rob!

Compilers and platforms

I was reading a blog the other day about Solaris on x86 and SPARC and the benefits to customers of being able to simply recompile their source to get it to run on the other platform, and it got me thinking that there must be an easier way.

On an internal mail alias here at Sun, I read something a couple of weeks ago which talked about how much better the Sun compilers are than GCC (or perhaps, how much faster the code produced is), and that one of the advantages the Sun compilers have is the generation of an intermediate code representation (ILF?) prior to (IIRC) the final code generation step.

So what I'm thinking is why can't users simply compile their code using the Sun compilers and have it leave the result in this intermediate form? We could then have an interpreter/translator/code-generator on each platform which when called to run the "binary" converts to platform specific assembler and runs it.

I'm reliably informed by a colleague that this is pretty much what .NET does, with the CLR providing the actual platform implementation, and the idea has been around much longer than that (probably even longer than I've been in computing).

It just strikes me that it would make life a lot easier for customers and ISVs, and Sun could then write the CLRs for the various platforms and OSes. In fact, if Sun were really keen on Open Source, they could even publish the ILF spec and allow people to write their own runtime environments.

I'm sure that there are some pretty substantial downsides to this idea, otherwise people in Sun with far bigger brains than me would have already put this into production, but it's food for thought for any of you compiler bods out there.



