Tuesday Jan 16, 2007

Squid

I have finally installed a transparent caching server on the home server, mainly as it provides an easy way to block unsuitable sites from the kids.

I added this line to ipnat.conf; recall that my internal network is on nge0 and the internet lives on rtls0:

rdr nge0 0.0.0.0/0 port 80 -> 127.0.0.1 port 8080 tcp
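To pick up the new rule without a reboot, reloading the NAT rules should be enough; a sketch, assuming IP Filter is already enabled and the rules live in /etc/ipf/ipnat.conf:

# ipnat -CF -f /etc/ipf/ipnat.conf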

The proxy server is listening on port 8080.
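For squid to accept the redirected connections it has to listen on that port in interception mode. Assuming squid 2.6, the relevant squid.conf line would be something like this (older releases spelt it with the httpd_accel_* directives instead):

http_port 8080 transparent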


Then I took the squid source and built it with these options:


CC=cc ./configure --prefix=/opt/squid --enable-ipf-transparent --enable-ssl

Whilst I could have used the package from blastwave.org, I would like to wean the system off blastwave packages as they pull in lots of duplicated libraries when used on Solaris 10 or, as in my case, Nevada.


The squid cache is stored in its own ZFS file system, /tank/squid/cache, as you would expect; however, thanks to the way I have laid out the file systems it does not get snapshotted, so it won't chew through disk.
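In terms of commands the layout amounts to something like this, with the cache file system simply left out of whatever snapshot scheme covers the rest of the pool (a sketch using the dataset names from this post):

# zfs create tank/squid
# zfs create tank/squid/cache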


Then, using the work that Trev has done, I now have a working manifest and start script.



Thursday Sep 07, 2006

ddclient meets SMF

Nice quick bit of progress on the home server today, and one more service off the Qube.

I downloaded ddclient and copied over the configuration from the Qube. Then I just had to write an smf manifest:

<?xml version="1.0"?>

<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">

<service_bundle type='manifest' name='ddclient'>

<service
        name='network/ddclient'
        type='service'
        version='1'>

        <create_default_instance enabled='true' />

        <dependency
                name='network'
                grouping='require_all'
                restart_on='none'
                type='service'>
                <service_fmri value='svc:/network/initial:default' />
        </dependency>

        <exec_method
                type='method'
                name='start'
                exec='/usr/sbin/ddclient'
                timeout_seconds='0' />

        <exec_method
                type='method'
                name='stop'
                exec=':kill -15'
                timeout_seconds='3' />

        <stability value='Unstable'/>

        <template>
                <common_name>
                        <loctext xml:lang='C'>Dynamic DNS client</loctext>
                </common_name>
        </template>
</service>
</service_bundle>

Then the usual 'svccfg import ddclient.xml' and we are away:

# svcs -x ddclient
svc:/network/ddclient:default (Dynamic DNS client)
 State: online since Thu Sep 07 22:36:47 2006
   See: /var/svc/log/network-ddclient:default.log
Impact: None.
#
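If the ddclient configuration changes later, bouncing the service picks the changes up (assuming the config lives in ddclient's usual default location rather than anywhere the manifest knows about):

# svcadm restart network/ddclient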



Tuesday Feb 08, 2005

smf meets nis_cachemgr

If you use NIS+ and reboot the system you will know that occasionally the files in /var/nis get corrupted and nis_cachemgr will dump core. So many people opt for starting nis_cachemgr with the flag “-i” so that it does not use the cache at start time and instead goes and gets a new one.

So how do you do this with smf? Oddly there is no option in the manifest to set this:

# svccfg export nisplus
<?xml version='1.0'?>
<!DOCTYPE service_bundle SYSTEM '/usr/share/lib/xml/dtd/service_bundle.dtd.1'>
<service_bundle type='manifest' name='export'>
  <service name='network/rpc/nisplus' type='service' version='0'>
    <dependency name='keyserv' grouping='require_all' restart_on='none' type='service'>
      <service_fmri value='svc:/network/rpc/keyserv'/>
    </dependency>
    <exec_method name='start' type='method' exec='/lib/svc/method/nisplus' timeout_seconds='60'>
      <method_context/>
    </exec_method>
    <exec_method name='stop' type='method' exec=':kill' timeout_seconds='60'>
      <method_context/>
    </exec_method>
    <instance name='default' enabled='true'>
      <property_group name='application' type='application'>
        <stability value='Unstable'/>
        <propval name='emulate_yp' type='boolean' value='false'/>
      </property_group>
    </instance>
    <stability value='Unstable'/>
    <template>
      <common_name>
        <loctext xml:lang='C'>NIS+</loctext>
      </common_name>
      <documentation>
        <manpage title='rpc.nisd' section='1M' manpath='/usr/share/man'/>
      </documentation>
    </template>
  </service>
</service_bundle>



However, looking in “/lib/svc/method/nisplus”, there is a property that would be used if set:

        cache=`/usr/bin/svcprop -p application_ovr/clear_cache $SMF_FMRI \
            2>/dev/null`
        if [ $? != 0 ]; then
                cache=`/usr/bin/svcprop -p application/clear_cache $SMF_FMRI \
                    2>/dev/null`
        fi

        [ "$cache" = "true" ] && cachemgr_flags="$cachemgr_flags -i"

So if you set “application_ovr/clear_cache” or “application/clear_cache” to true you will get the -i option.

# pgrep -fl nis_cache
  260 /usr/sbin/nis_cachemgr
# svccfg -s svc:/network/rpc/nisplus:default \
    setprop application/clear_cache = boolean: "true"
# svcadm refresh  svc:/network/rpc/nisplus:default
# svcprop -p  application/clear_cache svc:/network/rpc/nisplus:default
true
# svcadm restart svc:/network/rpc/nisplus
# pgrep -fl nis_cach  
1788 /usr/sbin/nis_cachemgr -i
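The “application_ovr” group is presumably there so you can override the delivered “application” group without touching it, but it does not exist by default, so it has to be created first. A sketch:

# svccfg -s svc:/network/rpc/nisplus:default
svc:/network/rpc/nisplus:default> addpg application_ovr application
svc:/network/rpc/nisplus:default> setprop application_ovr/clear_cache = boolean: true
svc:/network/rpc/nisplus:default> exit
# svcadm refresh svc:/network/rpc/nisplus:default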



I'm sure this is all crystal clear in the docs.

Wednesday Jan 26, 2005

My first smf service.

With some trepidation I set about porting a very simple rc script into smf. It seems I should not have been overly concerned. The service in question was a very small Postgres database, which on Solaris 9 had been started from rc3.d. Moving the rc script over to 10 worked, but I wanted a fuller smf experience so decided to create a manifest and import that into smf on the zone running the database.

Starting with the start part of the rc script this became:

#!/sbin/sh
#
# Start method for the postgres service. The postgres/* properties
# are defined in the manifest below.

FMRI="svc:/appl/postgres"

# Fetch the value of a property from the service configuration.
getproparg() {
        val=`svcprop -p $1 $FMRI`
        [ -n "$val" ] && echo $val
}

p=`getproparg postgres/path`
d=`getproparg postgres/dbdir`
u=`getproparg postgres/user`

PATH=$p:${PATH}
export PATH

# Run postmaster as the configured user; -S detaches it silently
# and -i enables TCP/IP connections.
exec /bin/su $u -c "postmaster -i -S -D $d"

It struck me that we are no longer confined to the Bourne shell at this point. As long as we depend on the file systems being mounted we could use a more exotic shell, though never the csh. All the possible options that postgres uses are defined as properties. Then this manifest can be imported and we are away:


<?xml version="1.0"?>

<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">

<service_bundle type='manifest' name='postgres'>

<service
        name='appl/postgres'
        type='service'
        version='1'>

        <create_default_instance enabled='true' />

        <dependency
                name='autofs'
                grouping='require_all'
                restart_on='none'
                type='service'>
                <service_fmri value='svc:/system/filesystem/autofs' />
        </dependency>

        <exec_method
                type='method'
                name='start'
                exec='/postgres/svc/method/postgres'
                timeout_seconds='0' />

        <exec_method
                type='method'
                name='stop'
                exec=':kill -15'
                timeout_seconds='3' />

        <property_group name='postgres' type='application'>
                <propval name='user' type='astring' value='postgres' />
                <propval name='path' type='astring' value='/usr/local/pgsql/bin' />
                <propval name='dbdir' type='astring' value='/var/pgsql' />
        </property_group>
</service>
</service_bundle>

It depends on autofs so that if the binaries or the database files are automounted they will be available before the service starts.
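A side benefit of holding the options in a property group is that they can be changed without editing any files; a sketch (the new dbdir value is purely an example):

# svccfg -s appl/postgres setprop postgres/dbdir = astring: "/tank/pgsql"
# svcadm refresh appl/postgres:default
# svcadm restart appl/postgres:default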

Then:


# svccfg import postgres.xml

and we are away. Clearly I did not manage to get this right the first time, and someone more knowledgeable in smf will tell me I still have not, but this was easy to debug as the output of the script went into /var/svc/log/appl-postgres:default.log.


dredd 5 # svcs -l svc:/appl/postgres:default
fmri         svc:/appl/postgres:default
enabled      true
state        online
next_state   none
restarter    svc:/system/svc/restarter:default
contract_id  27355 
dependency   require_all/none svc:/system/filesystem/autofs (online)
dredd 6 # 

Now killing postgres using kill or pkill results in the database being restarted. Nice.

This was on a zone running build 67 so things may appear different in the released version.

About

This is the old blog of Chris Gerhard. It has mostly moved to http://chrisgerhard.wordpress.com
