Tuesday May 03, 2016

Zones with latest Java

Java, Solaris 11 and Solaris Zones


Java is seamlessly integrated into the Solaris 11 IPS packaging system, so you can use the standard repository commands to manage the installation and configuration of Java.


Install Java 7 (JRE and JDK)


# pkg install jre-7 jdk-7

Install Java 8 (JRE and JDK)


# pkg install jre-8 jdk-8

Considerations


These commands install the incorporation version of Java defined by the Solaris 11 base release or SRU (Support Repository Update). That version, however, doesn't contain the latest updates to Java; to receive them, you need to tell the packaging system that the Java packages should not be limited to the SRU versions:



# pkg change-facet version-lock.consolidation/java-8/java-8-incorporation=false

# pkg update jre-8 jdk-8



# pkg change-facet version-lock.consolidation/java-7/java-7-incorporation=false

# pkg update jre-7 jdk-7


and this will bring your system up to date with the latest Java version.
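To double-check the result, you can inspect the facet state and the installed packages (a sketch; the facet and package names are the ones used in the commands above):

```shell
# Verify that the incorporation facet is now unlocked and check which
# Java version ended up installed.
pkg facet | grep java-8-incorporation
pkg list jre-8 jdk-8
java -version
```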




FWIW, expanding a bit on this concept: if you need to select the latest Java IPS package from your repository when automating the installation of multiple zones, I've found the following template file helpful:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1">
<auto_install>
    <ai_instance name="zone_default">
        <target>
            <logical>
                <zpool name="rpool">
                    <filesystem name="export" mountpoint="/export"/>
                    <filesystem name="export/home"/>
                    <be name="solaris">
                        <options>
                            <option name="compression" value="on"/>
                        </options>
                    </be>
                </zpool>
            </logical>
        </target>
        <software type="IPS">
            <destination>
                <image>
                    <!-- Specify locales to install -->
                    <facet set="false">facet.locale.*</facet>
                    <facet set="false">facet.locale.de</facet>
                    <facet set="false">facet.locale.de_DE</facet>
                    <facet set="true">facet.locale.en</facet>
                    <facet set="true">facet.locale.en_US</facet>
                    <facet set="false">facet.locale.es</facet>
                    <facet set="false">facet.locale.es_ES</facet>
                    <facet set="false">facet.locale.fr</facet>
                    <facet set="false">facet.locale.fr_FR</facet>
                    <facet set="false">facet.locale.it</facet>
                    <facet set="false">facet.locale.it_IT</facet>
                    <facet set="false">facet.locale.ja</facet>
                    <facet set="false">facet.locale.ja_*</facet>
                    <facet set="false">facet.locale.ko</facet>
                    <facet set="false">facet.locale.ko_*</facet>
                    <facet set="false">facet.locale.pt</facet>
                    <facet set="false">facet.locale.pt_BR</facet>
                    <facet set="false">facet.locale.zh</facet>
                    <facet set="false">facet.locale.zh_CN</facet>
                    <facet set="false">facet.locale.zh_TW</facet>
                    <!-- Don't install the documentation -->
                    <facet set="false">facet.doc</facet>
                    <facet set="false">facet.doc.*</facet>
                    <!-- Unlock the latest Java on IPS -->
                    <facet set="false">facet.version-lock.consolidation/java-7/java-7-incorporation</facet>
                </image>
            </destination>
            <!-- Install required software packages: -->
            <software_data action="install">
                <name>pkg:/group/system/solaris-small-server</name>
                <name>pkg:/system/locale</name>
                <name>pkg:/editor/vim</name>
                <name>pkg:/text/gnu-sed</name>
                <name>pkg:/network/telnet</name>
                <name>pkg:/developer/java/jdk-7</name>
                <name>pkg:/system/fault-management/snmp-notify</name>
            </software_data>
        </software>
    </ai_instance>
</auto_install>


This unlocks the facet for your Java installation directly during zone creation.
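As a sketch of how the template can be used, you can point the zone installer at it with the -m option (the zone name and manifest path here are assumptions, not from the original setup):

```shell
# Hypothetical usage: create a zone and install it with the AI manifest
# shown above (zone name and file path are assumptions).
zonecfg -z myzone "create; set zonepath=/zones/myzone"
zoneadm -z myzone install -m /var/tmp/zone_default.xml
```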

Tuesday Apr 26, 2016

Renaming a zone and changing the mountpoint

Zones are great, since they allow you to run and manage all of your applications in isolated containers... but sometimes, right at the end of the installation, you realize that you could have picked better naming conventions for the zones and for the zpools/datasets. So you start scratching your head, repeating to yourself that reinstalling everything from scratch is not an option... Here's the scenario: my zones are both hosted on a zpool named BADPOOL, which I'm going to rename to ZonesPool:


# zpool list
NAME       SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
BADPOOL   15.9G  5.87G  10.0G  36%  1.00x  ONLINE  -
rpool     63.5G  16.8G  46.7G  26%  1.00x  ONLINE  -
# zfs list |grep BADPOOL
BADPOOL                                          5.87G  9.76G    31K  legacy
BADPOOL/old                                    2.93G  9.76G    32K  /zones/old
BADPOOL/old/rpool                              2.93G  9.76G    31K  /rpool
BADPOOL/old/rpool/ROOT                         1.93G  9.76G    31K  legacy
BADPOOL/old/rpool/ROOT/solaris-15              1.93G  9.76G  1.73G  /zones/old/root
BADPOOL/old/rpool/ROOT/solaris-15/var           206M  9.76G   206M  /zones/old/root/var
BADPOOL/old/rpool/VARSHARE                     1.13M  9.76G  1.07M  /var/share
BADPOOL/old/rpool/VARSHARE/pkg                   63K  9.76G    32K  /var/share/pkg
BADPOOL/old/rpool/VARSHARE/pkg/repositories      31K  9.76G    31K  /var/share/pkg/repositories
BADPOOL/old/rpool/app                          1022M  9.76G  1022M  /app
BADPOOL/old/rpool/export                        120K  9.76G    32K  /export
BADPOOL/old/rpool/export/home                    88K  9.76G  55.5K  /export/home
BADPOOL/old/rpool/export/home/admin            32.5K  9.76G  32.5K  /export/home/admin
BADPOOL/bad                                    2.94G  9.76G    32K  /zones/bad
BADPOOL/bad/rpool                              2.94G  9.76G    31K  /rpool
BADPOOL/bad/rpool/ROOT                         1.92G  9.76G    31K  legacy
BADPOOL/bad/rpool/ROOT/solaris-15              1.92G  9.76G  1.72G  /zones/bad/root
BADPOOL/bad/rpool/ROOT/solaris-15/var           204M  9.76G   204M  /zones/bad/root/var
BADPOOL/bad/rpool/VARSHARE                     1.13M  9.76G  1.07M  /var/share
BADPOOL/bad/rpool/VARSHARE/pkg                   63K  9.76G    32K  /var/share/pkg
BADPOOL/bad/rpool/VARSHARE/pkg/repositories      31K  9.76G    31K  /var/share/pkg/repositories
BADPOOL/bad/rpool/app                          1.02G  9.76G  1.02G  /app
BADPOOL/bad/rpool/export                        110K  9.76G    32K  /export
BADPOOL/bad/rpool/export/home                  78.5K  9.76G    46K  /export/home
BADPOOL/bad/rpool/export/home/admin            32.5K  9.76G  32.5K  /export/home/admin

The current zone names are old and bad, and I'd like to rename them to this and that; first of all, of course, the zones must be shut down:


# zoneadm list -civ
  ID NAME             STATUS      PATH                         BRAND      IP
   0 global          running     /                          solaris    shared
   - old             installed   /zones/old                 solaris    shared
   - bad             installed   /zones/bad                 solaris    shared

Now, let's deal with the zpool part first. In ZFS there's no way to rename a zpool that is already imported; the only way is to export the pool and re-import it under the new, correct name:


# zpool list BADPOOL
NAME      SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
BADPOOL  15.9G  5.87G  10.0G  36%  1.00x  ONLINE  -
# zpool export BADPOOL
# zpool import BADPOOL ZonesPool
# zpool list ZonesPool
NAME        SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
ZonesPool  15.9G  5.87G  10.0G  36%  1.00x  ONLINE  -

And that was easy ;-) But the various datasets still reflect the previous naming, with the old and bad names and mountpoints:


# zfs list |grep ZonesPool
NAME                                          USED  AVAIL  REFER  MOUNTPOINT
ZonesPool                                           5.87G  9.76G    31K  legacy
ZonesPool/old                                     2.93G  9.76G    32K  /zones/old
ZonesPool/old/rpool                               2.93G  9.76G    31K  /rpool
ZonesPool/old/rpool/ROOT                          1.93G  9.76G    31K  legacy
ZonesPool/old/rpool/ROOT/solaris-15               1.93G  9.76G  1.73G  /
ZonesPool/old/rpool/ROOT/solaris-15/var            206M  9.76G   206M  /var
ZonesPool/old/rpool/VARSHARE                      1.13M  9.76G  1.07M  /var/share
ZonesPool/old/rpool/VARSHARE/pkg                    63K  9.76G    32K  /var/share/pkg
ZonesPool/old/rpool/VARSHARE/pkg/repositories       31K  9.76G    31K  /var/share/pkg/repositories
ZonesPool/old/rpool/app                           1022M  9.76G  1022M  /app
ZonesPool/old/rpool/export                         120K  9.76G    32K  /export
ZonesPool/old/rpool/export/home                     88K  9.76G  55.5K  /export/home
ZonesPool/old/rpool/export/home/admin             32.5K  9.76G  32.5K  /export/home/admin
ZonesPool/bad                                     2.94G  9.76G    32K  /zones/bad
ZonesPool/bad/rpool                               2.94G  9.76G    31K  /rpool
ZonesPool/bad/rpool/ROOT                          1.92G  9.76G    31K  legacy
ZonesPool/bad/rpool/ROOT/solaris-15               1.92G  9.76G  1.72G  /
ZonesPool/bad/rpool/ROOT/solaris-15/var            204M  9.76G   204M  /var
ZonesPool/bad/rpool/VARSHARE                      1.13M  9.76G  1.07M  /var/share
ZonesPool/bad/rpool/VARSHARE/pkg                    63K  9.76G    32K  /var/share/pkg
ZonesPool/bad/rpool/VARSHARE/pkg/repositories       31K  9.76G    31K  /var/share/pkg/repositories
ZonesPool/bad/rpool/app                           1.02G  9.76G  1.02G  /app
ZonesPool/bad/rpool/export                         110K  9.76G    32K  /export
ZonesPool/bad/rpool/export/home                   78.5K  9.76G    46K  /export/home
ZonesPool/bad/rpool/export/home/admin             32.5K  9.76G  32.5K  /export/home/admin

So we need to rename the datasets:


# zfs rename ZonesPool/old ZonesPool/this
# zfs rename ZonesPool/bad ZonesPool/that
# zfs list |grep ZonesPool
NAME                                          USED  AVAIL  REFER  MOUNTPOINT
ZonesPool                                           5.87G  9.76G    31K  legacy
ZonesPool/this                                      2.93G  9.76G    32K  /zones/old
ZonesPool/this/rpool                                2.93G  9.76G    31K  /rpool
ZonesPool/this/rpool/ROOT                           1.93G  9.76G    31K  legacy
ZonesPool/this/rpool/ROOT/solaris-15                1.93G  9.76G  1.73G  /
ZonesPool/this/rpool/ROOT/solaris-15/var             206M  9.76G   206M  /var
ZonesPool/this/rpool/VARSHARE                       1.13M  9.76G  1.07M  /var/share
ZonesPool/this/rpool/VARSHARE/pkg                     63K  9.76G    32K  /var/share/pkg
ZonesPool/this/rpool/VARSHARE/pkg/repositories        31K  9.76G    31K  /var/share/pkg/repositories
ZonesPool/this/rpool/app                            1022M  9.76G  1022M  /app
ZonesPool/this/rpool/export                          120K  9.76G    32K  /export
ZonesPool/this/rpool/export/home                      88K  9.76G  55.5K  /export/home
ZonesPool/this/rpool/export/home/admin              32.5K  9.76G  32.5K  /export/home/admin
ZonesPool/that                                      2.94G  9.76G    32K  /zones/bad
ZonesPool/that/rpool                                2.94G  9.76G    31K  /rpool
ZonesPool/that/rpool/ROOT                           1.92G  9.76G    31K  legacy
ZonesPool/that/rpool/ROOT/solaris-15                1.92G  9.76G  1.72G  /
ZonesPool/that/rpool/ROOT/solaris-15/var             204M  9.76G   204M  /var
ZonesPool/that/rpool/VARSHARE                       1.13M  9.76G  1.07M  /var/share
ZonesPool/that/rpool/VARSHARE/pkg                     63K  9.76G    32K  /var/share/pkg
ZonesPool/that/rpool/VARSHARE/pkg/repositories        31K  9.76G    31K  /var/share/pkg/repositories
ZonesPool/that/rpool/app                            1.02G  9.76G  1.02G  /app
ZonesPool/that/rpool/export                          110K  9.76G    32K  /export
ZonesPool/that/rpool/export/home                    78.5K  9.76G    46K  /export/home
ZonesPool/that/rpool/export/home/admin              32.5K  9.76G  32.5K  /export/home/admin

As well as the mount points:


# zfs set mountpoint=/zones/this ZonesPool/this
# zfs set mountpoint=/zones/that ZonesPool/that
# zfs list |grep ZonesPool
NAME                                          USED  AVAIL  REFER  MOUNTPOINT
ZonesPool                                           5.87G  9.76G    31K  legacy
ZonesPool/this                                      2.93G  9.76G    32K  /zones/this
ZonesPool/this/rpool                                2.93G  9.76G    31K  /rpool
ZonesPool/this/rpool/ROOT                           1.93G  9.76G    31K  legacy
ZonesPool/this/rpool/ROOT/solaris-15                1.93G  9.76G  1.73G  /
ZonesPool/this/rpool/ROOT/solaris-15/var             206M  9.76G   206M  /var
ZonesPool/this/rpool/VARSHARE                       1.13M  9.76G  1.07M  /var/share
ZonesPool/this/rpool/VARSHARE/pkg                     63K  9.76G    32K  /var/share/pkg
ZonesPool/this/rpool/VARSHARE/pkg/repositories        31K  9.76G    31K  /var/share/pkg/repositories
ZonesPool/this/rpool/app                            1022M  9.76G  1022M  /app
ZonesPool/this/rpool/export                          120K  9.76G    32K  /export
ZonesPool/this/rpool/export/home                      88K  9.76G  55.5K  /export/home
ZonesPool/this/rpool/export/home/admin              32.5K  9.76G  32.5K  /export/home/admin
ZonesPool/that                                      2.94G  9.76G    32K  /zones/that
ZonesPool/that/rpool                                2.94G  9.76G    31K  /rpool
ZonesPool/that/rpool/ROOT                           1.92G  9.76G    31K  legacy
ZonesPool/that/rpool/ROOT/solaris-15                1.92G  9.76G  1.72G  /
ZonesPool/that/rpool/ROOT/solaris-15/var             204M  9.76G   204M  /var
ZonesPool/that/rpool/VARSHARE                       1.13M  9.76G  1.07M  /var/share
ZonesPool/that/rpool/VARSHARE/pkg                     63K  9.76G    32K  /var/share/pkg
ZonesPool/that/rpool/VARSHARE/pkg/repositories        31K  9.76G    31K  /var/share/pkg/repositories
ZonesPool/that/rpool/app                            1.02G  9.76G  1.02G  /app
ZonesPool/that/rpool/export                          110K  9.76G    32K  /export
ZonesPool/that/rpool/export/home                    78.5K  9.76G    46K  /export/home
ZonesPool/that/rpool/export/home/admin              32.5K  9.76G  32.5K  /export/home/admin

Now that we have the filesystems in place, we still need to 'refine' the zones, since their configuration still carries the old names and definitions:


# zoneadm list -civ
  ID NAME             STATUS      PATH                         BRAND      IP
   0 global           running     /                            solaris    shared
   - old             installed   /zones/old                 solaris    shared
   - bad             installed   /zones/bad                 solaris    shared
# zoneadm -z old rename this
# zoneadm -z bad rename that
# zoneadm list -civ
  ID NAME             STATUS      PATH                         BRAND      IP
   0 global           running     /                            solaris    shared
   - this             installed   /zones/old                 solaris    shared
   - that             installed   /zones/bad                 solaris    shared

After renaming the zones, we also have to change their PATH; this operation, too, cannot be done while a zone is in the 'installed' state and attached to a live system. Therefore we must first forcibly detach the zones (the -F option is needed to force the detach, since the dataset each zone was built on is not there anymore):


# zoneadm -z this detach -F
# zoneadm -z that detach -F
# zoneadm list -civ
  ID NAME             STATUS      PATH                         BRAND      IP
   0 global           running     /                            solaris    shared
   - this             configured  /zones/old                 solaris    shared
   - that             configured  /zones/bad                 solaris    shared

Now that both zones are detached, we can change the zone path:


# zonecfg -z this info zonepath
zonepath: /zones/old
# zonecfg -z that info zonepath
zonepath: /zones/bad
# zonecfg -z this set zonepath=/zones/this
# zonecfg -z that set zonepath=/zones/that
# zonecfg -z this info zonepath
zonepath: /zones/this
# zonecfg -z that info zonepath
zonepath: /zones/that

And verify that the change was made correctly; the zones are, of course, still in the 'configured' state:


# zoneadm list -civ
  ID NAME             STATUS      PATH                         BRAND      IP
   0 global           running     /                            solaris    shared
   - this             configured  /zones/this                  solaris    shared
   - that             configured  /zones/that                  solaris    shared

We're now ready to re-attach the zones to the live system:


# zoneadm -z this attach
Progress being logged to /var/log/zones/zoneadm.20160426T151603Z.this.attach
    Installing: Using existing zone boot environment
      Zone BE root dataset: ZonesPool/this/rpool/ROOT/solaris-15
                     Cache: Using /var/pkg/publisher.
  Updating non-global zone: Linking to image /.
Processing linked: 1/1 done
  Updating non-global zone: Auditing packages.
No updates necessary for this image. (zone:this)

  Updating non-global zone: Zone updated.
                    Result: Attach Succeeded.
Log saved in non-global zone as /zones/this/root/var/log/zones/zoneadm.20160426T151603Z.this.attach
# zoneadm -z that attach
Progress being logged to /var/log/zones/zoneadm.20160426T153312Z.that.attach
    Installing: Using existing zone boot environment
      Zone BE root dataset: ZonesPool/that/rpool/ROOT/solaris-15
                     Cache: Using /var/pkg/publisher.
  Updating non-global zone: Linking to image /.
Processing linked: 1/1 done
  Updating non-global zone: Auditing packages.
No updates necessary for this image. (zone:that)

  Updating non-global zone: Zone updated.
                    Result: Attach Succeeded.
Log saved in non-global zone as /zones/that/root/var/log/zones/zoneadm.20160426T153312Z.that.attach
# zoneadm list -civ
  ID NAME             STATUS      PATH                         BRAND      IP
   0 global           running     /                            solaris    shared
   - this             installed   /zones/this                  solaris    shared
   - that             installed   /zones/that                  solaris    shared


Finally, we need to boot the zones:


# zoneadm -z this boot
# zoneadm -z that boot
# zoneadm list -civ
  ID NAME             STATUS      PATH                         BRAND      IP
   0 global           running     /                            solaris    shared
   6 this             running     /zones/this                  solaris    shared
   7 that             running     /zones/that                  solaris    shared
# zlogin this zonename
this
# zlogin that zonename
that

And this concludes our journey through zones, zpools and datasets ;-)
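For reference, the whole walkthrough can be condensed into a single sketch (same pool and zone names as above; treat it as a summary under the same assumptions, to be run only with the zones halted, not as a tested script):

```shell
# Condensed recap of the rename procedure described above.
zpool export BADPOOL
zpool import BADPOOL ZonesPool

for pair in old:this bad:that; do
  src=${pair%%:*}   # old zone/dataset name
  dst=${pair##*:}   # new zone/dataset name
  zfs rename ZonesPool/$src ZonesPool/$dst
  zfs set mountpoint=/zones/$dst ZonesPool/$dst
  zoneadm -z $src rename $dst
  zoneadm -z $dst detach -F
  zonecfg -z $dst set zonepath=/zones/$dst
  zoneadm -z $dst attach
  zoneadm -z $dst boot
done
```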

Tuesday Apr 12, 2016

Sendmail SMF services in Solaris

Sendmail SMF services in Solaris


With the latest versions of Solaris (10u6 and 11.x), the classic sendmail program has been split into two separate daemons:


root@host1 # svcs -a |grep sendmail
online         20:31:17 svc:/network/smtp:sendmail
online         20:33:53 svc:/network/sendmail-client:default
root@host1 # svcs -p smtp:sendmail sendmail-client:default
STATE          STIME    FMRI
online         20:31:17 svc:/network/smtp:sendmail
               20:31:18    24564 sendmail
online         20:33:53 svc:/network/sendmail-client:default
               20:33:53    24574 sendmail
root@host1 # ps -aef|grep sendmail
    root 24595 24233   0 21:01:37 pts/1       0:00 grep sendmail
    root 24564     1   0 20:31:18 ?           0:00 /usr/lib/inet/sendmail -bd -q15m
   smmsp 24574     1   0 20:33:54 ?           0:00 /usr/lib/inet/sendmail -Ac -q15m


The first one is the real Message Transfer Agent (MTA), whereas the second one handles the client queues used by the local Message Submission Programs (MSP).
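A quick way to see the split on disk: the two daemons read different configuration files (the standard sendmail layout; the paths below are assumed from a stock Solaris install):

```shell
# MTA configuration, used by the '-bd -q15m' daemon:
ls -l /etc/mail/sendmail.cf
# MSP configuration, used by the '-Ac -q15m' client queue runner:
ls -l /etc/mail/submit.cf
```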


In an ideal world, with all Internet hosts up and running and all the connections between them working properly, the difference between these two instances wouldn't be so evident. But what happens in the real world?


In the real world, it can simply happen that the MTA is down for some reason (scheduled maintenance or an unexpected issue):


root@host1 # svcadm disable smtp:sendmail sendmail-client
root@host1 # svcs -a |grep -i sendmail
disabled       21:27:33 svc:/network/smtp:sendmail
disabled       21:27:33 svc:/network/sendmail-client:default


but a generic MSP (like mail or mailx) is still allowed to submit email:


root@host1 # mailq -Ac
/var/spool/clientmqueue is empty
                Total requests: 0
root@host1 # echo "Test message with mailx." | mailx -s "Test with both DOWN" user1@host1
user1@host1... Connecting to [127.0.0.1] via relay...
user1@host1... Deferred: Connection refused by [127.0.0.1]

root@host1 # echo "Test message with mailx." | mailx -s "Test with both DOWN" user2@host2
user2@host2... Connecting to [127.0.0.1] via relay...
user2@host2... Deferred: Connection refused by [127.0.0.1]


In this case, since the MTA cannot deliver the messages, the email is kept in the client queue:


root@host1 # mailq -Ac
                /var/spool/clientmqueue (2 requests)
-----Q-ID----- --Size-- -----Q-Time----- ------------Sender/Recipient-----------
u3CJUgsu024771       25 Tue Apr 12 21:30 root
                 (Deferred: Connection refused by [127.0.0.1])
                                         user1@host1
u3CJTMqr024760       25 Tue Apr 12 21:29 root
                 (Deferred: Connection refused by [127.0.0.1])
                                         user2@host2
                Total requests: 2


At this point, even if we enable the MTA and check the queues again, we will still see the same messages queued:


root@host1 # svcadm enable smtp:sendmail
root@host1 # svcs -a | grep sendmail
disabled       21:27:33 svc:/network/sendmail-client:default
online         21:32:54 svc:/network/smtp:sendmail
root@host1 # mailq -Ac
                /var/spool/clientmqueue (2 requests)
-----Q-ID----- --Size-- -----Q-Time----- ------------Sender/Recipient-----------
u3CJUgsu024771       25 Tue Apr 12 21:30 root
                 (Deferred: Connection refused by [127.0.0.1])
                                         user1@host1
u3CJTMqr024760       25 Tue Apr 12 21:29 root
                 (Deferred: Connection refused by [127.0.0.1])
                                         user2@host2
                Total requests: 2


The difference is that if we now try to send email messages, they will be delivered immediately, both locally and remotely:


root@host1 # echo "Test message with mailx." | mailx -s "Test smtp:sendmail UP and sendmail-client DOWN" user1@host1
root@host1 # echo "Test message with mailx." | mailx -s "Test smtp:sendmail UP and sendmail-client DOWN" user2@host2


user1@host1 will see:


user1@host1 $ mail
From root@host1 Tue Apr 12 22:20:33 2016
Date: Tue, 12 Apr 2016 22:20:32 +0200 (CEST)
From: Super-User <root@host1>
Message-Id: <201604122020.u3CKKWP5024867@host1>
To: user1@host1
Subject: Test smtp:sendmail UP and sendmail-client DOWN
Content-Length: 25

Test message with mailx.

? d
user1@host1 $


and the same thing will happen for user2@host2:


user2@host2 $ mail
From root@host1 Tue Apr 12 22:20:39 2016
Date: Tue, 12 Apr 2016 22:20:38 +0200 (CEST)
From: Super-User <root@host1>
Message-Id: <201604122020.u3CKKcmI024873@host1>
To: user2@host2
Subject: Test smtp:sendmail UP and sendmail-client DOWN
Content-Length: 25

Test message with mailx.

? d
user2@host2 $

but the messages submitted previously will still be in the client mail queues:


root@host1 # mailq -Ac
               /var/spool/clientmqueue (2 requests)
-----Q-ID----- --Size-- -----Q-Time----- ------------Sender/Recipient-----------
u3CJUgsu024771       25 Tue Apr 12 21:30 root
                 (Deferred: Connection refused by [127.0.0.1])
                                         user1@host1
u3CJTMqr024760       25 Tue Apr 12 21:29 root
                 (Deferred: Connection refused by [127.0.0.1])
                                         user2@host2
                Total requests: 2


Only when we enable the service that takes care of the client queue will these emails be delivered:


root@host1 # svcadm enable sendmail-client
root@host1 # svcadm refresh sendmail-client
root@host1 # mailq -Ac
/var/spool/clientmqueue is empty
                Total requests: 0
root@host1 #
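As a side note, the same flush can be triggered once, without enabling the service, by running the client queue manually (a standard sendmail invocation, shown here as a hypothetical alternative: -Ac selects the client queue, -q runs it once, -v is verbose):

```shell
# One-shot, verbose run of the client queue, then re-check it.
/usr/lib/inet/sendmail -Ac -q -v
mailq -Ac
```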


And we can get confirmation from the two users:


user1@host1 $ mail
From root@host1 Tue Apr 12 21:31:48 2016
Date: Tue, 12 Apr 2016 21:29:22 +0200 (CEST)
From: Super-User <root@host1>
Message-Id: <201604121930.u3CJUgsu024771@host1>
To: user1@host1
Subject: Test with both DOWN
Content-Length: 25

Test message with mailx.

?


and:


user2@host2 $ mail
From root@host1 Tue Apr 12 21:31:48 2016
Date: Tue, 12 Apr 2016 21:29:22 +0200 (CEST)
From: Super-User <root@host1>
Message-Id: <201604121929.u3CJTMqr024760@host1>
To: user2@host2
Subject: Test with both DOWN
Content-Length: 25

Test message with mailx.

?

... Piece of cake ;-)

Thursday Mar 17, 2016

CPU Clock Speed in Solaris

Sometimes, in a multi-CPU/core environment, you might want to check the actual speed of the various cores. A good starting point is the cpu_info module of the kstat command output, which, combined with some awk, can display exactly what you're looking for. Here is a sample from an x86_64 machine (actually an old but still faithful X4150):

# kstat -m cpu_info 5 | awk '/instance/{printf "CPU: %3s ---> ", $4}; /current_clock_Hz/ {print $2/1000000 " MHz"}'
CPU:   0 ---> 2003 MHz
CPU:   1 ---> 2003 MHz
CPU:   2 ---> 2003 MHz
CPU:   3 ---> 2003 MHz
CPU:   4 ---> 2003 MHz
CPU:   5 ---> 2003 MHz
CPU:   6 ---> 2003 MHz
CPU:   7 ---> 3166 MHz
CPU:   0 ---> 2003 MHz
CPU:   1 ---> 2003 MHz
CPU:   2 ---> 2003 MHz
CPU:   3 ---> 2003 MHz
CPU:   4 ---> 2003 MHz
CPU:   5 ---> 2003 MHz
CPU:   6 ---> 2003 MHz
CPU:   7 ---> 3166 MHz

Of course, you might want to skip the first interval... but you'll get a 'rolling update' of the current clock speed, in MHz, of your CPUs/cores.
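If you only need a one-shot reading, the same awk program works without the sampling interval (kstat itself is Solaris-only, so only the awk part can be exercised elsewhere):

```shell
# One-shot variant of the command above (no 5-second interval).
kstat -m cpu_info | awk '/instance/{printf "CPU: %3s ---> ", $4}
                         /current_clock_Hz/{print $2/1000000 " MHz"}'
```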

Tuesday Apr 17, 2012

Remote Desktop from Solaris to Windows

Building rdesktop on Solaris 10

[Read More]

Sunday Jan 09, 2011

trackerd

I've upgraded my laptop to the latest Oracle Solaris 11 Express (snv_151a x86) and, at first glance, I have to say it seems a good step forward compared to my previous OpenSolaris... but... Like all good nerds, I was exploring the new system, playing with configurations and installing the typical nerd software I need, when I stumbled on a process (eating a lot of CPU and RAM): /usr/bin/trackerd, which I had never seen on my previous OpenSolaris installation...
Nothing special, it's not a virus or an E.T.: it's just the default GNOME indexing/tracking tool, which from this release on is installed and enabled by default:


root@vesuvio:~# pkg info tracker
          Name: library/desktop/search/tracker
       Summary: Desktop search tool
   Description: Desktop search tool
      Category: Applications/System Utilities
         State: Installed
     Publisher: solaris
       Version: 0.5.11
 Build Release: 5.11
        Branch: 0.151.0.1
Packaging Date: Fri Nov 05 05:52:57 2010
          Size: 3.09 MB
          FMRI: pkg://solaris/library/desktop/search/tracker@0.5.11,5.11-0.151.0.1:20101105T055257Z
root@vesuvio:~#


Since I'm very conscious of my CPU clock cycles and RAM bits, and my nerd software doesn't like the CPU/memory spikes that could easily be triggered by that software, I simply removed the package:


root@vesuvio:~# pkg uninstall tracker
                Packages to remove:     1
           Create boot environment:    No
               Services to restart:     2
PHASE                                        ACTIONS
Removal Phase                                373/373

PHASE                                          ITEMS
Package State Update Phase                       1/1
Package Cache Update Phase                       1/1
Image State Update Phase                         2/2
root@vesuvio:~#



I admit this solution may sound a bit 'extreme', but I really don't like/use this piece of software. I don't like that kind of program running in the background, browsing and crawling the directories of your hard disk to index the content of your documents, pictures, emails, etc. It could be a nice feature on an average end-user desktop, but not on a laptop that I mainly use as my nerd-lab test bench ;-)


People interested in using this tool can find plenty of ways to throttle down its CPU/memory usage, exclude directories, assign specific paths to monitor, etc.:



  • Tracker Project home page on GNOME

  • HOWTO that explains how to customize the tracker daemon behaviour

Wednesday Nov 03, 2010

Oracle Directory Server Enterprise Edition on OpenSolaris b150, with DSCC7 in bundled Tomcat6

Objective: Install Oracle Directory Server Enterprise Edition (ODSEE11g) and DSCC7 on OpenSolaris without using privileged users.


Since we'll run the Directory Server instances unprivileged, let's create a group:

mm206378@vesuvio:~$ pfexec groupadd -g 389 oragrp

and a user with a password, so that it will be able to log in:

mm206378@vesuvio:~$ pfexec useradd -u 389 -g oragrp -d /opt/dsee7 oradir
mm206378@vesuvio:~$ pfexec passwd oradir
New Password: ********
Re-enter new Password: ********
passwd: password successfully changed for oradir
mm206378@vesuvio:~$


Since this user must be able to manage network services on privileged ports (TCP < 1024), we have to explicitly grant that privilege:

mm206378@vesuvio:~$ pfexec usermod -K defaultpriv=basic,net_privaddr oradir
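To double-check the grant, you can inspect the privilege set of a shell running as oradir (a sketch; the exact ppriv output format varies by release):

```shell
# Verify that net_privaddr is now in oradir's default privilege set.
pfexec su - oradir -c 'ppriv -v $$' | grep net_privaddr
```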

This test machine (my laptop, hostname 'vesuvio') has only one internal disk, so I won't create a dedicated zpool to host the binaries and/or the Directory Server instances. To ensure logical separation, we'll just create datasets to isolate the deployment:

mm206378@vesuvio:~$ pfexec zfs create -o mountpoint=/opt/dsee7 rpool/dsee7-bin
mm206378@vesuvio:~$ pfexec zfs create -o mountpoint=/opt/dsee7/var rpool/dsee7-var


and change the ownership of these datasets to user oradir:

mm206378@vesuvio:~$ ls -ltRa /opt/dsee7/

/opt/dsee7/:
total 5
drwxr-xr-x 3 root root 3 2010-11-12 00:12 .
drwxr-xr-x 2 root root 2 2010-11-12 00:12 var
drwxr-xr-x 8 root sys  8 2010-11-11 23:48 ..

/opt/dsee7/var:
total 3
drwxr-xr-x 3 root root 3 2010-11-12 00:12 ..
drwxr-xr-x 2 root root 2 2010-11-12 00:12 .
mm206378@vesuvio:~$ pfexec chown -R oradir:oragrp /opt/dsee7/
mm206378@vesuvio:~$ ls -ltRa /opt/dsee7/
/opt/dsee7/:
total 5
drwxr-xr-x 3 oradir oragrp 3 2010-11-12 00:12 .
drwxr-xr-x 2 oradir oragrp 2 2010-11-12 00:12 var
drwxr-xr-x 8 root   sys    8 2010-11-11 23:48 ..

/opt/dsee7/var:
total 3
drwxr-xr-x 3 oradir oragrp 3 2010-11-12 00:12 ..
drwxr-xr-x 2 oradir oragrp 2 2010-11-12 00:12 .
mm206378@vesuvio:~$


Uncompress the packages:

mm206378@vesuvio:~$ pfexec su - oradir
Oracle Corporation    SunOS 5.11    snv_150    October 2010
oradir@vesuvio:~$ pwd
/opt/dsee7
oradir@vesuvio:~$ ls -tlra
total 11
drwxr-xr-x   8 root     sys            8 Nov 11 23:48 ..
drwxr-xr-x   2 oradir   oragrp         2 Nov 12 00:12 var
drwxr-xr-x   3 oradir   oragrp         4 Nov 12 00:41 .
-rw-------   1 oradir   oragrp        18 Nov 12 00:41 .sh_history
oradir@vesuvio:~$ mkdir inst && cd inst
oradir@vesuvio:~/inst$ unzip -q /tmp/ODSEE11g-S10x86.zip
oradir@vesuvio:~$ ls -l /opt/dsee7/ && cd /opt
total 6
drwxr-xr-x   4 oradir   oragrp         7 Nov 12 00:42 inst
drwxr-xr-x   2 oradir   oragrp         2 Nov 12 00:12 var
oradir@vesuvio:/opt$ unzip -q dsee7/inst/ODSEE_ZIP_Distribution/sun-dsee7.zip
oradir@vesuvio:/opt$ ls -ltra dsee7/
total 38
drwxr-xr-x   7 oradir   oragrp        12 Apr 26  2010 jre
drwxr-xr-x   3 oradir   oragrp         6 Jun 30 23:09 include
drwxr-xr-x   2 oradir   oragrp         4 Jun 30 23:09 etc
drwxr-xr-x   6 oradir   oragrp         6 Jun 30 23:10 dsrk
drwxr-xr-x   8 root     sys            8 Nov 11 23:48 ..
drwxr-xr-x   4 oradir   oragrp         7 Nov 12 00:42 inst
drwxr-xr-x   4 oradir   oragrp         4 Nov 12 00:46 ext
drwxr-xr-x  10 oradir   oragrp        10 Nov 12 00:46 resources
drwxr-xr-x   3 oradir   oragrp         3 Nov 12 00:47 var
drwxr-xr-x  12 oradir   oragrp        13 Nov 12 00:47 .
drwxr-xr-x   7 oradir   oragrp        18 Nov 12 00:47 lib
drwxr-xr-x   4 oradir   oragrp        23 Nov 12 00:47 bin
-rw-------   1 oradir   oragrp       450 Nov 12 00:50 .sh_history
oradir@vesuvio:/opt$


Now we have to configure CACAO and the DSCC instance:


oradir@vesuvio:~/bin$ dsccsetup initialize



The initialization will start and we'll be asked for the admin user's credentials; at the end we'll have both CACAO and the ADS instance up and running:

oradir@vesuvio:~/bin$ ps -aef | grep oradir
  oradir  7936  7934   0 00:54:32 ?           0:08 /opt/dsee7/jre/bin/java -Xmx128M -Dcom.sun.management.jmxremote -Dfile.encoding
  oradir  8007  5002   0 01:00:51 pts/2       0:00 ps -aef
  oradir  8008  5002   0 01:00:51 pts/2       0:00 grep oradir
  oradir  5002  3339   0 00:41:41 pts/2       0:00 -sh
  oradir  7934     1   0 00:54:32 ?           0:00 /opt/dsee7/ext/cacao_2/usr/lib/cacao/lib/tools/launch -w /opt/dsee7/ext/cacao_2
  oradir  7958     1   0 00:54:44 ?           0:04 /opt/dsee7/lib/64/ns-slapd -D /opt/dsee7/var/dcc/ads -i /opt/dsee7/var/dcc/ads/
oradir@vesuvio:~/bin$

The web container hosting the Directory Service Control Center needs read access at least to /opt/dsee7/var/dcc/ads/config to fetch the basic information, so we'll switch the runtime user of the 'tomcat6' service to oradir (still an unprivileged user, whose only additional right is running servers on privileged ports, i.e. below 1024/TCP).

mm206378@vesuvio:~$ svccfg -s tomcat6
svc:/network/http:tomcat6> listprop start/user
start/user  astring  webservd
svc:/network/http:tomcat6> setprop start/user = astring: oradir
svc:/network/http:tomcat6> listprop start/group
start/group  astring  webservd
svc:/network/http:tomcat6> setprop start/group = astring: oragrp
svc:/network/http:tomcat6> refresh
svc:/network/http:tomcat6> end
mm206378@vesuvio:~$


We can now deploy the Directory Service Control Center manually:

oradir@vesuvio:/var/tomcat6/webapps/dscc7$ unzip -q /opt/dsee7/var/dscc7.war

and enable the service:

mm206378@vesuvio:~$ svcadm enable tomcat6

Now open a browser and navigate to


http://vesuvio:8080/dscc7


et voilà, DSCC7 is there. You can now log in and create/manage instances.



P.S.: I've tidied up this post following the suggestion in the first comment, and I found the following post extremely useful: Locking Down Apache.


The next logical step would be tuning the TCP/IP stack... but I've already covered those steps in a previous post.



Friday Oct 29, 2010

ZFS ARC Cache tuning for a laptop...

I have my laptop (Toshiba Tecra M5, Intel Core2 Duo @ 2 GHz, 2 GB RAM) running OpenSolaris (snv_150), and I've noticed that sometimes it becomes slow and unresponsive for a few seconds while the disk spins hard... a very simple probe showed the problem:


# kstat -m zfs -n arcstats -T d 2


I'll spare you the never-ending output, but the interesting numbers were the ones coming from c, c_max, c_min and size.


As I read in the ZFS Evil Tuning Guide:


[...] The ZFS Adaptive Replacement Cache (ARC) tries to use most of a system's
available memory to cache file system data. The default is to use all
of physical memory except 1 GB. As memory pressure increases, the ARC
relinquishes memory. [...]


My problem was that when launching many applications (typically at login, when you may start Firefox, Thunderbird, NetBeans, Acrobat Reader and OpenOffice almost sequentially) the laptop clogged up, with the disk spinning and the system almost unresponsive. I know my laptop has limited performance and is not the latest piece of hardware on the market, but when I launch the same applications under other OSes [both Ubuntu Linux 10.10 (64-bit) and WinXP SP3 (32-bit)] I don't have to wait that long, and the system feels more responsive.


Monitoring the size parameter of the ARC cache, I saw that it was always around 1 GB, while the applications were left with little available memory and were swapping to disk... this was not sane.


First I shrank the amount of RAM allocated to the ZFS ARC live (as explained in the ZFS Evil Tuning Guide), and since the performance and stability of the machine seemed improved, I set that value in the /etc/system file to make it persistent across reboots:


set zfs:zfs_arc_max = 822083584

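The value is expressed in bytes; a quick sanity check shows it corresponds to a 784 MiB cap, leaving the rest of this 2 GB laptop's memory to the applications:

```python
# zfs_arc_max is in bytes: the /etc/system value above caps the ARC at 784 MiB.
arc_max_bytes = 822083584
arc_max_mib = arc_max_bytes / 2**20

assert arc_max_mib == 784.0
```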

The ZFS ARC cache size is much more constant now (the average stays close to the set value, with limited fluctuations), and I'm running without any apparent problem... So far, so good ;-)

Sunday Oct 24, 2010

Managing users with UAT

In the various Unix/Linux flavours, each user is assigned a numeric UID (User IDentifier) that is fundamental for granting privileges and giving a user access to the various system resources.


Even though every distribution still ships the original command-line tools to manage users (useradd/del/mod, etc.), various tools have been developed to ease the burden of system administration; I found some restrictions in the UAT (User Administration Tool), a component of the GNOME desktop in the Ubuntu distribution.


If you intend to manage users with this tool, be aware that with the default settings it 'masks' all users whose UID is smaller than 1000 or bigger than 60000; so if you assign such UIDs to your users and restart the UAT, they simply vanish into thin air: you can no longer manage them with this tool... unless... you change the shadow password suite configuration file, /etc/login.defs. In this file you can find the following definitions:


#
# Min/max values for automatic uid selection in useradd
#
UID_MIN                  1000
UID_MAX                 60000
# System accounts
#SYS_UID_MIN              100
#SYS_UID_MAX              999

#
# Min/max values for automatic gid selection in groupadd
#
GID_MIN                  1000
GID_MAX                 60000
# System accounts
#SYS_GID_MIN              100
#SYS_GID_MAX              999


These prevent you from creating and managing users and groups outside the interval 1000-60000. Once you change these values to more reasonable numbers according to your needs, restart the UAT... et voilà, your users and groups are back in the tool.
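The masking rule can be sketched in a few lines; the passwd-style entries below are hypothetical, using the UID_MIN/UID_MAX values from /etc/login.defs shown above:

```python
# UAT hides every user whose UID falls outside [UID_MIN, UID_MAX].
UID_MIN, UID_MAX = 1000, 60000

passwd_entries = [
    ("root",   0),       # system account
    ("oradir", 389),     # a service-style UID, like the one created earlier
    ("alice",  1001),    # a regular user
    ("batch",  65534),   # above UID_MAX
]

visible = [name for name, uid in passwd_entries if UID_MIN <= uid <= UID_MAX]
assert visible == ["alice"]   # everyone else "vanishes" from the tool
```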



P.S.: For the full story and historical reasons of UIDs, please consult the related UID Wiki page

Monday May 31, 2010

Sun ... set.

I had a dream. It has been great, it has been fun: it has been SUN.


Now the Oracle says: ... ibis ... redibis ... non ... morieris in bello


[... you shall go ... you shall return ... not ... die in the war]


Our job will be to place the comma in the right place to get the correct sentence.


Kick butts and have fun!


Sunnies! ;-)

Large groups...

Sometimes ACI evaluation on large (static) groups can play a significant role in Directory Server performance, especially when applications make massive and frequent queries to evaluate group membership.


Directory Server (since 5.2 patch 3) has a nice feature to control the behaviour of these evaluations. Since ACIs are generally small, their evaluation results are cached for faster access, but to avoid the cache consuming too much space there is a limit:


nsslapd-groupevalsizelimit


This sets the maximum number of members in a group during ACL evaluation; if a group exceeds this limit, the ACL result is rejected and not kept in the cache.

cacao and cacao_2

By default the common agent container (cacao) binds only to the loopback address; these notes show how to re-bind it to an external interface.

[root@cnode ~]# netstat -naf inet | grep 11162
127.0.0.1.11162       *.*                0      0 49152      0 LISTEN
[root@cnode ~]#


[root@cnode ~]# ps -aef | grep cacao
    root  1817     1   0 14:53:28 ?           0:00 /usr/lib/cacao/lib/tools/launch -w /var/cacao/instances/default -L 16384 -P /va
[root@cnode ~]# pargs 1817
1817:   /usr/lib/cacao/lib/tools/launch -w /var/cacao/instances/default -L 16384 -P /va
argv[0]: /usr/lib/cacao/lib/tools/launch
argv[1]: -w
argv[2]: /var/cacao/instances/default
argv[3]: -L
argv[4]: 16384
argv[5]: -P
argv[6]: /var/run/cacao/instances/default/run/hb.pipe
argv[7]: -f
argv[8]: -U
argv[9]: root
argv[10]: -G
argv[11]: sys
argv[12]: --
argv[13]: /usr/jdk/jdk1.5.0_18/bin/java
argv[14]: -Xms4M
argv[15]: -Xmx128M
argv[16]: -Dcom.sun.management.jmxremote
argv[17]: -Dfile.encoding=utf-8
argv[18]: -Djava.endorsed.dirs=/usr/lib/cacao/lib/endorsed
argv[19]: -classpath
argv[20]: /usr/share/lib/jdmk/jdmkrt.jar:/usr/share/lib/jdmk/jmxremote_optional.jar:/usr/lib/cacao/lib/cacao_cacao.jar:/usr/lib/cacao/lib/cacao_j5core.jar:/usr/lib/cacao/lib/bcprov-jdk14.jar
argv[21]: -Djavax.management.builder.initial=com.sun.jdmk.JdmkMBeanServerBuilder
argv[22]: -Dcacao.print.status=true
argv[23]: -Dcacao.config.dir=/etc/cacao/instances/default
argv[24]: -Dcacao.monitoring.mode=smf
argv[25]: -Dcom.sun.cacao.ssl.keystore.password.file=/etc/cacao/instances/default/security/password
argv[26]: com.sun.cacao.container.impl.ContainerPrivate
[root@cnode ~]#


[root@cnode ~]# cacaoadm status
default instance is DISABLED at system startup.
Smf monitoring process:
1817
1818
Uptime: 0 day(s), 1:13
[root@cnode ~]# cacaoadm list-params
snmp-adaptor-port=11161
snmp-adaptor-trap-port=11162
jmxmp-connector-port=11162
commandstream-adaptor-port=11163
rmi-registry-port=11164
secure-webserver-port=11165
java-flags=-Xms4M -Xmx128M -Dcom.sun.management.jmxremote -Dfile.encoding=utf-8 -Djava.endorsed.dirs=/usr/lib/cacao/lib/endorsed
micro-agent=false
java-home=/usr/jdk/jdk1.5.0_18
jdmk-home=/usr/share/lib/jdmk
nss-lib-home=/usr/lib/mps/secv1
nss-tools-home=/usr/sfw/bin
retries=4
log-file-limit=1000000
log-file-count=3
log-file-append=true
enable-instrumentation=false
user=root
group=sys
network-bind-address=127.0.0.1
watchdog-heartbeat-timeout=60
[root@cnode ~]#



[root@cnode ~]# cacaoadm stop
[root@cnode ~]# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
hme0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet 129.157.108.81 netmask fffffe00 broadcast 129.157.109.255
        ether 8:0:20:c3:cf:b8
[root@cnode ~]# cacaoadm set-param network-bind-address=129.157.108.81
[root@cnode ~]# cacaoadm list-params
snmp-adaptor-port=11161
snmp-adaptor-trap-port=11162
jmxmp-connector-port=11162
commandstream-adaptor-port=11163
rmi-registry-port=11164
secure-webserver-port=11165
java-flags=-Xms4M -Xmx128M -Dcom.sun.management.jmxremote -Dfile.encoding=utf-8 -Djava.endorsed.dirs=/usr/lib/cacao/lib/endorsed
micro-agent=false
java-home=/usr/jdk/jdk1.5.0_18
jdmk-home=/usr/share/lib/jdmk
nss-lib-home=/usr/lib/mps/secv1
nss-tools-home=/usr/sfw/bin
retries=4
log-file-limit=1000000
log-file-count=3
log-file-append=true
enable-instrumentation=false
user=root
group=sys
network-bind-address=129.157.108.81
watchdog-heartbeat-timeout=60
[root@cnode ~]# cacaoadm start
[root@cnode ~]# netstat -naf inet | grep 11162
129.157.108.81.11162       *.*                0      0 49152      0 LISTEN
[root@cnode ~]#
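The effect of network-bind-address is the standard socket-binding behaviour: a server socket bound to 127.0.0.1 is reachable only over loopback, while one bound to an external interface (as done above with cacaoadm set-param) is reachable from other hosts. A minimal, portable illustration of the loopback case:

```python
import socket

# A listener bound to the loopback address only, like the first netstat above.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 0))     # port 0: let the OS pick a free port
s.listen(1)
addr, port = s.getsockname()
s.close()

assert addr == "127.0.0.1"   # only reachable from the local machine
assert port > 0
```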






Solaris and core files

[mm206378-AT-sr1-emln03-04 /net/cores.central/cores/dir27/71037750/20090516]$ grep "conn=121227 op=2 msgId=141" access
[16/May/2009:00:42:19 -0500] conn=121227 op=2 msgId=141 - MOD dn="uid=mammy22g, ou=RegisteredUsers, ou=People, o=nextel.com"
[16/May/2009:02:37:44 -0500] conn=121227 op=2 msgId=141 - RESULT err=0 tag=103 nentries=0 etime=6925.169930 csn=4a0e6ee1000000670000

[mm206378-AT-sr1-emln03-04 /net/cores.central/cores/dir27/71037750/20090516]$ head access
[15/May/2009:22:57:53 -0500] conn=107402 op=1 msgId=9079 - SRCH base="ou=registeredusers,ou=people,o=nextel.com" scope=2 filter="(uid=heatherwilson1983)" attrs="uid cn sn givenName reggender reghintquestion reghintanswer mail reginoutemailoption postalCode st ppcity c ppbirthdate nxtptn nxtimei nxtindustrycode nxtbusinessrole nxtincome nxtbusinesscust nxtmanagedaccount nxtupsubscriberaddressid nxtaccount regnickname pplangpreference ppregion regstreetaddress1 regstreetaddress2 accountadmins objectClass nxtphonetype regcompanyname userPassword nxtconfirmationcode nxtemailverified"
[15/May/2009:22:57:53 -0500] conn=107403 op=-1 msgId=-1 - fd=87 slot=87 LDAP connection from 10.214.117.27:17547 to 144.226.242.7 (allowed by  rule: ALL:10.214.117.)
[15/May/2009:22:57:53 -0500] conn=107403 op=0 msgId=39348 - BIND dn="uid=6LN, ou=Special Users, o=nextel.com" method=128 version=3
[15/May/2009:22:57:53 -0500] conn=107403 op=0 msgId=39348 - RESULT err=0 tag=97 nentries=0 etime=0.000780 dn="uid=6ln,ou=special users,o=nextel.com"
[15/May/2009:22:57:53 -0500] conn=107403 op=1 msgId=39349 - SRCH base="ou=registeredusers,ou=people,o=nextel.com" scope=2 filter="(nxtptn=9193521611)" attrs="uid givenName sn regstreetaddress1 regstreetaddress2 ppcity st postalCode mail nxtemailverified reginoutemailoption nxtptn nxtupsubscriberaddressid nxtimei nxtphonetype"
[15/May/2009:22:57:53 -0500] conn=107403 op=1 msgId=39349 - RESULT err=0 tag=101 nentries=0 etime=0.000390
[15/May/2009:22:57:53 -0500] conn=107403 op=2 msgId=39350 - UNBIND
[15/May/2009:22:57:53 -0500] conn=107403 op=2 msgId=-1 - closing from 10.214.117.27:17547 - U1 - Connection closed by unbind client -
[15/May/2009:22:57:53 -0500] conn=107403 op=-1 msgId=-1 - closed.
[15/May/2009:22:57:53 -0500] conn=107404 op=-1 msgId=-1 - fd=87 slot=87 LDAP connection from 10.214.117.27:52005 to 144.226.242.7 (allowed by  rule: ALL:10.214.117.)
[mm206378-AT-sr1-emln03-04 /net/cores.central/cores/dir27/71037750/20090516]$ tail access
[16/May/2009:02:37:49 -0500] conn=137659 op=2 msgId=3 - UNBIND
[16/May/2009:02:37:49 -0500] conn=137659 op=2 msgId=-1 - closing from 10.214.117.6:37222 - U1 - Connection closed by unbind client -
[16/May/2009:02:37:49 -0500] conn=137659 op=-1 msgId=-1 - closed.
[16/May/2009:02:37:49 -0500] conn=137657 op=1 msgId=2624 - RESULT err=0 tag=101 nentries=1 etime=0.626520
[16/May/2009:02:37:49 -0500] conn=137657 op=2 msgId=2625 - UNBIND
[16/May/2009:02:37:49 -0500] conn=137657 op=2 msgId=-1 - closing from 10.214.117.23:21280 - U1 - Connection closed by unbind client -
[16/May/2009:02:37:49 -0500] conn=137657 op=-1 msgId=-1 - closed.
[16/May/2009:02:37:51 -0500] conn=137660 op=-1 msgId=-1 - fd=87 slot=87 LDAP connection from 10.214.117.21:21294 to 144.226.242.7 (allowed by  rule: ALL:10.214.117.)
[16/May/2009:02:37:51 -0500] conn=137660 op=0 msgId=2629 - BIND dn="uid=6JN, ou=Special Users, o=nextel.com" method=128 version=3
[16/May/2009:02:37:51 -0500] conn=137660 op=0 msgId=2629 - RESULT err=0 tag=97 nentries=0 etime=0.000880 dn="uid=6jn,ou=special users,o=nextel.com"
[mm206378@sr1-emln03-04 /net/cores.central/cores/dir27/71037750/20090516]$ grep -c "BIND dn="uid=6JN" access
>
[mm206378@sr1-emln03-04 /net/cores.central/cores/dir27/71037750/20090516]$ grep -c "BIND dn=\"uid=6JN" access
12830
[mm206378@sr1-emln03-04 /net/cores.central/cores/dir27/71037750/20090516]$ grep -c MOD access
5579
[mm206378@sr1-emln03-04 /net/cores.central/cores/dir27/71037750/20090516]$
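Since the investigation above revolves around etimes, here is a small sketch of extracting the etime field from a RESULT line. Note that the MOD above took 6925 seconds, i.e. almost two hours, matching the gap between its start (00:42:19) and its result (02:37:44):

```python
import re

# etime in the access log is the operation's elapsed time in seconds
# (microsecond resolution on this server). Parsing the MOD result above:
line = ('[16/May/2009:02:37:44 -0500] conn=121227 op=2 msgId=141 - RESULT '
        'err=0 tag=103 nentries=0 etime=6925.169930 csn=4a0e6ee1000000670000')

etime = float(re.search(r'etime=([\d.]+)', line).group(1))

assert int(etime) == 6925          # almost two hours for a single MOD
assert int(etime) // 60 == 115     # 1 h 55 min, matching the timestamps
```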

----------------------------------
coreadm settings as stored in /etc/coreadm.conf:

COREADM_GLOB_PATTERN=
COREADM_GLOB_CONTENT=default
COREADM_INIT_PATTERN=core
COREADM_INIT_CONTENT=default
COREADM_GLOB_ENABLED=no
COREADM_PROC_ENABLED=yes
COREADM_GLOB_SETID_ENABLED=no
COREADM_PROC_SETID_ENABLED=no
COREADM_GLOB_LOG_ENABLED=no  

(2:18:04 PM) Marco Milo: and what's the output of coreadm <DS_PID> so we have the exact settings also for the specific process...
(2:18:51 PM) ft96309@im.sun-DOT-com/SUN-N52RZ0V6L0W:        ahhhh, we have been doing $gcore -o <file.out> <pid>    
(2:19:51 PM) ft96309@im.sun-DOT-com/SUN-N52RZ0V6L0W:        should i do the command as you list it above?    
(2:20:15 PM) Marco Milo: yes, just to see what are the settings of coreadm for our Directory Server process
(2:20:51 PM) ft96309@im.sun-DOT-com/SUN-N52RZ0V6L0W:
/tmp: ps -ef|grep slapd
dsee 20491     1   0   May 08 ?          18:40 /ldap/dsee61/ds6/lib/64/ns-slapd -D /ldap/slapd-smps -i /ldap/slapd-smps/logs/p
dsee  5515 13942   0 07:19:27 pts/4       0:00 grep slapd
dsee 23348     1   2 15:53:09 ?        1216:42 /ldap/dsee61/ds6/lib/64/ns-slapd -D /ldap/dsee6-nol -i /ldap/dsee6-nol/logs/pid
/tmp: coreadm 23348
23348:  core    default
(2:22:26 PM) Marco Milo: what was the output of coreadm?

 coreadm
 global core file pattern:
 global core file content: default
 init core file pattern: core
 init core file content: default
 global core dumps: disabled
 per-process core dumps: enabled
 global setid core dumps: disabled
 per-process setid core dumps: disabled
 global core dump logging: disabled


gcore -c all -o <OUT_FILE> <PID>

ACI debugging on:
# dsconf set-log-prop -p 6330 error level:err-acl

ACI debugging off:
# dsconf set-log-prop -p 6330 error level:default



Friday May 15, 2009

etime in microseconds

To have etimes reported in the access log with microsecond resolution, we need to set the following:


# ldapmodify -D <DIRECTORY_MANAGER> -w <PASSWORD> -p <PORT> -h <HOST>
dn: cn=config
changetype: modify
replace: nsslapd-accesslog-level
nsslapd-accesslog-level: 131328
^D
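The magic number is a bitmask. Assuming the documented level values (256 for default access logging, 131072 for microsecond-resolution etimes), the arithmetic is simply:

```python
# nsslapd-accesslog-level is a bitmask of logging levels; 131328 combines
# default access logging with microsecond-resolution etimes (assumed values).
DEFAULT_ACCESS_LOGGING = 256
MICROSECOND_ETIMES = 131072

assert DEFAULT_ACCESS_LOGGING | MICROSECOND_ETIMES == 131328
```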

Thursday May 07, 2009

Make X listen on external TCP ports (Solaris and OpenSolaris)


In Solaris 10 and OpenSolaris the X server is enabled by default and controlled via SMF (Service Management Facility):



# ps -aef | grep Xsun
root 4767 4764 0 15:10:44 ? 0:01
/usr/openwin/bin/Xsun :0 -defdepth 24 -nolisten tcp -nobanner -auth
/var/dt/A:0


# svcs -xv cde-login

svc:/application/graphical-login/cde-login:default (CDE login)

State: online since Thu May 07 15:10:43 2009

See: man -M /usr/dt/share/man -s 1 dtlogin

See: /var/svc/log/application-graphical-login-cde-login:default.log

Impact: None.

#



The default installation doesn't make the X server listen on its TCP port:



# netstat -naf inet | grep 6000

#



and this is indeed a sensible security default, but sometimes it's also useful to have the X server reachable over TCP.


X properties are defined in the /application/x11/x11-server service, and we can see all of them with the following command:


# svccfg -s /application/x11/x11-server listprop
options                       application
options/default_depth         integer  24
options/server                astring  /usr/openwin/bin/Xsun
options/server_args           astring
options/stability             astring  Evolving
options/value_authorization   astring  solaris.smf.manage.x11
options/tcp_listen            boolean  false
fs-local                      dependency
fs-local/entities             fmri     svc:/system/filesystem/local
fs-local/grouping             astring  require_all
fs-local/restart_on           astring  none
fs-local/type                 astring  service
network-service               dependency
network-service/entities      fmri     svc:/network/service
network-service/grouping      astring  require_all
network-service/restart_on    astring  none
network-service/type          astring  service
name-services                 dependency
name-services/entities        fmri     svc:/milestone/name-services
name-services/grouping        astring  require_all
name-services/restart_on      astring  refresh
name-services/type            astring  service
general                       framework
general/action_authorization  astring  solaris.smf.manage.x11
general/entity_stability      astring  Evolving
start                         method
start/exec                    astring  "/lib/svc/method/x11-server -d 0 -c %i %m"
start/timeout_seconds         count    0
start/type                    astring  method
stop                          method
stop/exec                     astring  ":kill -TERM"
stop/timeout_seconds          count    10
stop/type                     astring  method
tm_common_name                template
tm_common_name/C              ustring  "X Window System server"
tm_man_Xserver                template
tm_man_Xserver/manpath        astring  /usr/openwin/share/man
tm_man_Xserver/section        astring  1
tm_man_Xserver/title          astring  Xserver
tm_man_Xsun                   template
tm_man_Xsun/manpath           astring  /usr/openwin/share/man
tm_man_Xsun/section           astring  1
tm_man_Xsun/title             astring  Xsun
tm_man_Xorg                   template
tm_man_Xorg/manpath           astring  /usr/X11/share/man
tm_man_Xorg/section           astring  1
tm_man_Xorg/title             astring  Xorg


In particular, the switch that controls whether the X server listens on TCP is:



# svccfg -s /application/x11/x11-server listprop options/tcp_listen

options/tcp_listen boolean false

#



So in this case we enable it with the following command:



# svccfg -s svc:/application/x11/x11-server setprop options/tcp_listen = true

# svccfg -s /application/x11/x11-server listprop options/tcp_listen

options/tcp_listen boolean true

#



and stop/start the cde-login service to make the change effective:



# svcadm disable cde-login

# svcadm enable cde-login



and now we see the different behaviour:



# ps -aef | grep Xsun

root 4844 4834 1 15:22:07 ? 0:00 /usr/openwin/bin/Xsun :0 -defdepth 24 -nobanner -auth /var/dt/A:0-N_aqCj

#



and that the service is now listening on the TCP port:



# netstat -naf inet | grep 6000

*.6000 *.* 0 0 49152 0 LISTEN

*.6000 *.* 0 0 49152 0 LISTEN

#



The server is now also listening on TCP port 6000, and we can connect to X from outside.
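As a side note, the port follows the X11 convention of 6000 plus the display number, so display :0 lands on 6000 (as seen in the netstat output) and :1 would land on 6001:

```python
# X servers listen on TCP port 6000 + display number.
X_BASE_PORT = 6000

def x_tcp_port(display: int) -> int:
    return X_BASE_PORT + display

assert x_tcp_port(0) == 6000   # display :0, as in the netstat output above
assert x_tcp_port(1) == 6001
```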


About

Marco Milo-Oracle
