Tuesday May 03, 2016

Zones with latest Java

Java, Solaris 11 and Solaris Zones


Java is seamlessly integrated into the Solaris 11 IPS packaging system, so you can use the standard repository commands to manage the installation and configuration of Java.
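
For instance, to see which Java packages your configured publishers offer, a quick check like the following should do (exact package names and versions depend on your repository):


# pkg list -af 'jre*' 'jdk*'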


Install Java 7 (JRE and JDK)


# pkg install jre-7 jdk-7

Install Java 8 (JRE and JDK)


# pkg install jre-8 jdk-8

Considerations


These commands install the incorporation version of Java defined by the Solaris 11 base release or SRU (Support Repository Update). That version, however, doesn't contain the latest Java updates; to receive them, you need to tell the packaging system that, for the Java packages, you don't want to be limited to the SRU versions:



# pkg change-facet version-lock.consolidation/java-8/java-8-incorporation=false

# pkg update jre-8 jdk-8



# pkg change-facet version-lock.consolidation/java-7/java-7-incorporation=false

# pkg update jre-7 jdk-7


and this will bring your system up to date with the latest Java version.
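
To double-check what you actually ended up with, a quick sanity check could look like this (using the Java 8 packages as an example; the output will of course vary with your SRU and repository):


# pkg facet | grep java
# pkg list jre-8 jdk-8
# java -version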




FWIW, to expand on this concept a bit: if you need to select the latest Java IPS package from your repository when automating the installation of multiple zones, I've found the following AI template file helpful:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE auto_install SYSTEM
"file:///usr/share/install/ai.dtd.1">

<auto_install>
    <ai_instance name="zone_default">
        <target>
            <logical>
                <zpool name="rpool">
                    <filesystem name="export"
mountpoint="/export"/>

                    <filesystem name="export/home"/>
                    <be name="solaris">
                        <options>
                            <option name="compression"
value="on"/>

                        </options>
                    </be>
                </zpool>
            </logical>
        </target>

        <software type="IPS">
            <destination>
                <image>
                    <!-- Specify locales to install
-->

                    <facet
set="false">facet.locale.*</facet>

                    <facet
set="false">facet.locale.de</facet>

                    <facet
set="false">facet.locale.de_DE</facet>

                    <facet
set="true">facet.locale.en</facet>

                    <facet
set="true">facet.locale.en_US</facet>

                    <facet
set="false">facet.locale.es</facet>

                    <facet
set="false">facet.locale.es_ES</facet>

                    <facet
set="false">facet.locale.fr</facet>

                    <facet
set="false">facet.locale.fr_FR</facet>

                    <facet
set="false">facet.locale.it</facet>

                    <facet
set="false">facet.locale.it_IT</facet>

                    <facet
set="false">facet.locale.ja</facet>

                    <facet
set="false">facet.locale.ja_*</facet>

                    <facet
set="false">facet.locale.ko</facet>

                    <facet
set="false">facet.locale.ko_*</facet>

                    <facet
set="false">facet.locale.pt</facet>

                    <facet
set="false">facet.locale.pt_BR</facet>

                    <facet
set="false">facet.locale.zh</facet>

                    <facet
set="false">facet.locale.zh_CN</facet>

                    <facet
set="false">facet.locale.zh_TW</facet>

                    <!-- Don't install the documentation
-->

                    <facet
set="false">facet.doc</facet>

                    <facet
set="false">facet.doc.*</facet>
                    <!-- Unlock the latest Java on IPS -->
<facet
set="false">facet.version-lock.consolidation/java-7/java-7-incorporation</facet>

                </image>

            </destination>
            <!-- Install required software packages: -->
            <software_data action="install">
               
<name>pkg:/group/system/solaris-small-server</name>

                <name>pkg:/system/locale</name>
                <name>pkg:/editor/vim</name>
                <name>pkg:/text/gnu-sed</name>
                <name>pkg:/network/telnet</name>
               
<name>pkg:/developer/java/jdk-7</name>

               
<name>pkg:/system/fault-management/snmp-notify</name>

            </software_data>
        </software>
    </ai_instance>
</auto_install>


This allows you to unlock the facet for your Java installation directly during zone creation.
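
For completeness, here's a minimal sketch of how the template could then be consumed when installing a zone (the zone name is just a placeholder, and I'm assuming the manifest above is saved as /root/zone_default.xml):


# zonecfg -z myzone create
# zoneadm -z myzone install -m /root/zone_default.xml


The interesting part is the version-lock facet set to false in the <image> section, which is the equivalent of the pkg change-facet command shown above.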

Tuesday Apr 26, 2016

Renaming a zone and changing the mountpoint

Zones are great, since they allow you to run and manage all of your applications in isolated containers... but sometimes, just at the end of the entire installation, you realize that you probably could have picked better naming conventions for the zones and for the zpools/datasets. So you start scratching your head, repeating to yourself that reinstalling everything from scratch is not an option... So, here's the scenario: my zones are both hosted on a zpool named BADPOOL, which I'm going to rename to ZonesPool:


# zpool list
NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
BADPOOL    15.9G  5.87G  10.0G  36%  1.00x  ONLINE  -
rpool  63.5G  16.8G  46.7G  26%  1.00x  ONLINE  -
# zfs list |grep BADPOOL
BADPOOL                                          5.87G  9.76G    31K  legacy
BADPOOL/old                                    2.93G  9.76G    32K  /zones/old
BADPOOL/old/rpool                              2.93G  9.76G    31K  /rpool
BADPOOL/old/rpool/ROOT                         1.93G  9.76G    31K  legacy
BADPOOL/old/rpool/ROOT/solaris-15              1.93G  9.76G  1.73G  /zones/old/root
BADPOOL/old/rpool/ROOT/solaris-15/var           206M  9.76G   206M  /zones/old/root/var
BADPOOL/old/rpool/VARSHARE                     1.13M  9.76G  1.07M  /var/share
BADPOOL/old/rpool/VARSHARE/pkg                   63K  9.76G    32K  /var/share/pkg
BADPOOL/old/rpool/VARSHARE/pkg/repositories      31K  9.76G    31K  /var/share/pkg/repositories
BADPOOL/old/rpool/app                          1022M  9.76G  1022M  /app
BADPOOL/old/rpool/export                        120K  9.76G    32K  /export
BADPOOL/old/rpool/export/home                    88K  9.76G  55.5K  /export/home
BADPOOL/old/rpool/export/home/admin            32.5K  9.76G  32.5K  /export/home/admin
BADPOOL/bad                                    2.94G  9.76G    32K  /zones/bad
BADPOOL/bad/rpool                              2.94G  9.76G    31K  /rpool
BADPOOL/bad/rpool/ROOT                         1.92G  9.76G    31K  legacy
BADPOOL/bad/rpool/ROOT/solaris-15              1.92G  9.76G  1.72G  /zones/bad/root
BADPOOL/bad/rpool/ROOT/solaris-15/var           204M  9.76G   204M  /zones/bad/root/var
BADPOOL/bad/rpool/VARSHARE                     1.13M  9.76G  1.07M  /var/share
BADPOOL/bad/rpool/VARSHARE/pkg                   63K  9.76G    32K  /var/share/pkg
BADPOOL/bad/rpool/VARSHARE/pkg/repositories      31K  9.76G    31K  /var/share/pkg/repositories
BADPOOL/bad/rpool/app                          1.02G  9.76G  1.02G  /app
BADPOOL/bad/rpool/export                        110K  9.76G    32K  /export
BADPOOL/bad/rpool/export/home                  78.5K  9.76G    46K  /export/home
BADPOOL/bad/rpool/export/home/admin            32.5K  9.76G  32.5K  /export/home/admin

Current zone names are old and bad, and I'd like to rename them to this and that; first of all, of course, the zones need to be down (not running):


# zoneadm list -civ
  ID NAME             STATUS      PATH                         BRAND      IP
   0 global          running     /                          solaris    shared
   - old             installed   /zones/old                 solaris    shared
   - bad             installed   /zones/bad                 solaris    shared
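
(They're already just 'installed' here; if your zones were still up, something along these lines would bring them down cleanly first, with your own zone names of course:)


# zoneadm -z old shutdown
# zoneadm -z bad shutdown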

Now, let's first deal with the zpool part. In ZFS there's no way to rename a zpool that is already imported; the only way to do it is to export the pool and re-import it with the new, correct name:


# zpool list BADPOOL
NAME   SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
BADPOOL   15.9G  5.87G  10.0G  36%  1.00x  ONLINE  -
# zpool export BADPOOL
# zpool import BADPOOL ZonesPool
# zpool list ZonesPool
NAME   SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
ZonesPool    15.9G  5.87G  10.0G  36%  1.00x  ONLINE  -

And that was easy ;-) But the various datasets still reflect the previous naming, with old and bad names and mountpoints:


# zfs list |grep ZonesPool
NAME                                          USED  AVAIL  REFER  MOUNTPOINT
ZonesPool                                           5.87G  9.76G    31K  legacy
ZonesPool/old                                     2.93G  9.76G    32K  /zones/old
ZonesPool/old/rpool                               2.93G  9.76G    31K  /rpool
ZonesPool/old/rpool/ROOT                          1.93G  9.76G    31K  legacy
ZonesPool/old/rpool/ROOT/solaris-15               1.93G  9.76G  1.73G  /
ZonesPool/old/rpool/ROOT/solaris-15/var            206M  9.76G   206M  /var
ZonesPool/old/rpool/VARSHARE                      1.13M  9.76G  1.07M  /var/share
ZonesPool/old/rpool/VARSHARE/pkg                    63K  9.76G    32K  /var/share/pkg
ZonesPool/old/rpool/VARSHARE/pkg/repositories       31K  9.76G    31K  /var/share/pkg/repositories
ZonesPool/old/rpool/app                           1022M  9.76G  1022M  /app
ZonesPool/old/rpool/export                         120K  9.76G    32K  /export
ZonesPool/old/rpool/export/home                     88K  9.76G  55.5K  /export/home
ZonesPool/old/rpool/export/home/admin             32.5K  9.76G  32.5K  /export/home/admin
ZonesPool/bad                                     2.94G  9.76G    32K  /zones/bad
ZonesPool/bad/rpool                               2.94G  9.76G    31K  /rpool
ZonesPool/bad/rpool/ROOT                          1.92G  9.76G    31K  legacy
ZonesPool/bad/rpool/ROOT/solaris-15               1.92G  9.76G  1.72G  /
ZonesPool/bad/rpool/ROOT/solaris-15/var            204M  9.76G   204M  /var
ZonesPool/bad/rpool/VARSHARE                      1.13M  9.76G  1.07M  /var/share
ZonesPool/bad/rpool/VARSHARE/pkg                    63K  9.76G    32K  /var/share/pkg
ZonesPool/bad/rpool/VARSHARE/pkg/repositories       31K  9.76G    31K  /var/share/pkg/repositories
ZonesPool/bad/rpool/app                           1.02G  9.76G  1.02G  /app
ZonesPool/bad/rpool/export                         110K  9.76G    32K  /export
ZonesPool/bad/rpool/export/home                   78.5K  9.76G    46K  /export/home
ZonesPool/bad/rpool/export/home/admin             32.5K  9.76G  32.5K  /export/home/admin

So we need to rename the datasets:


# zfs rename ZonesPool/old ZonesPool/this
# zfs rename ZonesPool/bad ZonesPool/that
# zfs list |grep ZonesPool
NAME                                          USED  AVAIL  REFER  MOUNTPOINT
ZonesPool                                           5.87G  9.76G    31K  legacy
ZonesPool/this                                      2.93G  9.76G    32K  /zones/old
ZonesPool/this/rpool                                2.93G  9.76G    31K  /rpool
ZonesPool/this/rpool/ROOT                           1.93G  9.76G    31K  legacy
ZonesPool/this/rpool/ROOT/solaris-15                1.93G  9.76G  1.73G  /
ZonesPool/this/rpool/ROOT/solaris-15/var             206M  9.76G   206M  /var
ZonesPool/this/rpool/VARSHARE                       1.13M  9.76G  1.07M  /var/share
ZonesPool/this/rpool/VARSHARE/pkg                     63K  9.76G    32K  /var/share/pkg
ZonesPool/this/rpool/VARSHARE/pkg/repositories        31K  9.76G    31K  /var/share/pkg/repositories
ZonesPool/this/rpool/app                            1022M  9.76G  1022M  /app
ZonesPool/this/rpool/export                          120K  9.76G    32K  /export
ZonesPool/this/rpool/export/home                      88K  9.76G  55.5K  /export/home
ZonesPool/this/rpool/export/home/admin              32.5K  9.76G  32.5K  /export/home/admin
ZonesPool/that                                      2.94G  9.76G    32K  /zones/bad
ZonesPool/that/rpool                                2.94G  9.76G    31K  /rpool
ZonesPool/that/rpool/ROOT                           1.92G  9.76G    31K  legacy
ZonesPool/that/rpool/ROOT/solaris-15                1.92G  9.76G  1.72G  /
ZonesPool/that/rpool/ROOT/solaris-15/var             204M  9.76G   204M  /var
ZonesPool/that/rpool/VARSHARE                       1.13M  9.76G  1.07M  /var/share
ZonesPool/that/rpool/VARSHARE/pkg                     63K  9.76G    32K  /var/share/pkg
ZonesPool/that/rpool/VARSHARE/pkg/repositories        31K  9.76G    31K  /var/share/pkg/repositories
ZonesPool/that/rpool/app                            1.02G  9.76G  1.02G  /app
ZonesPool/that/rpool/export                          110K  9.76G    32K  /export
ZonesPool/that/rpool/export/home                    78.5K  9.76G    46K  /export/home
ZonesPool/that/rpool/export/home/admin              32.5K  9.76G  32.5K  /export/home/admin

As well as the mount points:


# zfs set mountpoint=/zones/this ZonesPool/this
# zfs set mountpoint=/zones/that ZonesPool/that
# zfs list |grep ZonesPool
NAME                                          USED  AVAIL  REFER  MOUNTPOINT
ZonesPool                                           5.87G  9.76G    31K  legacy
ZonesPool/this                                      2.93G  9.76G    32K  /zones/this
ZonesPool/this/rpool                                2.93G  9.76G    31K  /rpool
ZonesPool/this/rpool/ROOT                           1.93G  9.76G    31K  legacy
ZonesPool/this/rpool/ROOT/solaris-15                1.93G  9.76G  1.73G  /
ZonesPool/this/rpool/ROOT/solaris-15/var             206M  9.76G   206M  /var
ZonesPool/this/rpool/VARSHARE                       1.13M  9.76G  1.07M  /var/share
ZonesPool/this/rpool/VARSHARE/pkg                     63K  9.76G    32K  /var/share/pkg
ZonesPool/this/rpool/VARSHARE/pkg/repositories        31K  9.76G    31K  /var/share/pkg/repositories
ZonesPool/this/rpool/app                            1022M  9.76G  1022M  /app
ZonesPool/this/rpool/export                          120K  9.76G    32K  /export
ZonesPool/this/rpool/export/home                      88K  9.76G  55.5K  /export/home
ZonesPool/this/rpool/export/home/admin              32.5K  9.76G  32.5K  /export/home/admin
ZonesPool/that                                      2.94G  9.76G    32K  /zones/that
ZonesPool/that/rpool                                2.94G  9.76G    31K  /rpool
ZonesPool/that/rpool/ROOT                           1.92G  9.76G    31K  legacy
ZonesPool/that/rpool/ROOT/solaris-15                1.92G  9.76G  1.72G  /
ZonesPool/that/rpool/ROOT/solaris-15/var             204M  9.76G   204M  /var
ZonesPool/that/rpool/VARSHARE                       1.13M  9.76G  1.07M  /var/share
ZonesPool/that/rpool/VARSHARE/pkg                     63K  9.76G    32K  /var/share/pkg
ZonesPool/that/rpool/VARSHARE/pkg/repositories        31K  9.76G    31K  /var/share/pkg/repositories
ZonesPool/that/rpool/app                            1.02G  9.76G  1.02G  /app
ZonesPool/that/rpool/export                          110K  9.76G    32K  /export
ZonesPool/that/rpool/export/home                    78.5K  9.76G    46K  /export/home
ZonesPool/that/rpool/export/home/admin              32.5K  9.76G  32.5K  /export/home/admin

Now that we have the filesystems in place, we still need to 'refine' the zones: the zone configurations still carry the old names and definitions:


# zoneadm list -civ
  ID NAME             STATUS      PATH                         BRAND      IP
   0 global           running     /                            solaris    shared
   - old             installed   /zones/old                 solaris    shared
   - bad             installed   /zones/bad                 solaris    shared
# zoneadm -z old rename this
# zoneadm -z bad rename that
# zoneadm list -civ
  ID NAME             STATUS      PATH                         BRAND      IP
   0 global           running     /                            solaris    shared
   - this             installed   /zones/old                 solaris    shared
   - that             installed   /zones/bad                 solaris    shared

After changing the names of the zones, we now have to change their PATH; but again, this operation cannot be done while the zone is in the 'installed' state and attached to a live system, so we first have to forcibly detach the zones (we'll have to use the -F option to force the detach, since the dataset on which the zone was built is not there anymore):


# zoneadm -z this detach -F
# zoneadm -z that detach -F
# zoneadm list -civ
  ID NAME             STATUS      PATH                         BRAND      IP
   0 global           running     /                            solaris    shared
   - this             configured  /zones/old                 solaris    shared
   - that             configured  /zones/bad                 solaris    shared

Now that both zones are detached, we can change the zone path:


# zonecfg -z this info zonepath
zonepath: /zones/old
# zonecfg -z that info zonepath
zonepath: /zones/bad
# zonecfg -z this set zonepath=/zones/this
# zonecfg -z that set zonepath=/zones/that
# zonecfg -z this info zonepath
zonepath: /zones/this
# zonecfg -z that info zonepath
zonepath: /zones/that

And verify that the change has been made correctly; the zones are (of course) still in the 'configured' state:


# zoneadm list -civ
  ID NAME             STATUS      PATH                         BRAND      IP
   0 global           running     /                            solaris    shared
   - this             configured  /zones/this                  solaris    shared
   - that             configured  /zones/that                  solaris    shared

We're now ready to re-attach the zones to the live system:


# zoneadm -z this attach
Progress being logged to /var/log/zones/zoneadm.20160426T151603Z.this.attach
    Installing: Using existing zone boot environment
      Zone BE root dataset: ZonesPool/this/rpool/ROOT/solaris-15
                     Cache: Using /var/pkg/publisher.
  Updating non-global zone: Linking to image /.
Processing linked: 1/1 done
  Updating non-global zone: Auditing packages.
No updates necessary for this image. (zone:this)

  Updating non-global zone: Zone updated.
                    Result: Attach Succeeded.
Log saved in non-global zone as /zones/this/root/var/log/zones/zoneadm.20160426T151603Z.this.attach
# zoneadm -z that attach
Progress being logged to /var/log/zones/zoneadm.20160426T153312Z.that.attach
    Installing: Using existing zone boot environment
      Zone BE root dataset: ZonesPool/that/rpool/ROOT/solaris-15
                     Cache: Using /var/pkg/publisher.
  Updating non-global zone: Linking to image /.
Processing linked: 1/1 done
  Updating non-global zone: Auditing packages.
No updates necessary for this image. (zone:that)

  Updating non-global zone: Zone updated.
                    Result: Attach Succeeded.
Log saved in non-global zone as /zones/that/root/var/log/zones/zoneadm.20160426T153312Z.that.attach
# zoneadm list -civ
  ID NAME             STATUS      PATH                         BRAND      IP
   0 global           running     /                            solaris    shared
   - this             installed   /zones/this                  solaris    shared
   - that             installed   /zones/that                  solaris    shared


Finally, we need to boot the zones:


# zoneadm -z this boot
# zoneadm -z that boot
# zoneadm list -civ
  ID NAME             STATUS      PATH                         BRAND      IP
   0 global           running     /                            solaris    shared
   6 this             running     /zones/this                  solaris    shared
   7 that             running     /zones/that                  solaris    shared
# zlogin this zonename
this
# zlogin that zonename
that

And this concludes our journey through zones, zpools and datasets ;-)

Tuesday Apr 12, 2016

Sendmail SMF services in Solaris

Sendmail SMF services in Solaris


With the latest versions of Solaris (10u6 and 11.x), the classic sendmail program has been split into two separate daemon instances:


root@host1 # svcs -a |grep sendmail
online         20:31:17 svc:/network/smtp:sendmail
online         20:33:53 svc:/network/sendmail-client:default
root@host1 # svcs -p smtp:sendmail sendmail-client:default
STATE          STIME    FMRI
online         20:31:17 svc:/network/smtp:sendmail
               20:31:18    24564 sendmail
online         20:33:53 svc:/network/sendmail-client:default
               20:33:53    24574 sendmail
root@host1 # ps -aef|grep sendmail
    root 24595 24233   0 21:01:37 pts/1       0:00 grep sendmail
    root 24564     1   0 20:31:18 ?           0:00 /usr/lib/inet/sendmail -bd -q15m
   smmsp 24574     1   0 20:33:54 ?           0:00 /usr/lib/inet/sendmail -Ac -q15m


The first one is the real Message Transfer Agent (MTA), whereas the second one handles the client queue used by the local Message Submission Programs (MSPs).
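
As a side note, you can look at the two queues separately: a plain mailq shows the MTA queue (/var/spool/mqueue), while mailq -Ac shows the client queue (/var/spool/clientmqueue) fed by the MSPs — we'll use the latter quite a bit below:


root@host1 # mailq
root@host1 # mailq -Ac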


In an ideal world, with all the Internet hosts up and running and all the connections between them working properly, the difference between these two instances wouldn't be so evident; but what happens in the real world?


In the real world, it can simply happen that the MTA is down for some reason (scheduled maintenance or unexpected issues):


root@host1 # svcadm disable smtp:sendmail sendmail-client
root@host1 # svcs -a |grep -i sendmail
disabled       21:27:33 svc:/network/smtp:sendmail
disabled       21:27:33 svc:/network/sendmail-client:default


but a generic MSP (like mail or mailx) is still allowed to submit email:


root@host1 # mailq -Ac
/var/spool/clientmqueue is empty
                Total requests: 0
root@host1 # echo "Test message with mailx." | mailx -s "Test with both DOWN" user1@host1
user1@host1... Connecting to [127.0.0.1] via relay...
user1@host1... Deferred: Connection refused by [127.0.0.1]

root@host1 # echo "Test message with mailx." | mailx -s "Test with both DOWN" user2@host2
user2@host2... Connecting to [127.0.0.1] via relay...
user2@host2... Deferred: Connection refused by [127.0.0.1]


But in this case, since the MTA is down and cannot accept the messages, the email is kept in the client queue:


root@host1 # mailq -Ac
                /var/spool/clientmqueue (2 requests)
-----Q-ID----- --Size-- -----Q-Time----- ------------Sender/Recipient-----------
u3CJUgsu024771       25 Tue Apr 12 21:30 root
                 (Deferred: Connection refused by [127.0.0.1])
                                         user1@host1
u3CJTMqr024760       25 Tue Apr 12 21:29 root
                 (Deferred: Connection refused by [127.0.0.1])
                                         user2@host2
                Total requests: 2


At this point, even if we enable the MTA and check the queues again, we will still see the same messages queued:


root@host1 # svcadm enable smtp:sendmail
root@host1 # svcs -a | grep sendmail
disabled       21:27:33 svc:/network/sendmail-client:default
online         21:32:54 svc:/network/smtp:sendmail
root@host1 # mailq -Ac
                /var/spool/clientmqueue (2 requests)
-----Q-ID----- --Size-- -----Q-Time----- ------------Sender/Recipient-----------
u3CJUgsu024771       25 Tue Apr 12 21:30 root
                 (Deferred: Connection refused by [127.0.0.1])
                                         user1@host1
u3CJTMqr024760       25 Tue Apr 12 21:29 root
                 (Deferred: Connection refused by [127.0.0.1])
                                         user2@host2
                Total requests: 2


The difference is that if we now try to send email messages, they will be delivered immediately, both locally and remotely:


root@host1 # echo "Test message with mailx." | mailx -s "Test smtp:sendmail UP and sendmail-client DOWN" user1@host1
root@host1 # echo "Test message with mailx." | mailx -s "Test smtp:sendmail UP and sendmail-client DOWN" user2@host2


user1@host1 will see:


user1@host1 $ mail
From root@host1 Tue Apr 12 22:20:33 2016
Date: Tue, 12 Apr 2016 22:20:32 +0200 (CEST)
From: Super-User <root@host1>
Message-Id: <201604122020.u3CKKWP5024867@host1>
To: user1@host1
Subject: Test smtp:sendmail UP and sendmail-client DOWN
Content-Length: 25

Test message with mailx.

? d
user1@host1 $


and the same thing happens for user2@host2:


user2@host2 $ mail
From root@host1 Tue Apr 12 22:20:39 2016
Date: Tue, 12 Apr 2016 22:20:38 +0200 (CEST)
From: Super-User <root@host1>
Message-Id: <201604122020.u3CKKcmI024873@host1>
To: user2@host2
Subject: Test smtp:sendmail UP and sendmail-client DOWN
Content-Length: 25

Test message with mailx.

? d
user2@host2 $

but the messages submitted previously are still in the client mail queue:


root@host1 # mailq -Ac
               /var/spool/clientmqueue (2 requests)
-----Q-ID----- --Size-- -----Q-Time----- ------------Sender/Recipient-----------
u3CJUgsu024771       25 Tue Apr 12 21:30 root
                 (Deferred: Connection refused by [127.0.0.1])
                                         user1@host1
u3CJTMqr024760       25 Tue Apr 12 21:29 root
                 (Deferred: Connection refused by [127.0.0.1])
                                         user2@host2
                Total requests: 2


Only once we enable the service that takes care of the client queue will these emails be delivered:


root@host1 # svcadm enable sendmail-client
root@host1 # svcadm refresh sendmail-client
root@host1 # mailq -Ac
/var/spool/clientmqueue is empty
                Total requests: 0
root@host1 #
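
As an aside, if you don't want to wait for the client queue runner (started above with -q15m), you can also kick the client queue by hand; a sketch using the same binary we saw in the earlier ps output:


root@host1 # /usr/lib/inet/sendmail -Ac -q -v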


And we can get a confirmation from the two users:


user1@host1 $ mail
From root@host1 Tue Apr 12 21:31:48 2016
Date: Tue, 12 Apr 2016 21:29:22 +0200 (CEST)
From: Super-User <root@host1>
Message-Id: <201604121930.u3CJUgsu024771@host1>
To: user1@host1
Subject: Test with both DOWN
Content-Length: 25

Test message with mailx.

?


and:


user2@host2 $ mail
From root@host1 Tue Apr 12 21:31:48 2016
Date: Tue, 12 Apr 2016 21:29:22 +0200 (CEST)
From: Super-User <root@host1>
Message-Id: <201604121929.u3CJTMqr024760@host1>
To: user2@host2
Subject: Test with both DOWN
Content-Length: 25

Test message with mailx.

?

... Piece of cake ;-)

Thursday Mar 17, 2016

CPU Clock Speed in Solaris

Sometimes, in a multi-CPU/core environment, you might want to check what the actual speed of the various cores is. A good starting point is the cpu_info module of the kstat command, which, combined with some awk, can display exactly what you're looking for; here is a sample from an x86_64 machine (actually an old, but still faithful, x4150):

# kstat -m cpu_info 5 | awk '/instance/{printf "CPU: %3s ---> ", $4}; /current_clock_Hz/ {print $2/1000000 " MHz"}'

CPU:   0 ---> 2003 MHz
CPU:   1 ---> 2003 MHz
CPU:   2 ---> 2003 MHz
CPU:   3 ---> 2003 MHz
CPU:   4 ---> 2003 MHz
CPU:   5 ---> 2003 MHz
CPU:   6 ---> 2003 MHz
CPU:   7 ---> 3166 MHz
CPU:   0 ---> 2003 MHz
CPU:   1 ---> 2003 MHz
CPU:   2 ---> 2003 MHz
CPU:   3 ---> 2003 MHz
CPU:   4 ---> 2003 MHz
CPU:   5 ---> 2003 MHz
CPU:   6 ---> 2003 MHz
CPU:   7 ---> 3166 MHz

Of course, you might want to skip the first interval... but this gives you a 'rolling update' of the current clock speed in MHz of your CPUs/cores.
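
If you'd rather have a one-shot snapshot instead of the rolling output, the parsable form of kstat works too; a quick sketch (values are reported in raw Hz here):


# kstat -p cpu_info:::current_clock_Hz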

Tuesday Apr 17, 2012

Remote Desktop from Solaris to Windows

Building rdesktop on Solaris 10

[Read More]

Sunday Jan 09, 2011

trackerd

I've upgraded my laptop to the latest Oracle Solaris 11 Express (snv_151a X86) and at first glance, I have to say it seems a good step forward compared to my previous OpenSolaris... but... Like all good nerds, I was exploring the new system, playing with configurations and installing the typical nerd software I need, when I stumbled on a process (eating a lot of CPU and RAM), /usr/bin/trackerd, that I had never seen on my previous OpenSolaris installation...
Nothing special: it's not a virus or an E.T.; it's just the default GNOME indexing/tracking tool that, starting with this release, is installed and enabled by default:


root@vesuvio:~# pkg info tracker
          Name: library/desktop/search/tracker
       Summary: Desktop search tool
   Description: Desktop search tool
      Category: Applications/System Utilities
         State: Installed
     Publisher: solaris
       Version: 0.5.11
 Build Release: 5.11
        Branch: 0.151.0.1
Packaging Date: Fri Nov 05 05:52:57 2010
          Size: 3.09 MB
          FMRI: pkg://solaris/library/desktop/search/tracker@0.5.11,5.11-0.151.0.1:20101105T055257Z
root@vesuvio:~#


Since I'm very conscious about my CPU clock cycles/RAM bits, and my nerd software doesn't like the CPU/MEM spikes that could easily be triggered by that software, I simply removed the package:


root@vesuvio:~# pkg uninstall tracker
                Packages to remove:     1
           Create boot environment:    No
               Services to restart:     2
PHASE                                        ACTIONS
Removal Phase                                373/373

PHASE                                          ITEMS
Package State Update Phase                       1/1
Package Cache Update Phase                       1/1
Image State Update Phase                         2/2
root@vesuvio:~#



I admit that this solution may sound a bit 'extreme', but I really don't like/use this piece of software. I don't like that kind of program running in the background, browsing and crawling the directories of your HD to index the content of your documents, pictures, emails, etc. It could be a nice feature on an average end-user desktop/workstation, but not for a laptop that I mainly use as my nerd-lab test bench ;-)


People interested in using this tool can find plenty of ways to throttle down its CPU/MEM usage, exclude directories, assign specific paths to monitor, etc...



  • Tracker Project home page on GNOME

  • HOWTO that explains how to customize the tracker daemon behaviour

Monday May 31, 2010

Sun ... set.

I had a dream. It has been great, it has been fun: it has been SUN.


Now the Oracle says: ... ibis ... redibis ... non ... morieris in bello


[... you shall go ... you shall return ... not ... die in the war]


Our job will be to place the comma in the right place to get the correct sentence.


Kick butts and have fun!


Sunnies! ;-)

Thursday May 07, 2009

Make X listen on external TCP ports (Solaris and OpenSolaris)


In Solaris 10 and OpenSolaris, the X Server is enabled by default and controlled via SMF (Service Management Facility):



# ps -aef | grep Xsun
root 4767 4764 0 15:10:44 ?  0:01 /usr/openwin/bin/Xsun :0 -defdepth 24 -nolisten tcp -nobanner -auth /var/dt/A:0


# svcs -xv cde-login
svc:/application/graphical-login/cde-login:default (CDE login)
State: online since Thu May 07 15:10:43 2009
See: man -M /usr/dt/share/man -s 1 dtlogin
See: /var/svc/log/application-graphical-login-cde-login:default.log
Impact: None.
#



The default installation doesn't make the X Server listen on the TCP port:



# netstat -naf inet | grep 6000

#



and this is indeed a notable security feature, but sometimes it's also useful to have the X Server available and reachable over TCP.


The X properties are defined in the /application/x11/x11-server service, and we can see all of them with the following command:


# svccfg -s /application/x11/x11-server listprop
options                       application
options/default_depth         integer  24
options/server                astring  /usr/openwin/bin/Xsun
options/server_args           astring
options/stability             astring  Evolving
options/value_authorization   astring  solaris.smf.manage.x11
options/tcp_listen            boolean  false
fs-local                      dependency
fs-local/entities             fmri     svc:/system/filesystem/local
fs-local/grouping             astring  require_all
fs-local/restart_on           astring  none
fs-local/type                 astring  service
network-service               dependency
network-service/entities      fmri     svc:/network/service
network-service/grouping      astring  require_all
network-service/restart_on    astring  none
network-service/type          astring  service
name-services                 dependency
name-services/entities        fmri     svc:/milestone/name-services
name-services/grouping        astring  require_all
name-services/restart_on      astring  refresh
name-services/type            astring  service
general                       framework
general/action_authorization  astring  solaris.smf.manage.x11
general/entity_stability      astring  Evolving
start                         method
start/exec                    astring  "/lib/svc/method/x11-server -d 0 -c %i %m"
start/timeout_seconds         count    0
start/type                    astring  method
stop                          method
stop/exec                     astring  ":kill -TERM"
stop/timeout_seconds          count    10
stop/type                     astring  method
tm_common_name                template
tm_common_name/C              ustring  "X Window System server"
tm_man_Xserver                template
tm_man_Xserver/manpath        astring  /usr/openwin/share/man
tm_man_Xserver/section        astring  1
tm_man_Xserver/title          astring  Xserver
tm_man_Xsun                   template
tm_man_Xsun/manpath           astring  /usr/openwin/share/man
tm_man_Xsun/section           astring  1
tm_man_Xsun/title             astring  Xsun
tm_man_Xorg                   template
tm_man_Xorg/manpath           astring  /usr/X11/share/man
tm_man_Xorg/section           astring  1
tm_man_Xorg/title             astring  Xorg


In particular, the switch that controls whether or not the X server has to listen on TCP is:



# svccfg -s /application/x11/x11-server listprop options/tcp_listen

options/tcp_listen boolean false

#



So in this case we can enable it with the following command:



# svccfg -s svc:/application/x11/x11-server setprop options/tcp_listen = true

# svccfg -s /application/x11/x11-server listprop options/tcp_listen

options/tcp_listen boolean true

#



and stop/start the cde-login service to make the change effective:



# svcadm disable cde-login

# svcadm enable cde-login
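
(Equivalently, a single restart of the service should do the trick:)


# svcadm restart cde-login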



and now we see the different behaviour:



# ps -aef | grep Xsun

root 4844 4834 1 15:22:07 ? 0:00 /usr/openwin/bin/Xsun :0 -defdepth 24 -nobanner -auth /var/dt/A:0-N_aqCj

#



and also that the server is now listening on the TCP port:



# netstat -naf inet | grep 6000

*.6000 *.* 0 0 49152 0 LISTEN

*.6000 *.* 0 0 49152 0 LISTEN

#



The server is now also listening on TCP port 6000, and we can connect to X from outside.
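
As a quick test from another machine (a sketch: 'myhost' is a placeholder for the Solaris box, and you'll still need to grant access on its X session, e.g. with xhost or proper X authority handling):


remote$ xterm -display myhost:0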


Thursday Nov 13, 2008

Two years @Sun.COM

After our new product launch, I believe this wonderful system is another tool added to our already vast portfolio of products we can sell to customers. But I'm still convinced that customers should be buying more than just wonderful products (hopefully ours). Products by themselves may not be enough.

Sun may have the very best the market can offer at a given moment in time, but sooner or later there could always be someone else with a better idea: that's life, that's part of the game.

So, we (SUN) must make the difference!

This difference could be providing customers with complete solutions and bottom-up services. We should not only provide the hardware bricks, but also the expertise, skills, guidance and leadership to build the customer's infrastructure... I don't wanna be only "another brick in the wall". I would like customers, before making their own decisions, to ask Sun for ideas, because they trust us as technological partners and as people.
