Wednesday, May 28, 2014

Solaris 11.2: Time based access limitations

Let's assume you want to limit SSH logins for the user junior to a certain timespan, say weekdays between 13:10 and 17:00. With Solaris 11.2 it's really easy to limit access to certain services based on time.
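A hedged sketch of how this could look: the `access_times` key exists in user_attr(4), but the exact service name and time-range syntax used here are assumptions, so check the man page on your system:

```
# allow user junior to log in via ssh only on weekdays (Wk)
# between 13:10 and 17:00; the value syntax is assumed here
usermod -K access_times="{ssh}:Wk1310-1700" junior
```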


Sunday, May 25, 2014

New Solaris 11.2 beta features: SMF stencils

As much as there is often a lot of discussion about keeping configuration items inside the SMF repository (like the hostname), it brings an important advantage: it introduces the concept of dependencies to configuration changes. Which services have to be restarted when I change a configuration item? Do you remember all the services that depend on the hostname and need a restart after changing it? SMF solves this by putting the dependency information into its configuration. You define it in the manifests.
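You can ask SMF directly which services depend on a given instance, and are therefore restart candidates after a configuration change there. A short sketch (the instance name is just an example):

```
# -D lists the services that depend on the named instance
svcs -D svc:/milestone/network:default
```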

However, as much configuration as you may put into SMF, most applications still insist on getting their configuration from traditional configuration files, like resolv.conf for the resolver or puppet.conf for Puppet. So you need a way to take the information out of the SMF repository and generate a config file from it. In the past the way to do so was some scripting inside the start method that generated the config file before the service started.

Solaris 11.2 offers a new feature in this area: a generic method that enables you to create config files from SMF properties. It's called SMF stencils.
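As a hedged sketch of the idea (the token syntax and the property names are assumptions based on the beta documentation, not verified here): a stencil is a template file in which placeholders are replaced by SMF property values when the service is refreshed, for example:

```
# hypothetical stencil rendering a resolv.conf-style file from
# properties of the dns/client service; $%{pg/prop} tokens are
# replaced with the current SMF property values
domain $%{config/domain}
nameserver $%{config/nameserver}
```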


Saturday, May 24, 2014

A glimpse into Solaris 11.2 specific Puppet components

Now that you have a working Puppet testbed in your Solaris 11.2 beta installation, it's time to try some Solaris-specific stuff. Oracle provides a number of additional components in order to control Solaris specifics like boot environments, VNICs or SMF. You can find the respective code at
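A hedged sketch of what using such a component could look like (the resource type and attribute names here are assumptions based on Oracle's Solaris providers, not verified against the module):

```puppet
# hypothetical manifest: let Puppet ensure a boot environment exists
boot_environment { 'testbe':
  ensure      => present,
  description => 'BE managed by Puppet',
}
```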


Basic Puppet installation with Solaris 11.2 beta

At the recent announcement we talked a lot about the Puppet integration. But how do you set it up? I want to show this in this blog entry.

However, the example I'm using is even useful in practice. Due to the extremely low overhead of zones, I'm frequently seeing really large numbers of zones on a single system. Changing /etc/hosts or changing an SMF service property on 3 systems is not that hard. Doing it on a system with 500 zones is ... let's say it diplomatically ... a job you give to someone you want to punish.

Puppet can help in this case by managing the configuration and easing its distribution. You describe the changes you want to make in a file or set of files (called a manifest in the Puppet world) and then roll them out to your servers, no matter if they are virtual or physical. A warning first: Puppet is a really, really vast topic. This article is really basic and goes no more than toe-deep into the possibilities and capabilities of Puppet. It doesn't try to explain Puppet ... just how to get it up and running and do basic tests. There are many good books on Puppet. Please read one of them, and the concepts and the example will get much clearer immediately.
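To give a flavor of the basic setup on Solaris 11.2 (the package name and the SMF property path are assumptions from the beta; verify with pkg(1) and svccfg(1M) on your system), a minimal agent setup could look like:

```
# install Puppet from the IPS repository (package name assumed)
pkg install puppet
# point the agent at its master and start it; the config/server
# property path is assumed, check with: svccfg -s puppet listprop
svccfg -s puppet:agent setprop config/server = puppet.example.com
svcadm refresh puppet:agent
svcadm enable puppet:agent
```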


Less known Solaris features: synchronous svcadm

Sometimes small options are really useful. When you enable a service, for example the Apache HTTPD, you enter svcadm enable apache22. This command returns immediately. There is no direct feedback at command return whether the service has actually started. So you often see people doing something like this.
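The -s flag makes the call synchronous: svcadm only returns once the service has reached its final state. A short sketch:

```
# asynchronous (default): returns immediately, state unknown
svcadm enable apache22
svcs apache22              # poll by hand until STATE is "online"

# synchronous: returns only when the service is online
# (or exits non-zero if it fails to come up)
svcadm enable -s apache22
```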


Less known Solaris features: pwait

This is a nifty small tool that I'm using quite often in scripts that stop something, then do some tasks afterwards, when I don't want to hassle around with the contract file system. It's not a spectacular feature, but it's useful and relatively little known. An example: as I wrote long ago, you should never use kill -9, because often the normal kill is intercepted by the application, which then does some cleanup tasks before really stopping the process. So just because kill has returned doesn't imply that the process is gone. How do you wait for a process to disappear?
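With pwait this becomes a two-liner; $pid below is a placeholder for the process id of whatever you stopped:

```
kill "$pid"      # normal SIGTERM; the app may run cleanup handlers
pwait "$pid"     # blocks until that process has really exited
# ... now it is safe to run the follow-up tasks
```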


Less known Solaris features: ptime

Long-time readers of my blog know that I prefer prstat over top at any time. The microstate accounting in prstat gives you a much deeper insight. Using a tool not capable of microstate accounting is like watching a video in 240p instead of 4K Ultra HD (to stay with this analogy: DTrace is like 8K ;) ). prstat does a really useful job of telling you what's happening at the moment in a process.
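For example, the live microstate view looks like this:

```
# -m: microstate columns (USR, SYS, SLP, LAT, ...) showing the
#     percentage of time spent in each state
# -L: one line per thread instead of one per process
prstat -mL
```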

However, sometimes it's interesting to know what happened in the past, since the startup of the process. And there is a tool for that: with ptime -m you can look up the microstate accounting information accumulated since the creation of the process.
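A short sketch; -p attaches to an already running process instead of timing a newly started command:

```
# cumulative microstate accounting since process creation,
# here for the oldest running sshd (pgrep -o picks the oldest)
ptime -m -p "$(pgrep -o sshd)"
```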


Checking VXFS QIO usage

This isn't something new; however, given that I saw this problem at a customer just last week, I would like to point out something you have to keep in mind when using the Oracle DB with the Veritas File System.

When you have strange performance problems with VXFS, please ensure that the database isn't sitting around in the POSIX inode r/w-lock with its database writes. Check whether Quick I/O (solves the issue) or ODM (solves the issue as well) is really activated. QIO is not activated just by setting FILESYSTEMIO_OPTIONS=setall or by mounting the filesystem with the mount option qio. You have to do more, and it looks like this is sometimes forgotten by people setting up or migrating a system.
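A first hedged sanity check (the qiostat path and invocation below are assumptions about the VxFS Quick I/O tooling, not verified here; the datafile path is purely illustrative):

```
# is the qio mount option present at all? (necessary, not sufficient)
mount -v | grep -i qio
# the Quick I/O statistics tool can show whether QIO is actually
# being used for the datafiles (path and arguments assumed)
/opt/VRTS/bin/qiostat /oradata/*.dbf
```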


Creating a zpool configuration out of a bunch of F40/80 cards

In some customer situations I'm using a number of Oracle Sun Flash Accelerator F40 or F80 PCIe cards to create flash storage areas inside a server. For example, I had 8 F40 cards in a server, by using a SPARC M10 and a PCIe expansion box, which enables you to connect up to 11 F40/F80 cards per expansion box.

The configuration with 8 F80 cards, for example, is one I'm using on very special occasions and for special purposes; in this case it was a self-written application of a customer needing a lot of flash storage inside the server. I won't disclose more. On the other side, I'm quite frequently sizing systems with two F80 cards for "separated ZIL" purposes. Either way, whether you use the SSDs as data storage or as a separated ZIL: when you do mirroring, you have to ensure that no mirror has both halves on the same card.

From the system's perspective you see four disk devices per F40/F80 card, with 100 or 200 GB capacity per disk respectively, and thus you can just add them to your zpool configuration. However, configuring the system was a little bit impractical. The problem: it's not that easy to create a configuration that ensures that no mirror has its two vdevs on a single F40/F80 card. Perhaps there is an easier way, however I haven't found one so far.

It's a little bit hard to find acceptable disk pairs when you are looking at PCI trees like /devices/pci@8000/pci@4/pci@0/pci@8/pci@0/pci@0/pci@0/pci@1/pci@0/pci@1/pciex1000,7e@0/iport@80:scsi. Well, with two cards it's not that hard, but still not a nice job. After doing this manually a few times, I decided that with 8 or 22 cards, doing it by hand is a job for people who kill baby seals, listen to Justin Bieber, or do equally horrible things.

But I haven't committed such crimes, and this problem is nothing that a little bit of shell-fu can't solve. You can do it in a single line of shell. Well ... kind of a single line of shell.
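A hedged sketch of that shell-fu (the file disks.txt and its two-column "diskname cardpath" format are assumptions for illustration; in practice you would generate that list from the /devices links under /dev/rdsk): sort the disks by the card's device path, then pair entry i with entry i+N/2, so the two halves of every mirror come from different cards. This holds as long as no single card's disks straddle the midpoint of the sorted list, which is the case when every card contributes the same number of disks.

```shell
# disks.txt (hypothetical): one "diskname cardpath" pair per line.
# Sorting by card path groups each card's disks together; pairing
# disk i with disk i+N/2 puts the mirror halves on different cards.
sort -k2 disks.txt | awk '{disk[NR] = $1}
END {
  half = NR / 2
  for (i = 1; i <= half; i++)
    printf "mirror %s %s ", disk[i], disk[i + half]
  print ""
}'
```

The output is a ready-made vdev list, so the pool can then be created with something like `zpool create flashpool $( ... )` wrapped around that pipeline.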




