By user13366125 on May 28, 2014
Let's assume you want to limit ssh logins for the user junior to a certain timespan, say weekdays between 13:10 and 17:00. With Solaris 11.2 it's really easy to limit access to certain services based on time.
As much as there is often a lot of discussion about keeping configuration items inside the SMF repository (like the hostname), it brings an important advantage: it introduces the concept of dependencies to configuration changes. Which services have to be restarted when I change a configuration item? Do you remember all the services that depend on the hostname and need a restart after changing it? SMF solves this by putting the dependency information into its configuration. You define it with the manifests.
However, as much configuration as you may put into SMF, most applications still insist on getting their configuration from traditional configuration files, like resolv.conf for the resolver or puppet.conf for Puppet. So you need a way to take the information out of the SMF repository and generate a config file from it. In the past the way to do this was some scripting inside the start method that generated the config file before the service started.
Solaris 11.2 offers a new feature in this area: a generic mechanism for creating config files from SMF properties. It's called SMF stencils.
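A sketch of the idea (the file name and property names below are illustrative; see the SMF stencil documentation for the exact markup): a stencil file mixes literal text with `$%{pg/prop}` references that get expanded from the service's SMF properties.

```
# /lib/svc/stencils/myapp.stencil -- rendered into the real config file
# $%{config/port} is replaced with the value of the service's config/port property
port = $%{config/port}
loglevel = $%{config/loglevel}
```

The stencil is tied to a service via a property group of type configfile naming the stencil and the output path; after changing a property, a `svcadm refresh` regenerates the file.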
Now that you have a working Puppet testbed in your Solaris 11.2 beta installation, it's time to try some Solaris-specific stuff. Oracle provides a number of additional modules to control Solaris specifics like boot environments, VNICs or SMF. You can find the respective code at java.net.
At the recent announcement we talked a lot about the Puppet integration. But how do you set it up? That's what I want to show in this blog entry.
However, the example I'm using is actually useful in practice. Due to the extremely low overhead of zones, I frequently see really large numbers of zones on a single system. Changing /etc/hosts or changing an SMF service property on 3 systems is not that hard. Doing it on a system with 500 zones is ... to put it diplomatically ... a job you give to someone you want to punish.
Puppet can help in this case by easing the management and distribution of configuration. You describe the changes you want to make in a file or set of files called a manifest in the Puppet world, and then roll them out to your servers, no matter whether they are virtual or physical. A warning first: Puppet is a really, really vast topic. This article is really basic and goes no more than toe-deep into the possibilities and capabilities of Puppet. It doesn't try to explain Puppet ... just how to get it up and running and do basic tests. There are many good books on Puppet. Please read one of them, and the concepts and the example will become much clearer immediately.
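To give a feel for what a manifest looks like, a minimal sketch (the node name and IP are made up) that pins an /etc/hosts entry, which Puppet then enforces on every server it is applied to:

```puppet
# site.pp -- you describe the desired state, not the steps to get there
host { 'db01.example.com':
  ensure       => present,
  ip           => '192.168.10.15',
  host_aliases => ['db01'],
}
```

Apply the same manifest to 3 systems or to 500 zones; the effort on your side stays the same.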
Sometimes small options are really useful. When you enable a service, for example the Apache HTTPD, you enter svcadm enable apache22. This command returns immediately. There is no direct feedback at command return telling you whether the service has actually started. So you often see people doing something like this.
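Presumably the small option meant here: svcadm's -s flag makes the command synchronous, so the waiting loop people usually script by hand goes away.

```shell
# -s blocks until the service reaches a final state (online, or
# maintenance if the start method failed) instead of returning at once
svcadm enable -s apache22
```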
This is a nifty small tool that I use quite often in scripts that stop something and do some tasks afterwards, when I don't want to hassle around with the contract file system. It's not a spectacular feature, but it's useful and relatively little known. An example: as I wrote long ago, you should never use kill -9, because the normal kill is often intercepted by the application, which then does some cleanup tasks first before really stopping the process. So just because kill has returned, it doesn't mean the process is gone. How do you wait for a process to disappear?
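On Solaris the usual answer is pwait(1), which blocks until the given processes have exited (the daemon name below is made up):

```shell
pid=$(pgrep -x mydaemon)   # hypothetical example process
kill "$pid"                # polite SIGTERM; the app may clean up first
pwait "$pid"               # returns only once the process is really gone
# ... follow-up tasks are safe from here on
```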
Long-time readers of my blog know that I prefer prstat over top at any time. The microstate accounting in prstat gives you a much deeper insight. Using a tool that can't use microstate accounting is like watching a video in 240p instead of 4K Ultra HD (to stay with this analogy: dtrace is like 8K ;) ). prstat does a really useful job of telling you what's happening in a process at the moment.
However, sometimes it's interesting to know what has happened in the past, since the startup of the process. And there is a tool that does exactly this: with ptime -m you can look up the microstate accounting information gathered since the creation of the process.
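Assuming a running httpd as the example (pgrep -o picks its oldest process):

```shell
# microstate accounting accumulated since process creation
ptime -m -p $(pgrep -o -x httpd)
```

The output breaks the elapsed time down into the individual microstates, such as user time, system time, time spent waiting on locks, sleeping and waiting for CPU.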
This isn't something new; however, given that I saw this problem at a customer just last week, I would like to point out something you have to keep in mind when using the Oracle DB with the Veritas filesystem.
When you have strange performance problems while using VxFS, please ensure that the database isn't sitting around in the POSIX inode r/w-lock with its database writes. Check whether Quick I/O (solves the issue) or ODM (solves the issue as well) is really activated. QIO is not activated just by setting FILESYSTEMIO_OPTIONS=setall or by mounting the filesystem with the mount option qio. You have to do more, and it looks like this is sometimes forgotten by people setting up or migrating a system.
In some customer situations I'm using a number of Oracle Sun Flash Accelerator F40 or F80 PCIe cards to create flash storage areas inside a server. For example, I had 8 F40 cards in one server by using a SPARC M10 and a PCIe expansion box, which enables you to connect up to 11 F40/F80 cards per expansion box.
The configuration with 8 F80 cards, for example, is one I use on very special occasions and for special purposes; in this case it was a customer's self-written application needing a lot of flash storage inside the server. I won't disclose more. On the other hand, I quite frequently size systems with two F80 cards for "separated ZIL" purposes. Whether you use the SSDs as data storage or as a separated ZIL: when you do mirroring, you have to ensure that no mirror has both halves on the same card.
From the system's perspective you see four disk devices per F40/F80 card, with 100 respectively 200 GB capacity per disk, and thus you can just add them to your zpool configuration. However, configuring the system was a little bit impractical. The problem: it's not that easy to create a configuration that ensures that no mirror has its two vdevs on a single F40/F80 card. Perhaps there is an easier way, but I haven't found it so far.
It's a little bit hard to find acceptable disk pairs when you are looking at PCI trees like /devices/pci@8000/pci@4/pci@0/pci@8/pci@0/pci@0/pci@0/pci@1/pci@0/pci@1/pciex1000,7e@0/iport@80:scsi. Well, with two cards it's not that hard, but still not a nice job. After doing this manually a few times, I concluded that with 8 or 22 cards, doing it manually is a job for people who club baby seals, listen to Justin Bieber or do equivalently horrible things.
But I haven't committed such crimes, and this problem is nothing that a little bit of shell-fu can't solve. You can do it in a single line of shell. Well ... kind of a single line of shell.
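A sketch of the idea with toy data (on a live system you would derive the card identifier for each disk from its /devices link, with the per-disk components of the PCI path stripped off):

```shell
# Toy input in the format "<disk> <card>"; the disk and card names are made up.
cat <<'EOF' >/tmp/disks.txt
c0t0d0 cardA
c0t1d0 cardA
c0t2d0 cardB
c0t3d0 cardB
EOF

# one file per card, one disk per line
awk '{ print $1 > ("/tmp/" $2) }' /tmp/disks.txt

# paste the per-card lists side by side: each output row takes one disk
# from each card, so no mirror ever gets both halves on the same card
paste /tmp/cardA /tmp/cardB | awk '{ print "mirror", $1, $2 }' >/tmp/mirrors.txt
cat /tmp/mirrors.txt
```

The resulting lines can go straight into a `zpool create` command.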
For a long time the maximum number of groups a user could belong to was 16, albeit there was a way to get to 32. In Solaris 11 and recent versions of Solaris 10, the maximum number of groups a user can belong to is 1024 (which is the same limit Windows sets in this regard). It's easy to set the new limit.
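The tunable behind this is ngroups_max in /etc/system:

```
# /etc/system -- takes effect at the next reboot
set ngroups_max=1024
```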
After a reboot this change will be active. But why isn't this the default? There are good reasons for it, and I will show you one of them in this entry. Like using two digits for the year or a signed 32-bit integer for storing the system time, the issue has its root cause in a decision made a long time ago ... in this case at least 25 years ago. And often, just changing something breaks stuff that is really old but still in use.
Experienced Solaris users who have tuned their Solaris system for up to 32 groups per user already know the component that breaks with more than 16 groups, because the first boot after the change in /etc/system delivers a warning about it.
However, as I already said, there has been a solution for this problem since Solaris 11.1. This blog entry will show the workaround in action.
You have allowed junior to edit the httpd.conf and you are able to monitor the changes with pfedit. However, there is a little problem: she or he can't restart the Apache daemon to make the new config active. When junior tries to restart the service, he or she just gets a "permission denied".
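One common way to fix this is via RBAC: give the service an action authorization and hand that authorization to junior. A sketch (the authorization name and service FMRI below are illustrative, and the authorization also has to be defined in auth_attr):

```shell
# holders of this authorization may restart/refresh the service
svccfg -s apache22 setprop general/action_authorization = \
    astring: solaris.smf.manage.apache22
svcadm refresh apache22

# hand the authorization to junior
usermod -A +solaris.smf.manage.apache22 junior
```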
Read more at c0t0d0s0.org.
You have allowed junior to edit the httpd.conf, and one nice evening you are sitting at home. Then you get alerts on your mobile: webserver down. You log into the server. You check the httpd.conf. You see an error. You correct it. You look into the change log. Nothing. You ask your colleagues who made this change. Nobody. Dang. As always. A classic "whodunit".
Okay, to prevent this for future changes, you want to record this kind of information. And pfedit is really useful for doing exactly that.
It's a really nifty feature: let's assume you have a config file on your system and you want to allow your junior fellow admin to edit it from time to time, but you don't want to grant him any further rights, because this machine is too important.
Solaris 11.1 has an interesting feature to delegate the privilege to edit just a single file. The tool enabling this is called pfedit.
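The grant itself is an authorization whose name is derived from the file's path. A sketch, assuming an Apache 2.2 config under /etc/apache2 (adjust the path to your system):

```shell
# junior may edit exactly this one file, and nothing else
usermod -A +solaris.admin.edit/etc/apache2/2.2/httpd.conf junior

# junior then edits it via pfedit instead of vi
pfedit /etc/apache2/2.2/httpd.conf
```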
Brendan Gregg has written an interesting piece about finding performance problems: "The USE method addresses shortcomings in other commonly used methodologies".
It's a good paper, however ... well, let's say I don't understand why so many people find it especially cool or especially good, because in the end it isn't anything really new. Don't get me wrong: it's good. But not extraordinarily good. Like many methodologies, it's basically just codified common sense with a personal spin. So I would prefer to call it "my-personal-way-of-doing-stuff" instead of calling it a methodology. There is nothing new in it. Just a lot of common sense.
I really think that performance analysis is not so much about a "methodology" you can simply follow and that magically leads you to a result. It's about a mindset for tackling problems, about being structured in your approach, about being prepared, and a lot about knowing stuff.
As I do performance analysis quite frequently, I have created my own "methodology", or more correctly ... my own mindset for doing this kind of work. I don't call it a method or a methodology. Perhaps it's useful for one reader or another ... so I'm writing it down here.