Thursday May 17, 2007

Best Practices - Using Probes, Pre and Post actions

Sun Connection allows you to upload scripts as local components. There are several types of local components; three of them are Probes, Pre actions, and Post actions.

A Pre action runs prior to deploying a package. It can be used to stop services, notify users on the machine, etc.
A Post action is executed after the package has been deployed. It can be used to restart services, clean up temporary files, extract any tarballs that were deployed, etc.
A Probe is a Boolean condition: if the probe returns "Yes", the job continues; if it returns "No", the job stops. Probes are used to verify requirements for deployment: enough disk space in certain partitions, enough RAM, low CPU load, no users logged in, etc.

In general, Probes, Pre and Post actions can be written in shell, Python, Perl, etc. The only limitation is that the target machine needs to have that interpreter installed. When writing the script for these actions, always state the interpreter at the beginning, otherwise the execution will fail. For example, the first line of a shell script would be:

#!/bin/sh

With regard to exit codes, the convention is the same for all of these scripts: an exit value of "0" indicates that all went well, and any other exit code indicates a failure. For probes, exit 0 means the condition is met, i.e. continue with the job.
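As an illustration, a minimal disk-space probe might look like the sketch below. The filesystem checked and the 500 MB threshold are made-up example values, not Sun Connection defaults:

```shell
#!/bin/sh
# Hypothetical probe: continue the job only if / has roughly 500 MB free.
# REQUIRED_KB and the filesystem are illustrative values.
REQUIRED_KB=512000
AVAIL_KB=`df -k / | awk 'NR==2 {print $4}'`
if [ "$AVAIL_KB" -ge "$REQUIRED_KB" ]; then
    exit 0    # condition met - the job continues
else
    exit 1    # condition not met - the job stops
fi
```

Note that the probe does nothing but read and compare, so it is safe to run during a simulation as well.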

One important thing about Probes is that, unlike the other components, Probes also run in simulation mode. The reason is that a simulation should check whether the environment is ready for the job. For that reason, be very careful when writing probes: make sure they don't change anything in the environment, because those changes would take effect even in simulation mode.

A few tips:
1) Always write the script and test it outside Sun Connection first. This speeds up debugging.
2) Always assume the script can be executed twice, so before changing anything, check whether it has already been changed.
3) When these scripts run, standard output and standard error (stdout and stderr) go to the log file. This log file can be viewed per machine through the console.
4) The log is available only after the job has completed.
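Tip 2 is about idempotence: guard every change so that a second run is harmless. A minimal sketch (the file name and entry are invented for illustration):

```shell
#!/bin/sh
# Hypothetical post action: append a config entry only if it is not
# already present, so running the script twice has no extra effect.
FILE=${FILE:-/tmp/demo.conf}
ENTRY="export_host:/export/data mounted on /data"
if grep "/export/data" "$FILE" > /dev/null 2>&1; then
    exit 0    # change already made on a previous run
fi
echo "$ENTRY" >> "$FILE"
exit 0
```

The same check-before-change pattern applies to starting services, creating users, editing crontabs, and so on.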

If you have any questions or suggestions for future best practices, please feel free to contact me at Eran.Steiner-AT-Sun-DOT-com.

Happy patching!

Eran Steiner
Field Enablement Team


Friday May 04, 2007

More Buried Treasure: N1 SPS Developer Guidelines on BigAdmin

Digging around on BigAdmin can really pay off where N1 SPS is concerned. In my last post, I talked about the N1 SPS Usage Best Practices Guide (PDF). For this post, I'll highlight the N1 SPS Developer Guidelines (PDF) document.

This guide describes a lot of key information about how to work with the N1 SPS XML schema for containers, components, and plans. Personal highlights for me:

  • Pro/Con tables describing the advantages and disadvantages for using key features of the N1 SPS XML schema
  • Good section describing how to use variables in components and plans
  • Detailed discussion of how to use the <execNative> element to execute commands native to the target operating system
  • Command line samples for common or useful commands
Definitely worth a look, as is the IT Management Hub on BigAdmin.

Friday Apr 27, 2007

Hidden Jewel: N1 SPS Usage Best Practices on BigAdmin

BigAdmin can be a tough place to find information. It's big, it's community driven, and it's trying to satisfy the needs of a variety of customers. It's your best friend -- and, sometimes, your best friend can be incredibly frustrating to work with.

So let me be your BigAdmin search engine, and point you to a great document you might not know about -- the N1 SPS Usage Best Practices Guide (PDF). Peter Charpentier and Toli Kuznets, the authors, have compiled a bunch of great information here to help you get the most out of N1 SPS software. And it's short and sweet -- 31 pages total. Not even long enough to put you to sleep.

 The two sections of this guide that got me interested?

  • CLUI Command Deciphering (p. 12, despite what it says in the table of contents) - This section breaks down the N1 SPS command line syntax, mapping out the architecture of the CLI.
  • Modeling (p. 17) - This section describes two approaches to modeling your applications in N1 SPS -- essentially, how you break down your application into distinct parts, then capture those parts as SPS objects.
This guide's been available on BigAdmin for a while, and perhaps many of you have seen it already. But, for those who haven't, there's a lot of good information there, and on the IT Management Hub on BigAdmin.


Friday Apr 20, 2007

Best Practices - Improving the cache

Sun Connection automatically manages downloads from Sun, RedHat and Suse. When a job is sent to 100 machines, each machine requests the proper patch or RPM from the management server. The management server then goes once to the internet, downloads the patch or RPM, caches it and provides it to all the machines.

This cache size is limited to make sure the disk does not fill up. The default value, however, is relatively small, only 512 MB, so if you have a big baseline to download it is probably not going to be enough. The following procedure shows how to increase the cache size so you can optimize the cache operation.

Increasing the cache size in Sun Connection 1.1

All server components have a "uce.rc" file with default values, and a ".uce.rc" file with values that were customized by the user. The configuration files are located in:

$UCEDIR/server/cgi-bin/

By default, $UCEDIR is /usr/local/uce/ on Linux and /opt/SUNWuce/ on Solaris.
Never change the file "uce.rc" itself. Instead, copy the relevant line from "uce.rc" into ".uce.rc" and modify it there.
The relevant line is:
( all ) ( invisible.server.__general.cache_size, 512000 );

You can easily copy this line with the following command:
# cd /usr/local/uce/server/cgi-bin/
# grep general.cache_size uce.rc >> .uce.rc
Before doing this, make sure the line is not already present in .uce.rc.
In addition, make sure you have enough disk space available under /usr/local/uce/server/ (or /opt/SUNWuce/server on Solaris); that is where the cache is stored.
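A defensive version of the copy step, combining both checks, could look like the sketch below. CFGDIR is parameterized here purely for illustration; the real directory is the cgi-bin path shown above:

```shell
#!/bin/sh
# Guarded version of the copy step: append the cache_size line only if
# .uce.rc does not already contain one. CFGDIR defaults to the Linux
# location; on Solaris it would be /opt/SUNWuce/server/cgi-bin.
CFGDIR=${CFGDIR:-/usr/local/uce/server/cgi-bin}
if [ -f "$CFGDIR/uce.rc" ]; then
    if grep general.cache_size "$CFGDIR/.uce.rc" > /dev/null 2>&1; then
        echo "cache_size already set in .uce.rc - edit it in place"
    else
        grep general.cache_size "$CFGDIR/uce.rc" >> "$CFGDIR/.uce.rc"
    fi
else
    echo "uce.rc not found under $CFGDIR - adjust CFGDIR first"
fi
```

Running it a second time takes the "already set" branch instead of appending a duplicate line.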

Then, change the value in .uce.rc. The value is in kilobytes; for example, to get about 2.5 GB of cache, change it to:
( all ) ( invisible.server.__general.cache_size, 2500000 );

Anywhere from 2.5 GB up to 5 or even 10 GB of cache is recommended.

You would then want to restart the server:
If the management server is installed on a Solaris machine:
# svcadm disable SUNWuce/server

Wait for the service to be offline:
# svcs -a | grep SUNWuce | grep server
disabled 10:47:21 svc:/application/SUNWuce/server:default

Restart the service:
# svcadm enable SUNWuce/server

If the management server is installed on a Linux machine:
# /etc/init.d/uce_server stop
# /etc/init.d/uce_server start

If you have any questions or suggestions for future best practices, please feel free to contact me at Eran.Steiner-AT-Sun-DOT-com.

Happy patching!

Eran Steiner
Field Enablement Team



