Friday Sep 07, 2007

Using Python API scripts with Sun Connection

Did you know that you can use Python API scripts to automate Sun Connection tasks?

There's a new article in BigAdmin, Getting Started With the Sun Connection Satellite Python API,
which describes how to install and configure the Sun Connection Python
API software. It also includes a pointer to sample API scripts for both
the Solaris OS and Linux.

To find more API documentation, including the available classes, use the pydoc script. The
pydoc module is bundled with Python, and the pydoc script is usually
installed in the same place as the python interpreter:

$ cat /usr/sfw/bin/pydoc

import pydoc

Use the pydoc command to get the information that you want.

For Solaris OS

$ pydoc /opt/SUNWuce/api/python/lib/PyOsApi/
$ pydoc /opt/SUNWuce/api/python/lib/PyOsApi/DataStructures/

For Linux OS

$ pydoc /opt/local/uce/api/python/lib/PyOsApi/
$ pydoc /opt/local/uce/api/python/lib/PyOsApi/DataStructures/

For example, use this command to find the available API classes for the Solaris OS: 
# pydoc /opt/SUNWuce/api/python/lib/PyOsApi/
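The pydoc script is just a thin wrapper around the pydoc module, so you can also pull the same documentation from inside Python. Here is a small sketch using only the standard pydoc module (nothing Sun Connection specific); os.path stands in for whatever API module you point it at:

```python
import pydoc

# Render a module's documentation as plain text, the same output the
# command-line pydoc script produces. Any importable module works;
# os.path is used here only as a stand-in for an API module.
text = pydoc.render_doc("os.path", renderer=pydoc.plaintext)

# The first line is the title pydoc generates for the module.
print(text.splitlines()[0])
```

This is handy when you want to grep through a module's documentation or embed it in your own tooling rather than paging through it interactively.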

Tuesday Aug 21, 2007

Portable demo and development environment

Today I would like to share my portable demo/development environment with you, and show how some of the work gets done. First of all, here is the gear I have:

  • Toshiba Tecra M5 with 3GB of RAM
  • Solaris Nevada build 69
  • Latest FRKIT to get all the drivers working properly
  • QEMU compiled with the kernel accelerator 

Then I realized that it would be nice to be able to show the real stuff to customers, and to use the same setup for development, so I started to set up the following. I also wanted to make sure that I was running the GA code of everything, not canned demos or minimized editions of the software.

With this setup I run Solaris Nevada as the main operating system. On top of it I have installed and set up QEMU as a PC emulator, in which I can install Solaris 10. Inside this emulated PC running Solaris 10, I install Sun Connection 1.1.1, since it neither runs under Nevada nor is supported there. This gives me the ability to try new patches, patch schemes, downloads, and so on. Very, very useful. If the image breaks, I can just re-install it on a whim. I have allocated 7GB of space for this. QEMU uses a disk image, which I can copy as a backup of my Solaris 10 environment. I can also copy the image to colleagues or onto other hardware, and it will run there once QEMU is installed. It is a very useful way to virtualize PCs using only open-source and freeware tools.

N1 Service Provisioning System:
For N1SPS, I run this inside Nevada with multiple zones. I have created two very sparse zones, which do not even share /opt. I have then installed the RA under /opt2, with SSH access. With this I can easily try and install various setups and re-targets, and of course develop new code. I also have the SPS modeler for NetBeans installed, so I can easily update and write plug-ins. I should mention that it is completely unsupported to run SPS 6.0 under Nevada. Since these zones are so sparse, they can be quickly re-installed in case of issues. It takes a total of 30 minutes to set up the entire environment.
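A sparse zone of the kind described above can be sketched as a zonecfg command file. The zone name and zonepath below are hypothetical, not the ones from my setup; the default sparse-root template shares /usr, /lib, /sbin, and /platform with the global zone, but not /opt:

```
# spszone1.cfg -- hypothetical example; apply with:
#   zonecfg -z spszone1 -f spszone1.cfg
create
set zonepath=/zones/spszone1
commit
```

After that, `zoneadm -z spszone1 install` followed by `zoneadm -z spszone1 boot` brings the zone up, and the RA can then be installed under its own /opt2 inside the zone.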

I am now also working on installing and setting up SunMC 4.0 beta on my laptop, to test and develop a bit. I have not decided whether to do this with QEMU or directly inside Nevada. The main idea behind all of this is to be able to travel without relying on machines in a datacenter. Mobility is the key here.

Happy developing and demoing,




Friday Jul 27, 2007

Handy N1 SPS Import-Export Feature

The N1 Service Provisioning System just got a lot easier to use,
thanks in part to a great new import-export feature. Now, you can very
simply copy a whole raft of artifacts at once between your SPS master
servers.

You can roll a bunch of SPS artifacts up into what's called a "bundle" on, say, the master servers in your test environment. What SPS artifacts might these be? They might be things like host types,
components, component types, folders, and any plans you want.
You can then get access to these SPS artifacts from another master server.

How is this done? Well, you declare a list of "search criteria" that
represent these SPS artifacts. This list of search criteria forms the bundle template. For example, a bundle template might contain criteria for searching for component types and plans. Then, with a few simple clicks, you can save this bundle template and export it into a bundle JAR. On another master server, import that bundle JAR. Once you've imported the bundle JAR to a master server, the SPS artifacts are held on that master server.

This means, of course, that you can easily copy several SPS artifacts from one master server to another, which is great for testing things out before putting them into live environments. This new feature will make things easier for customers who really want to test complex setups in one or several SPS environments before going live. The feature is already available through the command line in the recent N1 Service Provisioning System 5.2 Update 2 release. In the upcoming N1 Service Provisioning System 6.0 release, you can also use it through the new, improved browser user interface.



