Monday Feb 15, 2010

Appliances and ROI

Overall, I think it should be cheaper and in a customer's best interest to use appliances.  It comes down to the question of where knowledge can be applied most effectively.  Of course, this assumes that the appliances:

    a)  solve the problems of the market
    b)  are partitioned (in function or pricing model) in a way that makes them a reasonable fit
    c)  have a roadmap that addresses challenges as (or before) they become important to customers
    d)  can interact with other appliances and non-appliances in standardized ways to customize overall capability

An appliance is like a contract between a customer and a vendor. It implies a level of trust that the vendor will embed some part of the skill set of a highly trained specialist by incorporating lifecycle best practices (structure, operations, performance, security, etc.) in the appliance, thereby making the benefit achievable by someone without the specialized skills.


The initial outlay for an appliance may be more expensive than the constituent elements; however, if an appliance is well crafted and is a good fit for the customer, then ROI should be achieved in a reasonable amount of time.  When comparing the ROI of an appliance and a bespoke system, you should consider not only the benefits, but also the cost avoidance that the appliance enables.  For example (numbers purely illustrative): an appliance that costs $20k more up front but saves ten hours of specialist time per month at $150/hour recovers the difference in a little over a year.


Areas to look for ROI include:


Installation/Setup time - an appliance should be much more efficient to set up than the equivalent non-appliance.  There are several ways a good appliance can streamline this phase, including standardizing and documenting interfaces with the environment, finding the right balance of packaged vs. configured at install time, enabling hands-free installs, etc.  Not only does a good setup process reduce labor costs compared to a bespoke system, it also reduces the time to get the system productive, enabling it to start generating revenue earlier.


Throughput/Performance - an appliance has the potential to embody performance and capacity planning knowledge from a broad range of environments.  If it is designed with performance in mind, it should be able to achieve strong performance numbers compared to a bespoke system.  Higher throughput enables the system to generate more revenue, and pushes off the need to add capacity.


Day-to-Day Operations - an appliance should require less skill to operate than a bespoke system, since operational best practices are included in the appliance.  Tasks such as log maintenance, state changes, provisioning of users/roles, etc. should all be handled by the appliance.  It should not be necessary to write scripts around the constituent elements to glue them together.  Specialized knowledge can be applied elsewhere, making the operational/administrative cost of the appliance less than that of the equivalent bespoke system.


Support and Maintenance - support for an appliance should turn out to be much less expensive than for an equivalent bespoke system, since the elements and configuration of the appliance will be well understood by the vendor.  Not only will this reduce downtime; when there is a problem, it will also reduce the time required to analyze and fix it.



As
with any contract, understanding the fine print is important.   An
appliance has to be designed to address these things, has to be priced
competitively, and has to be matched to the right type of problem.  If
it is, it should be a much more compelling option than building your
own.

Tuesday Sep 02, 2008

Experiences with OpenSolaris

In a comment on a previous post, someone suggested that instead of a NAS device, I try OpenSolaris + ZFS. I thought it was an interesting concept, so I decided to build a prototype on my x86 PC using the 2008.05 release to see how it might go. Here is a summary of my experience:



  • The OpenSolaris install process is much more intuitive (and less flexible) than the Solaris process I remember (+1)
  • It was nice not having to partition the boot drive manually or accept crazy old-school defaults (+1)
  • I didn't find a GUI-based ZFS tool in the administrative menus, so off to the man pages (the commands themselves are sketched after this list) (-1)
  • The pkg tool was interesting and a significant improvement in theory (+1)
  • The netbeans package was out of date (6.0 vs 6.1 or 6.5 beta) (-1)
  • The downloaded netbeans bundle and openesb bundle couldn't find Java until after I installed the old netbeans via pkg (-1, but not sure who loses the point)
  • I also wanted to create a bootable USB stick so I didn't have to allocate a drive to the OS, but couldn't get the system to boot from the stick (-1)
  • I also wanted to install Subversion, so I selected it in the pkg tool and everything hung up. Upon reboot, the system couldn't start anything and was trashed (-10)
  • I tried everything again from scratch and the Subversion package seemed to hose it again, so it was back to the DVD again (-100)
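
For reference, the ZFS piece itself was painless once I hit the man pages. Here's a minimal sketch of the kind of setup this NAS idea calls for (disk device names are hypothetical stand-ins):

# Mirrored pool plus one shared filesystem
zpool create tank mirror c4t0d0 c4t1d0   # two-disk mirrored pool
zfs create tank/shared                   # carve out a filesystem for shared data
zfs set compression=on tank/shared       # cheap win for mostly-text data
zfs set sharenfs=on tank/shared          # export it over NFS in one step
zpool status tank                        # sanity-check pool health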


At that point I came to the conclusion that it probably is not a stable enough platform for my home office NAS solution, but could be a good developer platform for someone who knows Solaris and has spare time. I'll probably install it again on this PC since I didn't use the Windows OS on it for anything significant, but until someone tells me why I have to rebuild the OS after failing to install a source control package, I'm not willing to let my productivity depend on it.


In all fairness, OpenSolaris is not intended to be a production OS, so I wasn't expecting miracles. On the other hand, this is the sort of thing that might discourage someone enough to go somewhere else for a dev platform or a unix playground...


Sunday Aug 24, 2008

home office efficiency

So, my girlfriend just started taking some college classes again. Since we live in a small loft in SF, space is in high demand, and we are now sharing a desk -- it's a pretty big desk. In her previous role as a restaurant manager, the lack of a solid shared computing infrastructure was not much of a problem. We could always share a photo or music library or use my desktop as a print server.


However, suddenly her computing demands (whether she realizes it or not) have become much more complex. Just some of the things that would help:



  • fully networked multi-function printer
  • NAS storage with push-button backup capabilities
  • secure remote access
  • 2nd office chair (check)
  • paper file storage
  • whiteboard


and the list goes on.


So, a couple of printers look interesting. The Samsung CLX-3175 seems to do most of what I want (which doesn't include printing photos). Ink prices are a sore spot with me, so I might consider the B/W equivalent. I really like the scan-to-USB-stick feature, but would hope that, when it's on the network, anyone could scan to their own system.


For NAS, I would like to get a 4- or 5-bay unit like the Synology, which looks like it has the most robust feature set of the ones I've seen.


Friday Aug 22, 2008

Interesting software development topics

So, our group's current main project ("Dynamic Infrastructure") is moving closer to becoming "Open Dynamic Infrastructure", and we have refocused on solid design and getting the core of the framework up to par. This has led to some very interesting meetings where infrastructure, software dev, and process guys all try to negotiate a common language (stressful, challenging, frustrating, and fun).


About 2/3rds of the team was in Broomfield, CO this week meeting with a very large potential customer preparing for a big "cloud" computing initiative. I think they liked what they heard, and hopefully we can move forward on this. The rest of the week was spent fleshing out some design. It's cool to see how differently each of us thinks about a problem/solution.


Anyway, it made me realize that I need some more mainstream background in software development (a.k.a. design patterns). I've dusted off the J2EE patterns book and another Java Design Patterns one, and ordered one I thought would be a good filler.


If you have any suggestions on good patterns books that focus on highly-scalable, highly-distributed software design, let me know.


I've also revisited some old blogs that I really like. .:Manageability:. is one of my favorites. Carlos Perez talks about some really good software design concepts. The site also has a great summary of open source Java projects. Thanks Carlos!


I also found this nice summary of Web2.0 concepts. I particularly like the following recap of key concepts/patterns on the last page:




[image: web2.0.patterns.tiff]

Monday Aug 04, 2008

Dynamic Infrastructure Background

I refer to Dynamic Infrastructure in quite a few of my posts, so I created this post with some background info as well as pointers to other sites with deeper content. I'll probably link to this post quite a bit in other posts, so hopefully it is useful.




What is Dynamic Infrastructure?




It has been a long-running project to capture the infrastructure and service usage patterns of some of our largest and most advanced business (as opposed to scientific) customers, and to translate what we learned into a general framework for automating lifecycle operations.


It has been a combination of requirements gathering, architecture, and evangelization, as well as an exercise in prototyping and reference implementations. Currently we are close to completing our 3rd-generation reference implementation. The roadmap has looked approximately like:



[image: di_roadmap_3 -- Dynamic Infrastructure roadmap]


Some of the first work was very tool-specific, tackling basic provisioning functionality for OS, software, and network. As the initiative has evolved, we have focused more on building a framework that provides generalized interfaces for provisioning, configuration management, monitoring, reporting, persistence, etc. The interfaces provide common terminology and dictate how adapters interact with each other by channeling them through the framework.


The first cut of this framework is about to go open source. Hopefully the legalities and logistics will be completed by the end of August.


How Is It Different?


Well, first of all, it is not a product-based suite like Tivoli or BMC. In a sense it is an open core that these and other products or tools can integrate with to bring a "best of breed" feel to the datacenter toolset. So, if you want to use VMWare for Windows provisioning, xVM Ops Center for Solaris provisioning, N1 Service Provisioning System for app server and web server provisioning, Intelliden for network device provisioning, Atrium or N(i)2 for enterprise CMDB, OpenView for SNMP monitoring, ... you can. The only criterion is that they have (or can be wrapped with) an API and that they implement one or more of the relevant adapters for Dynamic Infrastructure.


What is the Long Term Vision?


Think about it this way. You have extreme virtualization and a strong homogenizing force in infrastructure. This means you'll have a small number of tools to manage systems, OS, and virtualization, but a huge number of things to manage (think 1500 IP addresses per rack).


The purpose of all this virtualization is to create and personalize a business environment for your "customer". Above the OS and OS virtualization layer is where this personalization really becomes valuable, but the toolset is broader, and there are many more customizations that are changing more rapidly (think 4 different flavors of app server, 1000 different applications to go in them).


So, to provide high value to your customers in the new enterprise, you need to manage at least an order of magnitude more scarce resources (like IP addresses) due to virtualization, and you need to take on their application deployment tools and lifecycle management so you can live up to their expectations.


The long-term vision (at least mine) of Dynamic Infrastructure is to provide a single interaction point and a few clearly defined roles (adapter developer, solution developer, operator, administrator), along with a comprehensive language for describing the workflows, events, and policies, in order to provide extreme automation of IT while leveraging best-of-breed tools for any particular environment.


For more info, see the DI Wiki.


Tuesday Jul 22, 2008

New BigAdmin article posted

Last month I submitted an article to BigAdmin titled "Exception Handling for Sun N1 Service Provisioning System: Developer Techniques". It looks like it was posted while I was on vacation. Have a look and let me know what you think.

Monday Jul 21, 2008

State of Dynamic Infrastructure

Well, I just returned from a much-needed vacation, and thought it would be nice to capture where I think we are with our Dynamic Infrastructure project. Prior to vacation, we closed the books on the "3.0" release, which has basic capability to link a service model stored in N(i)2 with the N1SPS provisioning system.


There were several key design principles demonstrated in this release including:



  • abstraction of a CMDB (henceforth called a DCRM) as a set of interfaces and abstract classes

  • definition of a model map schema that defines how a service model integrates with existing DCRM elements

  • definition of tasks and status as a set of interfaces and abstract classes

  • definition of a provisioning scenario descriptor schema

  • implementation of an N(i)2 DCRM adapter

  • implementation of an N1SPS Provisioning adapter


The goal was to show a streamlined operator role that minimizes the number of decisions at deployment time by placing them in the service model. The operator can now deploy complex services consisting of many layers of the stack (OS, virtualized OS, app server, application, etc.) while only selecting the proper template, and specifying how many instances to create.


The model contains the structure and operational process necessary to complete the provisioning activity and update the DCRM with the new instance. This is pretty exciting stuff, but there is quite a bit of work ahead...


Just before vacation, I spent a great deal of time discussing DI with customers. The reception shows that we are on the right track.


At this point, the team will probably go back to design mode and work on the next big areas. We haven't quite defined the scope of the next phase, but I suspect some of the big areas will be:



  • refactoring / abstracting the provisioning adapter interface

  • getting a registration system in place for adapters and working on the comms between them

  • creating one or two more reference provisioning adapters (network and storage come to mind)

  • prototyping a solution developer interface that enables easier model development


That's about all I know so far. Now I need to catch up on what the team has been doing while I was out.



Revisiting blog posting


It's been quite a while since my last post, but I'm going to try to keep things up to date, and a little more focused. We'll see how it goes. I thought a little about what was holding me back from posting in the past, and there were several reasons that are probably similar to those cited by others. In any case, here are a few near the top of the list, and how I intend to address them:



  • can't seem to find time - I will set aside an hour per week to write a post. This will be the minimum blogging activity.

  • don't know what to write about - I will focus mainly on ideas destined for the Dynamic Infrastructure wiki. Other topics might come up, but I will try to stick to topics closely related to datacenter automation and operations.

  • concerned about sensitive information - OpenDI will hopefully be open-sourced soon, so much of this concern will be relieved.

  • tool problems - worst case, I'll use the built-in submission tool, but I've dusted off my ecto license and will get it working as soon as possible.

I've also updated my theme to one that I think is a little more mainstream and easier to look at.

Tuesday May 22, 2007

Email filtering - rules for sanity

I use Mac mail as my primary mail interface these days.  I've tried a few others, but have settled here until something really compelling comes along.  It's not perfect, but it is capable enough.  I recently sat down and ruminated on how to handle the various mail channels that I consume, and came up with a strategy that is working pretty well for me, so I thought I would share it.

The most important goals were:

  1. To limit the messages in my inbox to those that are of daily interest or critical nature
  2. To group some emails according to subscription topic
  3. To group some emails according to level of interest
  4. To make it easy to periodically enhance the rules
  5. To make it easy to periodically delete less interesting, non-archive worthy content

Basically I slice and dice mails in a couple different ways:

  1. Did they originate from within my email domain (proximity-based zoning)?
  2. Are they from specific groups or people of higher interest than others (interest-based zoning)?

And come up with the following categories:

  • Junk aliases - mail channels that I can't get away from for one reason or another, but are just bulk and of zero interest
  • Direct-addressed subscriptions - subscriptions addressed directly to my email address
  • Alias-addressed subscriptions - subscriptions addressed to an alias
  • Interest zone 1 - top priority individuals and aliases (my peers and direct management)
  • Interest zone 2 - addressed directly to me in the To: or CC: field, originating inside sun.com
  • Interest zone 3 - messages from outside sun.com that should stay in the inbox
  • Interest zone 4 - other channels of interest (people or aliases) worthy of the inbox
  • Interest zone 5 - from sun.com, but I'm not in the To: or CC: and doesn't match something above
  • Interest zone 6 - from outside sun.com, but addressed To: or CC: me, and not matching something above
  • Interest zone 7 - probably junk, catches everything that hasn't matched something above

The above categorizations result in Zones 1-4 going to the inbox, Zones 5-7 each going to their own folder named for the zone.  For Z5-7, I generally review Z5 daily, Z6 every few days, and Z7 weekly or less.  If I find something that consistently merits more attention I modify one of the earlier rules to catch it.  Eventually I can delete Zones 5-7 with little effort or worry, and be confident that my inbox (Z1-4) is worthy of archival.

The rules to implement this strategy are approximated below.  The ordering is important, as one rule might assume that the previous rule has limited its field of candidate messages.  (A shell sketch of this precedence follows the rule list.)

  1. Junk aliases (one rule):
    1. delete message
    2. stop processing rules
  2. Subscriptions that are addressed directly to me and that I want to folder (one rule per folder):
    1. move them to appropriate folder
    2. stop processing rules
  3. Zone 1 - Top priority  (one rule):
    1. look for mails from key individuals (my peers and 2 levels of management are in this zone)
    2. look for mails with any recipient matching key aliases (my workgroup is the only one in this zone)
    3. color code the message
    4. stop processing rules
  4. Zone 2 - addressed to me from sun.com (one rule):
    1. look for mails that are addressed To: me and From: *@sun.com
    2. color code the message
    3. stop processing rules
  5. Subscriptions that are addressed to an alias and that I want to folder (one rule per folder):
    1. look for mails with Any Recipient matching the alias name(s)
    2. move them to the appropriate folder
    3. stop processing rules
  6. Zone 3 - external and important (one rule):
    1. look for mails with From containing specific people or domains
    2. color code the message
    3. stop processing rules
  7. Zone 4 - high interest people and aliases (one rule):
    1. look for messages from specific people or aliases
    2. color code the message
    3. stop processing other rules
  8. Zone 5 - internal but not addressed to me (one rule):
    1. look for messages from sun.com (not addressed to me is derived from previous rules)
    2. move message to zone 5 folder
    3. stop processing rules
  9. Zone 6 - external but properly addressed (one rule):
    1. look for messages where To: or CC: is me
    2. and From: doesn't contain sun.com
    3. move message to zone 6 folder
    4. stop processing rules
  10. Zone 7 - probably junk (one rule):
    1. look for messages which aren't addressed to me
    2. and are From: outside sun.com
    3. move message to zone 7 folder
    4. stop processing rules
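
To make the ordering concrete, here is the same precedence idea as a shell sketch.  This is purely illustrative -- Mail evaluates these as GUI rules, not a script -- and the addresses are hypothetical stand-ins, with the subscription rules and Zones 3/4 elided for brevity.  The point is that the first matching zone wins, so each later test can assume earlier zones already skimmed off their traffic:

#!/bin/bash
# Illustrative only: first match wins, mirroring "stop processing rules".
classify() {
  local from="$1" to_cc="$2"   # sender; combined To:/CC: addresses

  [[ "$from" == junkfeed@example.com ]] && { echo "delete"; return; }
  [[ "$from" == peer@sun.com || "$from" == boss@sun.com ]] && { echo "inbox (Z1)"; return; }
  [[ "$from" == *@sun.com && "$to_cc" == *me@sun.com* ]] && { echo "inbox (Z2)"; return; }
  [[ "$from" == *@sun.com ]] && { echo "folder Z5"; return; }      # internal, not to me
  [[ "$to_cc" == *me@sun.com* ]] && { echo "folder Z6"; return; }  # external, but to me
  echo "folder Z7"   # nothing matched: probably junk
}

classify "someone@sun.com" "bigalias@sun.com"   # -> folder Z5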

Thursday May 17, 2007

DI Engineering Putback Progress!

One of the important aspects of the DI initiative has been to work with engineering teams to transfer generally applicable enhancements to Sun products from the DI team to the engineering organizations so they can be incorporated in off-the-shelf products.  These putback elements are driven by customer requirements, and should provide additional value in the products they are implemented in.

We've had some good progress on a couple of fronts:

  • N1SPS - N1SPS has taken the com.sun.tools plugin and will incorporate the components and plans into the com.sun.solaris plugin for the next release.  This includes quite a few components, plans, and types related to system resources such as network interfaces, IPMP groups, default routes, local users/groups, Solaris zones, and most importantly the N1SPS Remote Agent.  In addition, they have addressed several DI specific bugs and RFEs for the 6.0 release.  The impact is that roughly 80% of the DI codebase will be supported through mainstream channels!  Special thanks to Dan Ritzman, Doan Nguyen, and Prasad Pai.
  • JES Application Server - The JES App Server team has been collaborating with us from the beginning to improve their integration with N1SPS.  The JES App Server plugin for N1SPS was turned over to them from N1 a little over a year ago.  They have done an excellent job of responding to DI needs and releasing fixes for us in an early-access fashion.  The N1SPS 6.0 plugin release will include enhancements that allow the app server to be easily described and deployed as comprehensive topologies rather than bits and pieces.  Special thanks to Peter Charpentier, Lily Hou, and Jane Young.

Testing a new blog editor (Qumana)

Well, I'm always searching for a more efficient way to post blogs.  Some day this might motivate me to post more than one entry per quarter.  I'm posting this with Qumana, which has some very nice WYSIWYG capabilities, runs on Mac and PC, has HTML support, etc.

So far, the only thing that doesn't seem to work is that it hasn't picked up my categories.  Not sure why, but we'll see where it puts them and see if I can work with Qumana to sort it out.

It has some interesting features, like supporting ads funneled through their company.  I'm tempted to put one here, but I have to sort out my philosophical position on this first...

Monday Mar 12, 2007

San Francisco restaurant wine deals

Here are some places that have wine deals:
  • Bacar - Mondays 5:30 - 7pm, half-price wine by the glass in the bar
  • Perry's - half-price bottles Monday nights
  • Maverick - 40% off bottles Monday nights
  • Levende - half-price bottles for dinner guests on Tuesday nights
I'll update the entry if I find more....

Thursday Nov 16, 2006

Testing the Performancing add-on blog editor

I am a novice blogger to say the least.  Most of the time I'm not sure what to talk about.  When I do, I fret about confidentiality, then probably get distracted and forget to post a blog entry. 

I have also had a serious challenge finding a blog client that I like.  I've tried BlogEd, Ecto, and now Performancing.  Ecto is OK; I think I lost interest when Sun changed the URL for the posts, and I didn't get the memo.  I wasn't really watching the mail alias for Sun bloggers, but I'm sure it was posted there at some point.  Oh well, I got through that one, and am now back in the game. 

Performancing seems to have all the major elements, but less of the fluffy stuff that Ecto has (no "music I'm listening to", etc.).  It's integrated into Firefox 2.0, so that's a nice convenience.  I guess I'm just going to try a few of the features here and see how well it works. 

The first thing I had to do was update to 3.5.  The earlier release had some problems with backspacing in the editor window.  This appears to be fixed.

Bold, Italic, Underline, strikethru

Well, those things seem to work, but you can't see which ones are activated in the toolbar (no depth).  That's sort of inconvenient.

larger font, smaller font

That also seemed to work, but there is no indicator to tell you the current font size.  Not a big deal, but would be nice.

Here's a link to Jason's blog
This required that I type the text first, highlight it, then enter the URL.  It would be nice to have the option to enter the URL description in the popup too.

Well, here's a picture of me from a recent trip to Europe...



For images, it supports image upload or URL references.

Here are the bullets and numbering:
  1. test
  2. 1
  3. 2
  4. 3
  • test
  • dot
  • dot
  • dot
centered
right
left

Well, most of the features seem to work.  The buttons all have the 3D problem: you can't tell whether they are depressed or not.  There is also a code editor view that might be nice if you are doing fancy stuff.

All things considered, this seems to be a practical blog editor for the price ($0).  Now let's see if it uploads entries....



powered by performancing firefox

Saturday Jul 15, 2006

Kudos to N1 Engineering, Sustaining and Marketing

I have been working on a particularly fun section of code in N1SPS. I am trying to create a parallel Solaris Container deployment strategy that requires minimal operational interaction. Basically, it is driven by a configuration manifest like:
ZONE_NAME=test-zone11
ZONE_PARENT=t2000-11-1
ZONE_MGMT_IP_ADDR=10.15.200.111
local_zone_base_path=/zones
ZONE_INTERFACES=ipge0:10.15.200.111/24
ZONE_FULL_ROOT=false
ZONE_INHERIT_LOFS_DIR=rw:/logs/foo:/logs rw:/xyz/foo:/xyz
ZONE_CREATE_LOFS_DIRS=true
ZONE_INHERIT_PKGS_DIR=
ZONE_PASSWORD=1d2jxiU7dXp7k
ZONE_TZ=US/Arizona
ZONE_TIMEHOST=localhost
ZONE_TERM=xterm
ZONE_AUTOBOOT=true
ZONE_DESTROY_EXISTING=true
ZONE_DOMAIN=xyz.com
ZONE_NAMESERVER=ns.xyz.com
ZONE_DEFAULTROUTER=10.15.200.1
CSZ_GO_BOOT_NOW=true

ZONE_NAME=test-zone12
ZONE_PARENT=t2000-11-1
ZONE_MGMT_IP_ADDR=10.15.200.112
local_zone_base_path=/zones
ZONE_INTERFACES=ipge0:10.15.200.112/24
ZONE_FULL_ROOT=false
ZONE_INHERIT_LOFS_DIR=rw:/logs/foo:/logs rw:/xyz/foo:/xyz
ZONE_CREATE_LOFS_DIRS=true
ZONE_INHERIT_PKGS_DIR=
ZONE_PASSWORD=1d2jxiU7dXp7k
ZONE_TZ=US/Pacific
ZONE_TIMEHOST=localhost
ZONE_TERM=xterm
ZONE_AUTOBOOT=true
ZONE_DESTROY_EXISTING=true
ZONE_DOMAIN=xyz.com
ZONE_NAMESERVER=ns.xyz.com
ZONE_DEFAULTROUTER=10.15.200.1
CSZ_GO_BOOT_NOW=false

#ZONE_NAME=test-zone13
#ZONE_PARENT=t2000-mgmt-2
#ZONE_MGMT_IP_ADDR=10.15.200.113
#local_zone_base_path=/zones
#ZONE_INTERFACES=ipge0:10.15.200.113/24
#ZONE_FULL_ROOT=false
#ZONE_INHERIT_LOFS_DIR=rw:/logs/foo:/logs rw:/xyz/foo:/xyz
#ZONE_CREATE_LOFS_DIRS=true
#ZONE_INHERIT_PKGS_DIR=
#ZONE_PASSWORD=1d2jxiU7dXp7k
#ZONE_TZ=US/Central
#ZONE_TIMEHOST=localhost
#ZONE_TERM=xterm
#ZONE_AUTOBOOT=true
#ZONE_DESTROY_EXISTING=true
#ZONE_DOMAIN=xyz.com
#ZONE_NAMESERVER=ns.xyz.com
#ZONE_DEFAULTROUTER=10.15.200.1
#CSZ_GO_BOOT_NOW=false
The format of the manifest is not important, in fact, it could easily be a graphical front end, a spreadsheet, LDAP entries, etc. The important thing is that the data can be predefined by people or systems. At this point, the role of the operator is limited to entering:
  1. The location of the manifest file
  2. The root password for the new zone
  3. The n1sps user password for the new zone
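
For context, the underlying mechanics for each manifest entry boil down to roughly the standard Solaris 10 zonecfg/zoneadm sequence. Here is a hand sketch for the first entry above (sparse-root zone, shared-IP networking, one rw lofs mount shown; sysid items like ZONE_PASSWORD and ZONE_TZ are applied separately at first boot). This is not the actual plan code, just the commands it corresponds to:

zonecfg -z test-zone11 <<'EOF'
create
set zonepath=/zones/test-zone11
set autoboot=true
add net
set physical=ipge0
set address=10.15.200.111/24
end
add fs
set dir=/logs
set special=/logs/foo
set type=lofs
add options rw
end
EOF
zoneadm -z test-zone11 install   # install the zone
zoneadm -z test-zone11 boot      # CSZ_GO_BOOT_NOW=true for this entry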
Anyway, I ran into a fairly serious challenge. Since we are deploying an OS instance that will have its own N1SPS remote agent, the appropriate mechanism in the plan/component language is to use a targetable component. This is a component that creates an N1SPS host upon successful completion of the install block. In this case, we want to make the new host a physical host so we can later put an agent on it. The code to make a component targetable looks something like:

[XML snippet missing from the archived post]

Unfortunately, I found that it was not currently possible to install a targetable component that creates a physical host on a virtual host target. I spoke with engineering and N1 marketing, and we all decided that there was no real reason this shouldn't be possible, and due to the time-sensitive nature of this initiative, they would expedite a fix. The fix was available the next day in an internal-only, unqualified form, and the patch will be pushed through QA and be available to the general N1SPS community in approximately one month. Much thanks for keeping me moving forward goes to Anshul, Doan, Ilango, and others who helped out!

Thursday Jul 13, 2006

HostsEntry

Here is a component that manages entries in /etc/hosts. I use this to satisfy prerequisites for various other components.
One benefit of using a component for this code is that you can look at a host through N1SPS and see if this component has been installed, or you can look at the component and see where it is installed in the environment. You can also see the values that were used for the variables. All this without touching the system.



<?xml version="1.0" encoding="UTF-8"?>

<!-- generated by N1 SPS -->
<component
platform='system#Solaris - any version'
xmlns='http://www.sun.com/schema/SPS'
name='HostsEntryCT' version='5.2'
description='Backing component for HostEntry component type'
xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance'
softwareVendor='Sun Microsystems'
path='/com/sun/tools/types'
xsi:schemaLocation='http://www.sun.com/schema/SPS component.xsd'
installPath=":[installPath]"
>

<varList>
<var name="host" default="" prompt="Hostname of host to add"/>
<var name="ip" default="" prompt="IP address of host to add"/>
<var name="alias" default="" prompt="one or more space separated aliases for the host (ex: hostname.domainname)"/>
<var access='PRIVATE' modifier='FINAL' name="installName" default='HostsEntry-:[host]-:[ip]'/>
<var access='PRIVATE' modifier='FINAL' name='installPath' default=':[target(/):sys.raDataDir]:[/]com.sun.tools:[/]hosts_entries' prompt='Path where component will be installed'></var>
<var access='PRIVATE' name='hostsFile' default='/etc/inet/hosts' prompt='Location of local hosts file' modifier="FINAL"></var>
<var access='PRIVATE' default='/usr/bin/getent' name='getent' modifier='FINAL'></var>
</varList>
<installList>
<installSteps returns='true' name='default'>
<paramList>
<param name="forceNewEntry" displayMode="BOOLEAN" prompt="Set to true to overwrite existing entry for hostname or IP address"/>
</paramList>
<varList>
<var name="returnStatus" default=""/>
</varList>
<if>
<condition>
<or>
<equals value1=":[host]" value2=""/>
<equals value1=":[ip]" value2=""/>
</or>
</condition>
<then>
<raise message="Value of host and IP must be non-NULL"/>
</then>
</if>

<execNative userToRunAs='root'>
<assignStatus varName="returnStatus"/>
<inputText><![CDATA[

host=":[host]"
ip=":[ip]"
hostAliases=":[alias]"
forceNewEntry=":[forceNewEntry]"
DATESTAMP="`date +%m%d%y%H%M%S`"
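# getent prints "IP hostname [aliases...]"; capture each lookup's exit
# status and split its output into an array so the fields can be cross-checked below.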
tmp=`getent hosts $host`
hostFound=$?
ray1=( $tmp )
tmp=`getent hosts $ip`
ipFound=$?
ray2=( $tmp )
echo "hostFound = $hostFound"
echo "ipFound = $ipFound"
echo "host1 = ${ray1[1]}, ip1 = ${ray1[0]}"
echo "host2 = ${ray2[1]}, ip2 = ${ray2[0]}"

if [[ ($hostFound == 0 && $ipFound == 0) && ((${ray1[0]} == ${ray2[0]}) && (${ray1[1]} == ${ray2[1]})) ]] ; then
# Don't need to do anything. Entry is correct.
echo "Entry already exists and matches."
exit 0
fi
if [[ "$hostFound" == "0" || "$ipFound" == "0" ]] ; then
# Need to check the forceNewEntry flag
if [[ "$forceNewEntry" == "true" ]] ; then
# Need to add the entry
if [[ $hostFound == 0 && $ipFound == 0 ]] ; then
cat /etc/inet/hosts | sed -e "/${ray1[0]}[ \t]*${ray1[1]}[ \t#]/d" > /tmp/hosts.$$.interim2
cat /tmp/hosts.$$.interim2 | sed -e "/${ray2[0]}[ \t]*${ray2[1]}[ \t#]/d" > /tmp/hosts.$$.interim

elif [[ $hostFound == 0 ]] ; then
cat /etc/inet/hosts | sed -e "/${ray1[0]}[ \t]*${ray1[1]}[ \t#]/d" > /tmp/hosts.$$.interim

else
# only the ipFound case remains at this point
cat /etc/inet/hosts | sed -e "/${ray2[0]}[ \t]*${ray2[1]}[ \t#]/d" > /tmp/hosts.$$.interim
fi

cp /etc/inet/hosts /etc/inet/hosts.$DATESTAMP
echo "Forced change of :[hostsFile] entry for :[host]"
echo "$ip $host $hostAliases # Added by N1SPS $DATESTAMP" >> /tmp/hosts.$$.interim
cp /tmp/hosts.$$.interim :[hostsFile]
rm /tmp/hosts.$$.*
exit 0
else
# Fail
echo "Entry for either host or IP exists. Use forceNewEntry flag to overwrite."
exit 1
fi
fi
# Need to add the entry
cp :[hostsFile] :[hostsFile].$DATESTAMP
cp :[hostsFile] /tmp/hosts.$$.interim
echo "$ip $host $hostAliases # Added by N1SPS $DATESTAMP" >> /tmp/hosts.$$.interim
cp /tmp/hosts.$$.interim :[hostsFile]
rm /tmp/hosts.$$.interim
exit 0


]]></inputText>
<exec cmd='/bin/bash'>
<arg value='-x'></arg>
</exec>
</execNative>
<return value=":[returnStatus]"/>
</installSteps>
<installSteps name="markOnly">
<paramList>
<param name="forceInstall" default="false" prompt="Force install if no matching entry" displayMode="BOOLEAN"/>
</paramList>

<if>
<condition>
<or>
<equals value1=":[host]" value2=""/>
<equals value1=":[ip]" value2=""/>
</or>
</condition>
<then>
<raise message="Value of host and IP must be non-NULL"/>
</then>
</if>
<try>
<block>

<execNative userToRunAs='root'>

<inputText><![CDATA[

host=":[host]"
ip=":[ip]"
hostAliases=":[alias]"
forceInstall=":[forceInstall]"

if [[ "$forceInstall" == "true" || "$forceInstall" == "TRUE" ]] ; then
exit 0
fi
tmp=`getent hosts $host`
hostFound=$?
ray1=( $tmp )
tmp=`getent hosts $ip`
ipFound=$?
ray2=( $tmp )
echo "hostFound = $hostFound"
echo "ipFound = $ipFound"
echo "host1 = ${ray1[1]}, ip1 = ${ray1[0]}"
echo "host2 = ${ray2[1]}, ip2 = ${ray2[0]}"

if [[ ($hostFound == 0 && $ipFound == 0) && ((${ray1[0]} == ${ray2[0]}) && (${ray1[1]} == ${ray2[1]})) ]] ; then
# Don't need to do anything. Entry is correct.
echo "Entry already exists and matches."
exit 0
fi
# No matching entry found: exit non-zero so the catch step raises its message
exit 1
]]>
</inputText>
<exec cmd='/bin/bash'>
<arg value='-x'></arg>
</exec>
<successCriteria status="0"/>
</execNative>
</block>
<catch>
<raise message="markOnly install failed because no matching hosts entry was found. Use the forceInstall parameter, or the default install control."/>
</catch>
</try>

</installSteps>
</installList>
<uninstallList>
<uninstallSteps returns='true' name='default'>

<varList>
<var name="returnStatus" default=""/>
</varList>

<execNative userToRunAs='root'>
<assignStatus varName="returnStatus"/>
<inputText><![CDATA[

host=":[host]"
ip=":[ip]"

DATESTAMP="`date +%m%d%y%H%M%S`"
cp :[hostsFile] :[hostsFile].$DATESTAMP
cat :[hostsFile].$DATESTAMP | sed -e "/:[ip][ \t]*:[host][ \t#]/d" > :[hostsFile]
exit 0


]]></inputText>
<exec cmd='/bin/bash'>
<arg value='-x'></arg>
</exec>
</execNative>
<return value=":[returnStatus]"/>
</uninstallSteps>
<uninstallSteps name="markOnly"></uninstallSteps>
</uninstallList>
</component>

Let's take a look at what's going on here...
The varList defines the host (name), ip address, and one or more aliases to be added to the /etc/hosts file. It also defines some variables that are used within N1SPS to help with the component installation.

<varList>
<var name="host" default="" prompt="Hostname of host to add"/>
<var name="ip" default="" prompt="IP address of host to add"/>
<var name="alias" default="" prompt="one or more space separated aliases for the host (ex: hostname.domainname)"/>
<var access='PRIVATE' modifier='FINAL' name="installName" default='HostsEntry-:[host]-:[ip]'/>
<var access='PRIVATE' modifier='FINAL' name='installPath' default=':[target(/):sys.raDataDir]:[/]com.sun.tools:[/]hosts_entries' prompt='Path where component will be installed'></var>
<var access='PRIVATE' name='hostsFile' default='/etc/inet/hosts' prompt='Location of local hosts file' modifier="FINAL"></var>
<var access='PRIVATE' default='/usr/bin/getent' name='getent' modifier='FINAL'></var>
</varList>

The installList defines the various installation controls that you can run. I use a default install control and, as you would generally find, a "markOnly" install control. A markOnly control typically only updates the component's install record in the N1SPS context rather than performing any actions on the target system; the markOnly uninstall is sort of a failsafe in case your default uninstall action doesn't work.

The default install control does a few things to ensure that we don't do something silly like blanket-overwrite an existing entry. There is a "forceNewEntry" parameter that determines whether you would like to overwrite an entry. It also checks to make sure that values were set for the host and ip variables (however, it doesn't currently check to see if they are valid). Once that is done, it enters an "execNative" block. The execNative is a way to call out to the OS with some native shell or command. In the execNative, I do a getent to see if the host or ip already exists in one of the configured name services. If so, I check the forceNewEntry value to determine if I should proceed. If so, a backup copy of /etc/hosts is made, and the entry is added.

The "markOnly" install is useful if you already have an entry but want to sync your component landscape with reality, or if you will be adding an entry later via another mechanism. It doesn't do anything other than check for non-null values and install the component.

The default uninstall makes a backup and deletes the appropriate line using sed. The "markOnly" uninstall just removes the component from the system without modifying /etc/hosts.

About

This is a space for me to post things I am thinking about. Most content will be related to datacenter operations and automation.
