Wednesday Sep 24, 2008

NWAM model (and phase 0.5)

The Network Auto-Magic project, which aims to simplify and automate network configuration, needs to do a few things:

  • catch, handle (and sometimes generate) network-related events, managing the transition from one network environment to another, etc.
  • provide a data repository for configuration preferences for various network environments, with a set of default preferences that lean towards automatic configuration (e.g. use DHCP to get an address)
  • provide a set of user interfaces to allow for manual configuration as well as inspection of network state

This high-level description maps quite nicely onto the set of core components we plan to deliver for phase 1 of NWAM:

  • nwamd - the core daemon at the heart of network autoconfiguration. It is this daemon's job to handle network-related events, assess network conditions and respond to changes in conditions
  • libnwam - the library that provides storage and retrieval of NWAM-related configuration
  • Finally, command-line tools such as nwamcfg and nwamadm, together with the GUI component of NWAM, allow user interaction with network configuration (a brief illustrative session is sketched just below)
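
To make that last point a little more concrete, here is the sort of session these tools are intended to allow - the subcommands shown are assumptions for illustration rather than settled syntax. The idea is to list the network configuration profiles and their states with nwamadm, and inspect the stored configuration objects with nwamcfg (enabling or switching profiles would also be done through nwamadm):

# nwamadm list
# nwamcfg list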

In a follow-up set of blog posts, I'm going to try and describe these components, starting with the set of events that nwamd needs to monitor and respond to and how that is (and will be) done.

In the meantime, why not give NWAM phase 0.5 a try - it alleviates many of the usability issues that applied to NWAM and works really nicely. Kudos to Jim and Darren (and Michael for his work on spec'ing out a set of candidate solutions to usability issues).

Tuesday Jul 29, 2008

Using Mercurial during the development process

With guidance from Anurag, I've been trying to figure out how the NWAM development process changes with the advent of Mercurial as the source code management tool for OpenSolaris. I've tried to detail the process I'm using here in case it's of use to others. The basic idea is to have a development repository in which we make changes (and will eventually push to the main NWAM repository on opensolaris.org), along with a build repository, to which we pull our changes to build/test. Here are the steps:

  1. clone a development repository
    # hg clone ssh://anon@hg.opensolaris.org/hg/nwam/nwam1 /path2/dev_ws

  2. make changes by editing files
  3. commit these changes in the development repository
    # hg commit

  4. clone/pull the above development changes to a build/test repository
    # hg clone /path2/dev_ws /path2/build_ws

    To ensure the latest changes are there:
    # cd /path2/build_ws
    # hg update

    Or if, rather than cloning, you need to update an existing build repository:
    # cd /path2/build_ws
    # hg pull -u /path2/dev_ws

  5. clone usr/closed separately, adding it to the build repository (external-to-Sun builds need to download the closed binaries at this point instead, I believe)
    # cd /path2/build_ws/usr
    # hg clone ssh://elpaso.sfbay//export/clone-hg/usr/closed

  6. build/test
  7. push the development changes, replacing yourname with your opensolaris.org account name. SSH keys need to be set up for your OpenSolaris account for this to work properly, and your account needs to be on the list of contributors.
    # cd /path2/dev_ws
    # hg push ssh://yourname@hg.opensolaris.org/hg/nwam/nwam1
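
As a sanity check before the final push in step 7, something like the following (run from the development repository) shows any uncommitted changes and lists the changesets that would be pushed:

# cd /path2/dev_ws
# hg status
# hg outgoing ssh://yourname@hg.opensolaris.org/hg/nwam/nwam1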
    

Thursday Oct 18, 2007

NWAM Phase 1 SMF restarter work

We've been working on refreshing the Phase 1 design for NWAM (see here) in the light of what we've learnt during implementation so far. I was tasked with reworking nwamd(1M) to become an SMF delegated restarter, and have tried to describe the design here. As the name suggests, a delegated restarter's job is to take over many of the duties that the master SMF restarter, svc.startd, carries out in managing child instances. In our case, these child instances are Network Configuration Units (NCUs) representing network links and IP interfaces, environments or locations (which specify which name service to use, etc.) and External Network Modifiers (which are external-to-NWAM entities that carry out network configuration, such as a VPN client).

The reason a delegated restarter is required is that these entities don't really fit into the standard start/stop/refresh service model - for example, a link needs to respond to events such as a WiFi connection dropping, an IP interface needs to respond to events such as acquiring an IP address through DHCP, etc. Since it's nwamd's job to monitor for such events, making it a delegated restarter makes sense - it can then handle those events, run appropriate methods and transition the relevant services through the appropriate states.

A delegated restarter has to handle events from the master restarter, svc.startd, and these include:

  • Notification from svc.startd of new instances to handle (these will be instances that specify the delegated restarter as their restarter; you can see examples of how this is done in any inetd service manifest, e.g. echo.xml - there's a concrete example just after this list)
  • Notification that an administrator has initiated an action, e.g. disable, enable
  • Notification that the dependencies of an instance are satisfied, i.e. it is ready to run
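
To make the first point concrete, an existing inetd-delegated service shows which restarter it belongs to in its svcs -l output (telnet is just an illustration here - any inetd-managed service will do):

# svcs -l svc:/network/telnet:default | grep restarter
restarter    svc:/network/inetd:default

NCU, environment and ENM instances would name nwamd's service as their restarter in the same way, at which point svc.startd hands them over to nwamd.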

One area we're particularly interested in is feedback on the naming of instances representing links and IP interfaces. I've put some suggestions in the design doc mentioned above, but nothing is cast in stone at this stage of course, so any feedback on nwam-discuss@opensolaris.org is welcome. At present, my suggestion is to have all NWAM-managed services grouped under the "network/auto" FMRI, with separate services for NCUs, environments and ENMs, i.e.

  • svc:/network/auto/ncu
  • svc:/network/auto/env
  • svc:/network/auto/enm

Note that the use of env(ironment) to describe name service and other configuration may change; this is being discussed on nwam-discuss at present.

In terms of NCUs in particular, my suggestion is to prefix the instance names with their type/class, i.e. for link NCUs the prefix is "link-", for IPv4 interfaces "ipv4-", and for IPv6 "ipv6-". I had been thinking about having different services for links and IP interfaces, but then realized that in order to use the same instance name for IPv4 and IPv6 interface instances, I'd need separate IPv4 and IPv6 services. So it seems better to have all NCUs grouped by service and prefixed, allowing the user to run "svcs \*bfe" say to see all bfe instances.

As an example, here's the service listing for the prototype restarter on my laptop, which has a built-in "iwi" wireless card and a built-in "bfe" wired interface:

# svcs ncu
STATE          STIME    FMRI
online          7:51:13 svc:/network/auto/ncu:link-bfe0
online          7:51:16 svc:/network/auto/ncu:link-iwi0
online          7:51:21 svc:/network/auto/ncu:ipv4-iwi0
offline         7:51:13 svc:/network/auto/ncu:ipv6-bfe0
offline         7:51:15 svc:/network/auto/ncu:ipv4-bfe0
offline         7:51:16 svc:/network/auto/ncu:ipv6-iwi0
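
And because every instance name carries the link name, filtering on a particular link is straightforward (output derived from the listing above):

# svcs \*bfe\*
STATE          STIME    FMRI
online          7:51:13 svc:/network/auto/ncu:link-bfe0
offline         7:51:13 svc:/network/auto/ncu:ipv6-bfe0
offline         7:51:15 svc:/network/auto/ncu:ipv4-bfe0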

Note that once Clearview vanity naming integrates, the link name ("iwi0" or "bfe0" or whatever) will be dropped in favour of the vanity names for those links.

So, is this reasonable? Is there a better name than "auto"? Let us know!
