NWAM Phase 1 SMF restarter work
By user12820842 on Oct 18, 2007
We've been working on refreshing the Phase 1 design for NWAM (see here) in the light of what we've learnt during implementation so far. I was tasked with reworking nwamd(1M) to become an SMF delegated restarter, and have tried to describe the design here. As the name suggests, a delegated restarter's job is to take over many of the duties that the master SMF restarter, svc.startd, carries out in managing child instances. In our case, these child instances are Network Configuration Units (NCUs), representing network links and IP interfaces; environments or locations, which specify which name service to use, etc.; and External Network Modifiers (ENMs), which are external-to-NWAM entities that carry out network configuration, such as a VPN client. The reason a delegated restarter is required is that these entities don't really fit into the standard start/stop/refresh service model - for example, a link needs to respond to events such as a WiFi connection dropping, and an IP interface needs to respond to events such as acquiring an IP address through DHCP. Since it's nwamd's job to monitor for such events, making it a delegated restarter makes sense - it can then handle those events, run the appropriate methods, and transition the relevant services through the appropriate states.
A delegated restarter has to handle events from the master restarter, svc.startd, and these include:
- Notification from svc.startd of new instances to handle (these will be instances that specify the delegated restarter as their restarter; you can see examples of how this is done in any inetd service manifest, e.g. echo.xml)
- Notification that an administrator has initiated an action, e.g. disable, enable
- Notification that the dependencies of an instance are satisfied, i.e. it is ready to run
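To show what "specifying the delegated restarter" looks like in practice, here's a sketch of a manifest fragment in the style of the inetd manifests such as echo.xml. Note that the restarter FMRI shown here (svc:/network/auto:default) is hypothetical - the actual FMRI for nwamd's own service is still part of the design discussion - and the dependencies, methods and property groups are omitted:

```xml
<?xml version="1.0"?>
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<service_bundle type='manifest' name='NWAMncu'>
  <service name='network/auto/ncu' type='service' version='1'>
    <!-- Delegate management of this service's instances to nwamd
         rather than svc.startd; the FMRI value is hypothetical. -->
    <restarter>
      <service_fmri value='svc:/network/auto:default' />
    </restarter>
    <!-- dependencies, methods and property groups omitted -->
  </service>
</service_bundle>
```

When svc.startd imports such a manifest, it notifies the named restarter of the new instances rather than managing them itself - exactly the first event type in the list above.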
One area we're particularly interested in is feedback on the naming of instances representing links and IP interfaces. I've put some suggestions in the design doc mentioned above, but nothing is cast in stone at this stage of course, so any feedback on email@example.com is welcome. At present, my suggestion is to have all NWAM-managed services grouped under the "network/auto" FMRI, with separate services for NCUs, environments and ENMs.
(Note that the use of env(ironment) to describe name service and other configuration may change; this is being discussed on nwam-discuss at present.)
In terms of NCUs in particular, my suggestion is to prefix the instance names with their type/class: for link NCUs the prefix is "link-", for IPv4 interfaces "ipv4-", and for IPv6 interfaces "ipv6-". I had been thinking about having different services for links and IP interfaces, but then realized that, in order to use the same instance name for IPv4 and IPv6 interface instances, I'd need separate IPv4 and IPv6 services. So it seems better to have all NCUs grouped under one service and prefix them, allowing the user to run "svcs \*bfe", say, to see all bfe instances.
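To make the convention concrete, here's a small shell sketch; the ncu_fmri helper is purely illustrative (it isn't part of nwamd) and just builds an instance FMRI from an NCU's class and link name:

```shell
# Hypothetical helper illustrating the proposed naming scheme: all NCUs
# live under one service, with the class encoded as an instance-name prefix.
ncu_fmri() {
  class=$1    # "link", "ipv4" or "ipv6"
  link=$2     # datalink name, e.g. "bfe0"
  echo "svc:/network/auto/ncu:${class}-${link}"
}

ncu_fmri link bfe0     # svc:/network/auto/ncu:link-bfe0
ncu_fmri ipv4 iwi0     # svc:/network/auto/ncu:ipv4-iwi0
```

Because the link name is the suffix of every instance name, a single glob such as "svcs \*bfe0" matches all of that link's NCUs regardless of class, which is the point of the prefixing scheme.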
As an example, here's the service listing for the prototype restarter on my laptop, which has a built-in "iwi" wireless card and a built-in "bfe" wired interface:
# svcs ncu
STATE          STIME    FMRI
online          7:51:13 svc:/network/auto/ncu:link-bfe0
online          7:51:16 svc:/network/auto/ncu:link-iwi0
online          7:51:21 svc:/network/auto/ncu:ipv4-iwi0
offline         7:51:13 svc:/network/auto/ncu:ipv6-bfe0
offline         7:51:15 svc:/network/auto/ncu:ipv4-bfe0
offline         7:51:16 svc:/network/auto/ncu:ipv6-iwi0
Note that once Clearview vanity naming integrates, the link name ("iwi0" or "bfe0" or whatever) will be dropped in favour of the vanity names for those links.
So, is this reasonable? Is there a better name than "auto"? Let us know!