Wednesday Jan 13, 2010

new beta of globalsan iscsi initiator for macos

I followed the example of Constantin and set up Time Machine + iscsi backups from a Mac Mini running Snow Leopard to a zvol on our home opensolaris server.  We had quite a few stability problems -- mostly hangs and laggy behavior relating to suspend/resume/sleep on the mac -- which I described in a post on the SNS forums.

Fortunately they appear to be fixed by the newest beta of the globalsan iscsi initiator; if you've tried this out and it didn't work for you, try the newer code.

Friday Sep 19, 2008

Knowing when a crash dump is complete

If you're doing kernel development on solaris, every so often you'll crash a machine.  It will save a crash dump to the dump device and then on reboot, the dumpadm service (svc:/system/dumpadm:default) will invoke savecore to copy from the dump device to the filesystem.

Since SMF goes out of its way to let services start in parallel, and dumpadm is one of those services, it might not be done by the time you log in to start investigating the crime scene.

It's annoying to have to periodically poll to see if it's done (sometimes it takes a while), so you can do something like:

  cd /var/crash/$(hostname); svcadm -v enable -s dumpadm; mdb -k N

and then go about your business until you see an mdb prompt in that window.

This relies on the -s option to svcadm (which causes it to wait until the service is done attempting to start) as well as the fact that enabling an already-enabled service is a no-op.

Tuesday Nov 06, 2007

Three strikes, you're out (but it's only the first inning of a preseason game)

The indiana prototype was released with the name "opensolaris developer preview".  Having installed it and (briefly) attempted to get work done on it, I'm even more convinced the name was vastly premature.

 opensolaris. nope.  It's the prototype output of a small implementation team and hasn't been through the rigor and wringer of the development process.  It's a fascinating and impressive start, but it's not opensolaris yet, and I'm skeptical of some of the design choices (admittedly, I always am).

 developer.  I'm a developer, so it should be for me to use?  I installed it.  I tried to run bugster to file a bug.  Nope, no java.  Tried to build some simple C programs.  Nope, no C compiler, no make.  Tried to add the binary nvidia driver so the display didn't look crappy -- got that working, but it was an adventure -- pkgadd(1m) is busted, so I had to pkgadd -R on a different system, rsync the bits over, and re-run the package's postinstall script a few times before it "took".

If you want to play with the new packaging system, make sure you have another system running SXCE handy to actually do builds.  And if you see something unexpected, please file bugs early and often.   If you actually want to get work done, you're better off with SXDE/SXCE until further notice.

preview.  When I hear "preview" I think "movie sneak preview" .. getting a look at the almost-final production before they come out.  SXCE & SXDE are previews.  The Indiana prototype is like the raw footage viewed by the production team the day after it's shot before the bloopers have been edited out.

Tim Foster's critique misses the point of the engineering objections.  Attaching this name to a pre-alpha snapshot demo that is simply not even close to done is a mistake.  It might make sense when Indiana is a little better baked, but IMHO the current name attached to the current bits hurts the "brand"; the name should have been held in reserve until it was ready for community endorsement.


Thursday Nov 01, 2007

Looking good, save for the name.

Ran into a few bugs installing the Indiana prototype.  

1) the installer got confused when I attempted to add the user "sommerfeld".  (An 8-character username limit is a figment of useradd's imagination.)  I had to reboot and try again.

 2) the lack of the nvidia binary driver in the distribution meant that it didn't cope with a 1920x1200 display.

But otherwise it installed with a zfs root in almost no time flat from CD (the system refused to boot from a USB key).

It still needs a name change, though..

Premature naming.

So, a preview of the new packaging & install technology produced by Project Indiana was just released. I'm shortly going to be installing it on a spare system in my office just to give it a shot.

Unfortunately, it's being called the "OpenSolaris Developer Preview" and is being portrayed as a distinctly special binary distribution on the opensolaris home page. The name is unfortunate for a number of reasons:

  1. The vast majority of the changes have not yet received the typical design and architecture review received by Solaris components.
  2. There is not yet community consensus that OpenSolaris should have a reference binary distribution.
  3. There is certainly not yet consensus that the Indiana technology is the right tool for the job.

I hope the folks who chose this name despite ample warning that it would cause trouble quickly reconsider. And I hope that the poor choice of name doesn't deter people from giving it a try. But the choice of names is forcing something of a constitutional crisis within opensolaris.

Wednesday Nov 16, 2005

The End-to-end argument meets ZFS

I'm really a networking & security type at heart.  Why am I excited about ZFS?

Back when I was studying for a degree in computer science, I took what was then (and probably still is) the best undergraduate course in MIT's CS department: Computer Systems Engineering, better known as "6.033" or just "'033".

A major part of the course was a series of case studies -- we would read an important paper on a system, write a short analysis, and then discuss the system in class.

One of the key papers presented was Saltzer, Reed, and Clark's "End to End Arguments in System Design".

I'll quote the abstract:

This paper presents a design principle that helps guide placement of
functions among the modules of a distributed computer system.  This
principle, called the end-to-end argument, suggests that functions
placed at low levels of a system may be redundant or of little value
when compared with the cost of providing them at that low level.
Examples discussed in the paper include bit error recovery, security
using encryption, duplicate message suppression, recovery from system
crashes, and delivery acknowledgement.  Low level mechanisms to
support these functions are justified only as performance
enhancements.

The paper has spawned a lot of debate and more than a few followups over the years, and interminable arguments about what counts as an end, but overall I think it's held up pretty well.

Fast forward to a couple years ago when I first saw a high level overview of the ZFS design.  I immediately thought of this paper.

ZFS applies the end-to-end principle to filesystem design.  

End-to-end is normally applied to distributed systems, where two distinct "ends" are communicating with each other, often in real time or with relatively short delays.

Here, the "ends" are separated mainly by time: one "end" writes data to the filesystem, and the other "end" expects to get the exact same data back in the future.  (And the "middle" is the storage subsystem, which these days is itself a complex distributed system).

By placing the functionality required for robustness at a relatively high layer within the storage stack, ZFS can perform these functions with reduced overall system cost; you can use a much simpler disk subsystem to get a desired level of performance, availability and reliability.
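The placement argument can be sketched in a few lines of Python (a toy illustration of end-to-end verification, entirely my own; this is not ZFS code, and the names are made up): the writing "end" computes a checksum above the storage stack, and the reading "end" verifies it after the data has come back up through every lower layer, so corruption anywhere in between is caught at the top.

```python
import hashlib

def write_block(store, key, data):
    # The writing "end": checksum is computed above the storage stack
    # and travels with the data.
    store[key] = (hashlib.sha256(data).hexdigest(), data)

def read_block(store, key):
    # The reading "end": verify after the data has traversed every
    # lower layer; any corruption below us is detected here.
    checksum, data = store[key]
    if hashlib.sha256(data).hexdigest() != checksum:
        raise IOError("checksum mismatch: corruption below the filesystem")
    return data

store = {}
write_block(store, "blk0", b"important data")
assert read_block(store, "blk0") == b"important data"

# Simulate silent corruption in a lower layer (bit rot on disk):
checksum, _ = store["blk0"]
store["blk0"] = (checksum, b"important dat4")
try:
    read_block(store, "blk0")
except IOError:
    print("corruption detected end to end")
```

No individual disk, controller, or cable needs to be trusted to be perfect; the check at the top layer covers them all at once.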

For instance, the filesystem knows for sure which disk blocks are in use; the disk doesn't.  If you replace a disk in a mirror or RAID-Z group, ZFS only needs to copy the blocks that are currently in use to the new disk; when lower layers are responsible for redundancy, you have to copy the whole disk.  With the upper layer responsible for redundancy, the repair takes less time, and your window of exposure to an additional failure can be significantly shorter.
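As a toy illustration of the difference (my own sketch, not actual resilver code; the function names are invented): a repair driven by the layer that holds the allocation map touches only live blocks, while a block-device-level mirror, having no such map, must copy everything.

```python
def resilver(mirror_src, new_disk, allocated):
    """Upper-layer repair: copy only the blocks the filesystem
    knows are in use."""
    copied = 0
    for block in allocated:
        new_disk[block] = mirror_src[block]
        copied += 1
    return copied

def block_level_rebuild(mirror_src, new_disk):
    """Lower-layer repair: no allocation map, so every block
    must be copied."""
    for block, data in enumerate(mirror_src):
        new_disk[block] = data
    return len(mirror_src)

disk = [b"\0" * 512] * 1000          # a 1000-block disk, mostly empty
allocated = {3, 17, 250}             # only 3 blocks are actually live
print(resilver(disk, [None] * 1000, allocated))       # copies 3 blocks
print(block_level_rebuild(disk, [None] * 1000))       # copies all 1000
```

On a mostly-empty pool the difference between "3" and "1000" is exactly the difference in how long you stay exposed to a second failure.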

I'm hoping this leads to simpler (and cheaper) storage hardware in the long run -- JBODs seem to be ideal for ZFS, and you can take the battery-backed NVRAM out of the raid controllers and give it to the lumberjacks.


Thursday Oct 20, 2005

packaging svk

So, Adam, never fear..

I have two bits of tech in hand which will make deploying svk on solaris for development purposes pretty painless.

1) NetBSD's pkgsrc will build packages on solaris and handle chasing down the dozens of dependencies.  Currently it has SVK 1.00, but I've got diffs to the pkgsrc config to take it to 1.05 under review right now (three packages needed to be upgraded and two more needed to be added; took me about an hour and a half last night).

2) There's a "gensolpkg" inside pkgsrc which will create solaris/SVR4-format packages.  It's a little rusty -- it still assumes the 9-character package name limit -- but that's easily repaired, and I should probably commit that fix as well.

Toss them all into a single packagestream-format blob and we're all set.

Only real misfeature at this point is that pkgsrc insists on building its own copy of perl.   But given that we lock down most aspects of the build/development environment, and occasionally get hurt when we don't, this might be another case where we should just take the hit of another copy.


Wednesday Jun 08, 2005

On the conversion of working systems into warm bricks...

Operating systems development communities wind up inventing and using a fair bit of slang.  The existing Solaris development community within Sun tends to use one particular metaphor a fair bit: the brick.  That's what you get when you take your test machine, add your latest test bits, and, well, something goes wrong in a big way and your system (whether a low end PC or high end multiprocessor) winds up having all of the capability of a Warm Brick, at least until you get  a chance to reinstall it. 

Typical usage: "Oops, I bricked it."   "Hey, when you brickify a test machine, at least reinstall a good build on it before you move on..", and "Bugs in the packaging scripts may still result in brickification".

(Note: members of another OS development community have been known to use "brick" as short for  "throw a brick at".   As far as I can tell, these usages are completely unrelated).

Wednesday Apr 20, 2005

The N-brain rule

Glenn Brunette's post on the Two Man Rule reminded me of a topic I've been meaning to write about for a while. 

One of the successful teams on Junkyard Wars/Scrapheap Challenge was the NERDS.  They had a "two brain rule" for design changes -- no team member could unilaterally make a design change mid-build without another team member agreeing to the change.

Inside the solaris development community, we have a fairly strict N-brain rule, with N being 3 (or greater) depending on the scale and nature of the change.  This applies to all changes, regardless of how small.

N changes depending on a bunch of factors:
  • How big of a change it is
  • How obvious a change it is
  • When in the release cycle it's happening -- late in the release cycle we require two code reviewers.
Our development processes are "shrink to fit"; but the minimal case has N=3. 

Brain #1 is the engineer responsible for the change.
Brain #2 is the code reviewer.
Brain #3 is the "RTI" (Request to Integrate) approver.

In short, unless you can convince two (or more) other people that your change makes sense, is worth making, and is fully baked, it has no business being in a general purpose system like Solaris.

Changes with more significant impact require other levels of review.. more on that later.

UPDATE:  to clarify based on a question I got out of band: in the reduced case shown above, none of the people in the review process are managers.

Sunday Apr 17, 2005

And it's already starting to pay off

The RFE I integrated a week & change ago has already started paying off: it just caught an upgrade script bug in a project that hasn't even integrated into solaris...

Tuesday Jan 25, 2005

Another production use of the infamous circularity detector..

Bryan asks:

    "Is this the only use of this infamous technique in shipping production code?"


It took me less than a minute to find a place where the same technique was used in GNU Emacs, in src/print.c:print_object(), in the C implementation of the LISP s-expression printing code.   In LISP systems, the s-expression printer gets used from debuggers and the like; as with dtrace, it's generally bad form to get stuck chasing your tail printing out a broken data structure, so this hack is used a fair bit..

The emacs implementation is slightly different; instead of moving the scout out ahead, it has a "halftail" pointer lagging behind, moving at half the speed of the main pointer.  Actually, in the dtrace code in question, the half-tail approach might be slightly more efficient in the normal case where the data structures aren't circular and the thing you're looking for is usually in the list.
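The two variants can be sketched as follows (a generic Python illustration, not the actual dtrace or emacs C code): the "scout" version races a pointer two steps ahead for every one step of the walker, while the emacs-style version trails a half-speed pointer behind the main one.  Either way, a circular list is detected when the two pointers meet, and a terminated list falls off the end normally.

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def is_circular_scout(head):
    """Fast 'scout' pointer runs ahead two steps per one step of
    the walker; in a cycle the scout eventually laps the walker."""
    walker = scout = head
    while scout is not None and scout.next is not None:
        walker = walker.next
        scout = scout.next.next
        if walker is scout:
            return True
    return False

def is_circular_halftail(head):
    """Emacs print.c style: a 'halftail' pointer lags behind, moving
    at half the speed of the main pointer; the main pointer catches
    it from behind if and only if there's a cycle."""
    halftail = ptr = head
    steps = 0
    while ptr is not None:
        ptr = ptr.next
        steps += 1
        if ptr is halftail:
            return True
        if steps % 2 == 0:       # advance the lagging pointer every other step
            halftail = halftail.next
    return False

a, b, c = Node("a"), Node("b"), Node("c")
a.next, b.next = b, c
print(is_circular_scout(a), is_circular_halftail(a))   # False False
c.next = a                                             # close the loop
print(is_circular_scout(a), is_circular_halftail(a))   # True True
```

Both are constant-space; the halftail variant does slightly less pointer-chasing per iteration, which is the efficiency point made above for the common non-circular case.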



