Branded Container Hurdle...
By mrbill on Jul 26, 2008
We ran into an interesting hurdle with our branded containers project. Once we had everything up and running, we decided to test patching the global zone (Solaris 10) and patching within the local zone (Solaris 8). We downloaded both "Recommended Patch Clusters" from SunSolve, and applied them.
The Solaris 10 patch installation went without a hitch (as expected), but adding patches to the Solaris 8 branded container failed. The error message from patchadd was that there wasn't enough disk space to install the patches. None of the patches would install. We did some checking, and "df -h" showed that we had 22GB of free disk space on the partition containing the zone. Weird. So we looked at patchadd, and it was running "df" on the /var/sadm directory to see if there was enough space to hold the files necessary to back out the patches. "df -k /var/sadm" failed with a message "special device and mount point are the same for loopback file system". So "df -k" worked fine, but "df -k [something]" would fail. Interesting.
I popped a note off to the Branded Containers team at Sun, and received an answer within 10 minutes. Apparently I was running into a known (and fixed) bug:
Status: RELEASED
Patch Id: 137749-01
fixes BugID 6653043
...
Description: When running a Solaris 8 zone within Migration Assistant (etude), it is possible to have a loopback filesystem mount in which the underlying mount is not visible in the zone. The actual issue is addressed very simply in the Solaris 10 df by saving the best match while doing the walk backwards through the mount table.
This bug manifests itself if your branded zone is using filesystems that have loopback filesystems (lofs) underneath. Since we were mounting our disk space under /z/[zonename]_root and then using lofs to mount it into the zonepath (/zones/[zonename]), this one bit us. Even our application and data spaces were mounted under /z (with zonename and resource name included in the mountpoint name) and then imported into the zones with zonecfg "add fs". A little more research revealed that the error message that we were receiving was actually removed from the Solaris 8 source code as part of the fix.
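For anyone setting up something similar, the zonecfg steps to import a filesystem that way look roughly like this (the zone name, mount points, and paths here are made-up examples, not our actual configuration):

```
# zonecfg -z s8zone
zonecfg:s8zone> add fs
zonecfg:s8zone:fs> set dir=/app
zonecfg:s8zone:fs> set special=/z/s8zone_app
zonecfg:s8zone:fs> set type=lofs
zonecfg:s8zone:fs> end
zonecfg:s8zone> commit
```

Every filesystem imported this way is another lofs mount inside the zone, which is exactly the situation that triggers the bug.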
-	if (EQ(match->mte_mount->mnt_fstype, MNTTYPE_LOFS) &&
-	    EQ(match->mte_mount->mnt_mountp,
-	    match->mte_mount->mnt_special))
-		errmsg(ERR_FATAL,
-		    "special device and mount point are the same for loopback file system");
Now comes a more interesting question: if I need a patch to make adding patches work, how can I install that patch? The answer is to use the "patchadd -d" option, which tells patchadd not to save the old versions of the files being patched (in this case, just the "df" binary), so it never runs the free-space check that fails.
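Concretely, inside the Solaris 8 zone that looks something like this (the patch directory location is just an example; 137749-01 is the patch ID from the bug report above):

```
# cd /var/tmp
# patchadd -d 137749-01
```

The trade-off with "-d" is that the patch cannot be backed out later, which is acceptable here since it fixes a single broken binary.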
Interesting note: patch 137749-01 is not included in the latest Solaris 8 recommended patch cluster, nor in the Solaris 8 Branded Containers package. It must be downloaded separately and applied by hand inside the Solaris 8 branded container.
One of the Branded Containers support team guys did file an RFE to get the patch included in the software distribution for Solaris 8 Branded Containers for a future release.
Hopefully there is enough info in this blahg to give anyone who Googles the error message or symptom a quick path to the answer.