Wednesday Nov 25, 2009

Oracle Database 11g Release 2 is now available for Solaris (SPARC and x86)

Oracle Database 11g Release 2 for Solaris is now available for download at the Oracle Technology Network. This includes 64-bit binaries for both SPARC and x86. This is pretty exciting news since this is the first time that x86 binaries for Solaris have been available at the same time as SPARC. A big thanks to our friends at Oracle for making this happen.

Let the upgrades and installations begin.


Monday Nov 23, 2009

Great Lakes OpenSolaris Users Group - Nov 2009

I would like to thank Chip Bennett and all of the fine folks from Laurus Technologies for hosting the November meeting of the Great Lakes OpenSolaris Users Group (GLUG), especially on such short notice. It was a pleasure coming back and I enjoyed meeting up with some old friends and making some new ones.

We had a rather nice discussion around recent enhancements to ZFS. As promised, I have posted my slides for your review. Please let me know if you have any trouble downloading them or if you find any confusing or erroneous bits.

I appreciate all of the folks that turned out as well as those that connected to the webcast. I hope to see all of you again at a future meeting.

Sunday Nov 22, 2009

Taking ZFS deduplication for a test drive

Now that I have a working OpenSolaris build 128 system, I just had to take ZFS deduplication for a spin, to see if it was worth all of the hype.

Here is my test case: I have 2 directories of photos, totaling about 90MB each. And here's the trick - they are almost complete duplicates of each other. I downloaded all of the photos from the same camera on 2 different days. How many of you do that? Yeah, me too.

Let's see what ZFS can figure out about all of this. If it is super smart we should end up with a total of 90MB of used space. That's what I'm hoping for.

The first step is to create the pool and turn on deduplication from the beginning.
# zpool create -f scooby -O dedup=on c2t2d0s2
This will use sha256 for determining if 2 blocks are the same. Since sha256 has such a low collision probability (something like 1x10^-77), we will not turn on automatic verification. If we were using an algorithm like fletcher4, which has a higher collision rate, we should also perform a complete block comparison before allowing the duplicate block to be removed (dedup=fletcher4,verify).
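
If you would rather pay for a full block comparison than trust the hash alone, the verifying variant is just another property value. A minimal sketch, reusing the same disk as above (fletcher4 must always be paired with verify since it is a weak checksum):
# zpool create -f scooby -O dedup=fletcher4,verify c2t2d0s2
The extra reads for verification cost some performance, so sha256 without verify is a reasonable default for most workloads.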

Now copy the first 180MB (remember, this is 2 sets of 90MB which are nearly identical sets of photos).
# zfs create scooby/doo
# cp -r /pix/Alaska* /scooby/doo
And the second set.
# zfs create scooby/snack
# cp -r /pix/Alaska* /scooby/snack
And finally the third set.
# zfs create scooby/dooby
# cp -r /pix/Alaska* /scooby/dooby
Let's make sure there are in fact three copies of the photos.
# df -k | grep scooby
scooby               74230572      25 73706399     1%    /scooby
scooby/doo           74230572  174626 73706399     1%    /scooby/doo
scooby/snack         74230572  174626 73706399     1%    /scooby/snack
scooby/dooby         74230572  174625 73706399     1%    /scooby/dooby


OK, so far so good. But I can't quite tell if the deduplication is actually doing anything. With all that free space, it's sort of hard to see. Let's look at the pool properties.
# zpool get all scooby
NAME    PROPERTY       VALUE       SOURCE
scooby  size           71.5G       -
scooby  capacity       0%          -
scooby  altroot        -           default
scooby  health         ONLINE      -
scooby  guid           5341682982744598523  default
scooby  version        22          default
scooby  bootfs         -           default
scooby  delegation     on          default
scooby  autoreplace    off         default
scooby  cachefile      -           default
scooby  failmode       wait        default
scooby  listsnapshots  off         default
scooby  autoexpand     off         default
scooby  dedupratio     5.98x       -
scooby  free           71.4G       -
scooby  allocated      86.8M       -
Now this is telling us something.

First notice the allocated space: just shy of 90MB. There is 522MB of data in the pool (174MB x 3), but only 87MB allocated. That's a good start.

Now take a look at the dedupratio. Almost 6, and that's exactly what we would expect if ZFS is as good as we are led to believe. 3 sets of 2 duplicate directories makes 6 total copies of the same set of photos, and ZFS caught every one of them.
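
If you only want the ratio, you can ask for that single property rather than dumping them all:
# zpool get dedupratio scooby
NAME    PROPERTY    VALUE  SOURCE
scooby  dedupratio  5.98x  -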

So if you want to do this yourself, point your OpenSolaris package manager at the dev repository and wait for build 128 packages to show up. If you need instructions on using the OpenSolaris dev repository, point the browser of your choice at http://pkg.opensolaris.org/dev/en/index.shtml. And if you can't wait for the packages to show up, you can always .
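
For reference, pointing an existing OpenSolaris installation at the dev repository is just a publisher change. A rough sketch, assuming the default opensolaris.org publisher name:
# pkg set-publisher -O http://pkg.opensolaris.org/dev opensolaris.org
# pkg refresh --full
# pkg image-update
Remember that once build 128 lands you will be upgrading your whole image, not just picking up the new ZFS bits.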


VirtualBox 3.1 Beta 2 now available for download

The second beta release of VirtualBox 3.1 is now available for testing. You can download the binaries for all platforms at http://download.virtualbox.org/virtualbox/3.1.0_BETA2/

In addition to the new features introduced with Beta 1 (live migration, branched snapshots, new OpenSolaris USB support, 2D video acceleration, network changes while VM is running), the following changes have gone into this release.
  • Fixed snapshots of VMs with empty DVD attachments (3.1.0 Beta 1 regression)
  • Fixed snapshots of VMs with multiple attached disk drives (3.1.0 Beta 1 regression)
  • GUI: fixed a crash in the snapshots widget
  • VMM: guest SMP fixes in rare cases
  • VMM: fixed kernel panic with older Linux kernels, e.g. CentOS 5.3 (APIC; 3.0.12 + 3.1.0 Beta 1 regression)
  • 3D support: fixed the final frame of a Compiz animation not being updated on the screen (public ticket #4653, partly fixed, needs more work)
  • 2D/3D support, Windows: fixed a GL support test failure when the VBox installation path contains spaces
  • 2D support: fixed a saved state restore failure when 2D acceleration is active and in use by the guest (e.g. playing a video)
  • Creating a new VM sometimes failed (3.1.0 Beta 1 regression)
  • VBoxManage clonehd was broken if the same image was cloned concurrently
  • added VBoxManage usage information for Floppy/DVD images
  • Linux Additions: properly disable the seamless mode if the guest doesn't support it
  • Windows Additions: fixed high usage of VBoxService on Windows guests (3.1.0 Beta 1 regression)
  • Windows Additions: VBox Credential Provider fixes
  • Windows guests: fixed invisible mouse pointer when VM was restored from a saved state.
  • VirtIO: performance improvements
  • EFI: 32-bpp VGA driver was added.


Please refer to the VirtualBox 3.1 Beta 2 Changelog for a complete list of changes and enhancements. Also please read the known issues before installing this beta release.

Important Note: Please do not use this VirtualBox Beta release on production machines. A VirtualBox Beta release should be considered a bleeding-edge release meant for early evaluation and testing purposes.

Report problems or issues at the VirtualBox Beta Forum.


Tuesday Nov 17, 2009

VirtualBox 3.0.12 is now available

VirtualBox 3.0.12 has been released for all platforms. 3.0.12 is a maintenance release.

Changes in VirtualBox 3.0.12 include
  • VMM: reduced IO-APIC overhead for 32-bit Windows NT/2000/XP/2003 guests; requires 64-bit support (VT-x only)
  • VMM: fixed double timer interrupt delivery on old Linux kernels using IO-APIC (caused guest time to run at double speed)
  • VMM: reinitialize VT-x and AMD-V after host suspend or hibernate; some BIOSes forget this (Windows hosts only)
  • VMM: fix loading of saved state when RAM preallocation is enabled
  • BIOS: ignore unknown shutdown codes instead of causing a guru meditation
  • GUI: never start a VM on a single click into the selector window
  • Serial: reduce the probability of lost bytes if the host end is connected to a raw file
  • VMDK: fix handling of split image variants and fix a 3.0.10 regression
  • VRDP: fixed occasional VRDP server crash
  • Network: even if the virtual network cable was disconnected, some guests were able to send / receive packets (E1000)
  • Network: even if the virtual network cable was disconnected, the PCNet card received some spurious packets which might confuse the guest
  • Shared folders: fixed changing case of file names
  • Windows Additions: fix crash in seamless mode
  • Linux Additions: fix writing to files opened in O_APPEND mode
  • Solaris Additions: fix regression in guest additions driver which among other things caused lost guest property updates and periodic error messages being written to the system log
For a complete list of the changes in this release, please take a look at the VirtualBox Changelog.


Saturday Nov 14, 2009

VirtualBox 3.1 Beta 1 now available for download

The first beta release of VirtualBox 3.1 is now available. You can download the binaries at http://download.virtualbox.org/virtualbox/3.1.0_BETA1/

Version 3.1 will be a major update. The following major new features have been added:
  • Migration of a VM session from one system to another (Teleportation)
  • VM states can be restored from an arbitrary snapshot (branched snapshots)
  • New snapshots can be taken from other snapshots (allows for forking VMs)
  • 2D video acceleration for Windows guests using the host video hardware for overlay stretching and colour conversion
  • The network attachment type can be changed while a VM is running
  • Experimental support for new USB features in OpenSolaris hosts (nv124 and later)
  • Significant performance improvements for PAE and AMD64 guests (VT-x and AMD-V only; normal (non-nested) paging)
  • Experimental support for EFI (Extensible Firmware Interface)

A particularly nice new feature in the user interface is a red/green visual indicator on critical resource allocation controls. The indicators are relative to the total capacity of the system rather than to what is currently free, but they should prevent accidental overallocation of resources like CPU or memory.

Important: You should reinstall the guest additions after upgrading to the beta release.

If you experience problems after upgrading to the beta release (such as SMP Solaris getting upset with heavy network load) you can reinstall an older version of VirtualBox. Just remember to reinstall the guest additions after the downgrade.

If your VMs are in an Inaccessible state, check the XML configuration files. In the details panel you should see something like
Could not load the settings file 'D:\blah\.VirtualBox\Machines\Chapterhouse (Solaris 10)\Chapterhouse (Solaris 10).xml'.

Element '{http://innotek.de/VirtualBox-settings}CpuIdTree': This element is not expected.
Result Code:          VBOX_E_XML_ERROR  (0x80BB000A)
Component:            VirtualBox
Interface:            IVirtualBox {3fab53a-199b-4526-a91a-93ff62e456b8}
Using the editor of your choice, open the XML file named in the error message and remove the <CpuIdTree/> tag. Save the file, hit the refresh button in the VirtualBox window, and you should be able to restart your VM.
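
If you have a pile of VMs to repair, something along these lines may save some clicking. A rough sketch, assuming a hypothetical Chapterhouse.xml settings file where the offending element is self-closing and sits on its own line (keep the backup until you have confirmed the VM starts):
# cp Chapterhouse.xml Chapterhouse.xml.bak
# sed '/<CpuIdTree\/>/d' Chapterhouse.xml.bak > Chapterhouse.xml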

Please refer to the VirtualBox 3.1 Beta 1 Changelog for a complete list of changes and enhancements. Please do not use this VirtualBox Beta release on production machines. A VirtualBox Beta release should be considered a bleeding-edge release meant for early evaluation and testing purposes. Please read the known issues before installing this beta release.

Please use our 'VirtualBox Beta Feedback' forum at http://forums.virtualbox.org/viewforum.php?f=15 to report any problems with the Beta.


Friday Oct 30, 2009

VirtualBox 3.0.10 is now available

VirtualBox 3.0.10 has been released for all platforms. 3.0.10 is a maintenance release and contains quite a few performance and stability improvements.

# psrinfo -v
Status of virtual processor 0 as of: 10/30/2009 14:33:18
  on-line since 10/30/2009 08:18:50.
  The i386 processor operates at 2613 MHz,
	and has an i387 compatible floating point processor.
Status of virtual processor 1 as of: 10/30/2009 14:33:18
  on-line since 10/30/2009 08:18:52.
  The i386 processor operates at 2613 MHz,
	and has an i387 compatible floating point processor.
Status of virtual processor 2 as of: 10/30/2009 14:33:18
  on-line since 10/30/2009 08:18:52.
  The i386 processor operates at 2613 MHz,
	and has an i387 compatible floating point processor.
Status of virtual processor 3 as of: 10/30/2009 14:33:18
  on-line since 10/30/2009 08:18:52.
  The i386 processor operates at 2613 MHz,
	and has an i387 compatible floating point processor.


As you can see my 4 CPU Solaris guest is running just fine. I was encouraged after 3.0.8 but under heavy network loads the SMP guest would lock up. I've been playing youtube videos while doing live upgrades for the last couple of hours and everything seems to be running as expected.

Changes in VirtualBox 3.0.10 include
  • VMM: guest SMP stability fixes
  • VMM: fixed guru meditation with nested paging and SMP guests
  • VMM: changed VT-x/AMD-V usage to detect other active hypervisors; necessary for e.g. Windows 7 XP compatibility mode (Windows & Mac OS X hosts only)
  • VMM: guru meditation during SCO OpenServer installation and reboot (VT-x only)
  • VMM: fixed accessed bit handling in certain cases
  • VMM: fixed VPID flushing (VT-x only)
  • VMM: fixed broken nested paging for 64-bit guests on 32-bit hosts (AMD-V only)
  • VMM: fixed loading of old saved states/snapshots
  • Mac OS X hosts: fixed memory leaks
  • Mac OS X hosts (Snow Leopard): fixed redraw problem in a dual screen setup
  • Windows hosts: installer updates for Windows 7
  • Solaris hosts: out of memory handled incorrectly
  • Solaris hosts: the previous fix for #5077 broke the DVD host support on Solaris 10 (VBox 3.0.8 regression)
  • Linux hosts: fixed module compilation against Linux 2.6.32rc4 and later
  • Guest Additions: fixed possible guest OS kernel memory exhaustion
  • Guest Additions: fixed stability issues with SMP guests
  • Windows Additions: fixed color depth issue with low resolution hosts, netbooks, etc.
  • Windows Additions: fixed NO_MORE_FILES error when saving to shared folders
  • Windows Additions: fixed subdirectory creation on shared folders
  • Linux Additions: sendfile() returned -EOVERFLOW when executed on a shared folder
  • Linux Additions: fixed incorrect disk usage value (non-Windows hosts only)
  • Linux installer: register the module sources at DKMS even if the package provides proper modules for the current running kernel
  • 3D support: removed invalid OpenGL assertion
  • Network: fixed the Am79C973 PCNet emulation for QNX (and probably other) guests
  • VMDK: fix handling of split image variants
  • VHD: do not delay updating the footer when expanding the image to prevent image inconsistency
  • USB: stability fix for some USB 2.0 devices
  • GUI: added a search index to the .chm help file
  • GUI/Windows hosts: fixed CapsLock handling on French keyboards
  • Shared clipboard/X11 hosts: fixed a crash when clipboard initialisation failed
If you are running on a Solaris or OpenSolaris host you should reboot your systems after reloading the kernel driver as soon as it is convenient. This is a temporary situation that is expected to be resolved soon.

For a complete list of the changes in this release, please take a look at the VirtualBox Changelog.


Wednesday Oct 21, 2009

Pun for the day: Truculent

    Truculent (n) - A borrowed vehicle, normally from a neighbor or family member.

    Usage: I've returned that truculent me last week.

Tuesday Oct 20, 2009

Pun for the day: Biathlon

    Biathlon (v) - Choosing an AMD motherboard over Intel for your next home computer project.

Monday Oct 19, 2009

Pun for the day: Iconoclast

    Iconoclast (n) - Your most recent Twitter profile picture.

Saturday Oct 17, 2009

Pun for the day: Latency

    Latency (adv) - A warning, generally issued to teenagers.

    Usage: You can go to the movies with your friends, but come home latency that you will be grounded for the next two weeks.

Friday Oct 16, 2009

Pun for the day: IO Channel

    IO Channel (n) - A new cable TV network dedicated to actors who made careers out of playing peripheral characters.

Thursday Oct 15, 2009

Pun for the day: Hypervisor

    hypervisor (n) - the type of boss that wants regular status reports. Hourly.

Wednesday Oct 14, 2009

Pun for the day: Paravirtualization

    paravirtualization (n) - what happens when you run both VMware and VirtualBox in your environment.

Friday Oct 09, 2009

What's New in Solaris 10 10/09


Solaris 10 10/09 (u8) is now available for download at http://sun.com/solaris/get.jsp. DVD ISO images (full and segments that can be reassembled after download) are available for both SPARC and x86.

Here are a few of the new features in this release that caught my attention.

Packaging and Patching

Improved performance of SVR4 package commands: Improvements have been made in the SVR4 package commands (pkgadd, pkgrm, pkginfo, et al). The impact can be seen in drastically reduced zone installation times. How much of an improvement, you ask (and you know I have to answer with some data, right)?
# cat /etc/release; uname -a

                        Solaris 10 5/09 s10x_u7wos_08 X86
           Copyright 2009 Sun Microsystems, Inc.  All Rights Reserved.
                        Use is subject to license terms.
                             Assembled 30 March 2009
SunOS chapterhouse 5.10 Generic_141415-09 i86pc i386 i86pc

# time zoneadm -z zone1 install
Preparing to install zone <zone1>.
Creating list of files to copy from the global zone.
Copying <2905> files to the zone.
Initializing zone product registry.
Determining zone package initialization order.
Preparing to initialize <1453> packages on the zone.
Initialized <1453> packages on zone.
Zone <zone1> is initialized.
Installation of these packages generated errors: 
The file  contains a log of the zone installation.

real    5m48.476s
user    0m45.538s
sys     2m9.222s
#  cat /etc/release; uname -a

                       Solaris 10 10/09 s10x_u8wos_08a X86
           Copyright 2009 Sun Microsystems, Inc.  All Rights Reserved.
                        Use is subject to license terms.
                           Assembled 16 September 2009
SunOS corrin 5.10 Generic_141445-09 i86pc i386 i86pc

# time zoneadm -z zone1 install
Preparing to install zone <zone1>.
Creating list of files to copy from the global zone.
Copying <2915> files to the zone.
Initializing zone product registry.
Determining zone package initialization order.
Preparing to initialize <1432> packages on the zone.
Initialized <1432> packages on zone.
Zone <zone1> is initialized.
Installation of these packages generated errors: 
The file  contains a log of the zone installation.

real    3m4.677s
user    0m44.593s
sys     0m48.003s
OK, that's pretty impressive. A zone installation on Solaris 10 10/09 takes about half the time that it does on Solaris 10 5/09. It is also worth noting the rather large reduction in system time (48 seconds vs 129 seconds).

Zones parallel patching: Before Solaris 10 10/09 the patching process was single threaded, which could lead to prolonged patching times on a system with several nonglobal zones. Starting with this update you can specify the number of threads to be used when patching a system with zones. Enable this feature by assigning a value to num_proc in /etc/patch/pdo.conf. The effective value is capped at 1.5 times the number of on-line CPUs; a lower num_proc value further limits the thread count.
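
A minimal sketch, assuming an 8 CPU system (so the effective cap is 12 threads) and that pdo.conf takes simple key=value assignments:
# echo "num_proc=12" >> /etc/patch/pdo.conf
Subsequent patchadd runs will then patch up to 12 zones in parallel; the 1.5x cap still applies no matter what value you write here.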

This feature is also available by applying Solaris patches 119254-66 (SPARC) or 119255-66 (x86).

For more information on the effects of zone parallel patching, see Container Guru Jeff Victor's excellent Patching Zones Goes Zoom.

ZFS Enhancements

Flash archive install into a ZFS root filesystem: ZFS support for the root file system was introduced in Solaris 10 10/08, but the install tools did not work with flash archives. Solaris 10 10/09 provides the ability to install a flash archive created from an existing ZFS root system. This capability is also provided by patches 119534-15 + 124630-26 (SPARC) or 119535-15 + 124631-27 (x86), which can be applied to a Solaris 10 10/08 or later system. There are still a few limitations: the flash source must be from a ZFS root system and you cannot use differential archives. More information can be found in Installing a ZFS Root File System (Flash Archive Installation).

Set ZFS properties on the initial zpool file system: Prior to Solaris 10 10/09, ZFS file system properties could only be set once the initial file system was created. This made it impossible to create a pool with the same name as an existing mounted file system, or to have replication or compression in effect from the moment the pool is created. In Solaris 10 10/09 you can specify any ZFS file system property at pool creation using zpool create -O (one -O per property).
# zpool create -O mountpoint=/data -O copies=3 -O compression=on datapool c1t1d0 c1t2d0
ZFS Read Cache (L2ARC): You now have the ability to add second-level read cache devices (the L2ARC) to a ZFS pool. This can improve the read performance of ZFS as well as reduce the ZFS memory footprint.

L2ARC devices are added as cache vdevs to a pool. In the following example we will create a pool of 2 mirrored devices, 2 cache devices and a spare.
 
# zpool create datapool mirror c1t1d0 c1t2d0 cache c1t3d0 c1t4d0 spare c1t5d0

# zpool status datapool
  pool: datapool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        datapool    ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
        cache
          c1t3d0    ONLINE       0     0     0
          c1t4d0    ONLINE       0     0     0
        spares
          c1t5d0    AVAIL

errors: No known data errors
So what do ZFS cache devices do? Rather than go into a lengthy explanation of the L2ARC, I will refer you to Fishworks developer Brendan Gregg's excellent treatment of the subject.

Unlike the intent log (ZIL), L2ARC cache devices can be added and removed dynamically.
# zpool remove datapool c1t3d0
# zpool remove datapool c1t4d0

# zpool status datapool
  pool: datapool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        datapool    ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
        spares
          c1t5d0    AVAIL

errors: No known data errors


# zpool add datapool cache c1t3d0

# zpool status datapool
  pool: datapool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        datapool    ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
        cache
          c1t3d0    ONLINE       0     0     0
        spares
          c1t5d0    AVAIL

errors: No known data errors
New cache control properties: Two new ZFS properties are introduced in Solaris 10 10/09. These control what is stored (nothing, data + metadata, or metadata only) in the ARC (memory) and L2ARC (external) caches. The new properties are
  • primarycache - controls what is stored in the memory resident ARC cache
  • secondarycache - controls what is stored in the L2ARC
and they can take the values
  • none - the caches are not used
  • metadata - only file system metadata is cached
  • all - both file system data and metadata are stored in the associated cache
# zpool create -O primarycache=metadata -O secondarycache=all datapool c1t1d0 c1t2d0 cache c1t3d0 
There are workloads, such as databases, that perform better or make more efficient use of memory if the system is not competing with the caches that the applications maintain themselves.
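
Both properties can also be set on an existing file system rather than at pool creation. A small sketch, using a hypothetical datapool/db file system that holds database files:
# zfs set primarycache=metadata datapool/db
# zfs set secondarycache=none datapool/db
This keeps the database's own buffer cache from being duplicated in the ARC while still caching the metadata needed to find the blocks.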

User and group quotas: ZFS has always had quotas and reservations, but they were applied at the file system level. Achieving user or group quotas would require creating additional file systems, which might make administration more complex. Starting with Solaris 10 10/09 you can apply both user and group quotas to a file system, much like you would with UFS. The zpool must be at version 15 or later and the ZFS file system must be at version 4 or later.

Let's create a file system and see if we are at the proper versions to set quotas.
# zfs create rpool/newdata
# chown bobn:local /rpool/newdata

# zpool get version rpool
NAME   PROPERTY  VALUE    SOURCE
rpool  version   18       default


# zpool upgrade -v
This system is currently running ZFS pool version 18.

The following versions are supported:

VER  DESCRIPTION
---  --------------------------------------------------------
 1   Initial ZFS version
 2   Ditto blocks (replicated metadata)
 3   Hot spares and double parity RAID-Z
 4   zpool history
 5   Compression using the gzip algorithm
 6   bootfs pool property
 7   Separate intent log devices
 8   Delegated administration
 9   refquota and refreservation properties
 10  Cache devices
 11  Improved scrub performance
 12  Snapshot properties
 13  snapused property
 14  passthrough-x aclinherit
 15  user/group space accounting
 16  stmf property support
 17  Triple-parity RAID-Z
 18  snapshot user holds
For more information on a particular version, including supported releases, see:

http://www.opensolaris.org/os/community/zfs/version/N

Where 'N' is the version number.


# zfs get version rpool/newdata
NAME           PROPERTY  VALUE    SOURCE
rpool/newdata  version   4        -

# zfs upgrade -v
The following filesystem versions are supported:

VER  DESCRIPTION
---  --------------------------------------------------------
 1   Initial ZFS filesystem version
 2   Enhanced directory entries
 3   Case insensitive and File system unique identifier (FUID)
 4   userquota, groupquota properties

For more information on a particular version, including supported releases, see:

http://www.opensolaris.org/os/community/zfs/version/zpl/N

Where 'N' is the version number.
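
If your pool or file system was created on an older release, getting to these versions is a one-liner each. Note that upgrades are one-way; older releases will no longer be able to import an upgraded pool:
# zpool upgrade rpool
# zfs upgrade rpool/newdata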
Excellent. Now let's set a user and group quota and see what happens. We'll set a group quota of 1GB and a user quota of 2GB.
# zfs set groupquota@local=1g rpool/newdata
# zfs set userquota@bobn=2g rpool/newdata

# su - bobn

% mkfile 500M /rpool/newdata/file1
% mkfile 500M /rpool/newdata/file2
% mkfile 500M /rpool/newdata/file3
file3: initialized 40370176 of 524288000 bytes: Disc quota exceeded

As expected, we have exceeded our group quota. Let's change the group of the existing files and see if we can proceed to our user quota.
% rm /rpool/newdata/file3
% chgrp sales /rpool/newdata/file1 /rpool/newdata/file2
% mkfile 500m /rpool/newdata/file3
Could not open /rpool/newdata/file3: Disc quota exceeded

Whoa! What's going on here? Relax - ZFS does things asynchronously unless told otherwise, and we should have noticed this when the mkfile for file3 actually started: ZFS wasn't quite caught up with the current usage. A good sync should do the trick.
% sync
% mkfile 500M /rpool/newdata/file3
% mkfile 500M /rpool/newdata/file4
% mkfile 500M /rpool/newdata/file5
/rpool/newdata/file5: initialized 140247040 of 524288000 bytes: Disc quota exceeded

Great. We now have user and group quotas. How can I find out what I have used against my quota? There are two new ZFS properties, userused and groupused, that will show what the user or group is currently consuming.
% zfs get userquota@bobn,userused@bobn rpool/newdata
NAME           PROPERTY        VALUE           SOURCE
rpool/newdata  userquota@bobn  2G              local
rpool/newdata  userused@bobn   1.95G           local

% zfs get groupquota@local,groupused@local rpool/newdata
NAME           PROPERTY          VALUE             SOURCE
rpool/newdata  groupquota@local  1G                local
rpool/newdata  groupused@local   1000M             local

% zfs get groupquota@sales,groupused@sales rpool/newdata
NAME           PROPERTY          VALUE             SOURCE
rpool/newdata  groupquota@sales  none              local
rpool/newdata  groupused@sales   1000M             local

% zfs get groupquota@scooby,groupused@scooby rpool/newdata
NAME           PROPERTY           VALUE              SOURCE
rpool/newdata  groupquota@scooby  -                  -
rpool/newdata  groupused@scooby   -   
New space usage properties: Four new usage properties have been added to ZFS file systems.
  • usedbychildren (usedchild) - the amount of space used by all of the children of the specified dataset
  • usedbydataset (usedds) - the amount of space that would be freed if the dataset itself were destroyed, after first destroying its snapshots and removing its reservations
  • usedbyrefreservation (usedrefreserv) - the amount of space that would be freed if the dataset's refreservation were removed
  • usedbysnapshots (usedsnap) - the total amount of space that would be freed if all of the snapshots of this dataset were deleted
# zfs get all datapool | grep used
datapool  used                  5.39G                  -
datapool  usedbysnapshots       19K                    -
datapool  usedbydataset         26K                    -
datapool  usedbychildren        5.39G                  -
datapool  usedbyrefreservation  0                      -


These new properties can also be viewed in a nice tabular form using zfs list -o space.
# zfs list -r -o space datapool
NAME           AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
datapool        480M  5.39G       19K     26K              0      5.39G
datapool@now       -    19K         -       -              -          -
datapool/fs1    480M   400M         0    400M              0          0
datapool/fs2   1.47G  1.00G         0   1.00G              0          0
datapool/fs3    480M    21K         0     21K              0          0
datapool/fs4   2.47G      0         0       0              0          0
datapool/vol1  1.47G     1G         0     16K          1024M          0

Miscellaneous

Support for 2TB boot disks: Solaris 10 10/09 supports a disk Volume Table of Contents (VTOC) of up to 2TB in size. The previous maximum VTOC size was 1TB. On x86 systems you must be running a 64-bit kernel and have at least 1GB of memory to use a VTOC larger than 1TB.
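
Checking the x86 prerequisites is quick; isainfo reports whether you are running a 64-bit kernel and prtconf shows installed memory:
# isainfo -kv
# prtconf | grep Memory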

pcitool: A new command for Solaris that can assign interrupts to specific CPUs or display the current interrupt routing. This command is available for both SPARC and x86.

New iSCSI initiator SMF service: svc:/network/iscsi/initiator:default is a new Service Management Facility (SMF) service that controls discovery and enumeration of iSCSI devices early in the boot process. Other boot services that require iSCSI devices can add dependencies on it to ensure that the devices are available before they are needed.
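
A quick sketch of inspecting and enabling the new service with the standard SMF tools:
# svcs -l svc:/network/iscsi/initiator:default
# svcadm enable svc:/network/iscsi/initiator:default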

Device Drivers

The following device drivers are either new to Solaris or have had some new features or chipsets added.
  • MPxIO support for the LSI 6180 Controller
  • LSI MPT 2.0 SAS 2.0 controllers (mpt_sas)
  • Broadcom NetXTreme II gigabit Ethernet (bcm5716c and bcm5716s) controllers
  • Interrupt remapping for Intel VT-d enabled processors
  • Support for SATA AHCI tape
  • Sun StorageTek 6Gb/s SAS RAID controller and LSI MegaRAID 92xx (mr_sas)
  • Intel 82598 and 82599 10Gb/s PCIe Ethernet controller

Open Source Software Updates

The following open source packages have been updated for Solaris 10 10/09.
  • NTP 4.2.5
  • PostgreSQL versions 8.1.17, 8.2.13 and 8.3.7
  • Samba 3.0.35

For more information

A complete list of new features and changes can be found in the Solaris 10 10/09 Release Notes and the What's New in Solaris 10 10/09 documentation at docs.sun.com.

About

Bob Netherton is a Principal Sales Consultant for the North American Commercial Hardware group, specializing in Solaris, Virtualization and Engineered Systems. Bob is also a contributing author of Solaris 10 Virtualization Essentials.

This blog contains information about all three, but is primarily focused on topics for Solaris system administrators.

Please follow me on Twitter or Facebook, or send me email.
