Wednesday Jan 06, 2010

ZFS resilver performance improved!

I'm not having much luck with the Western Digital 1Tb disks in my home server.

That is to say I'm extremely pleased I have ZFS, as both have now failed and in doing so corrupted data, which ZFS detected (although the users detected the problem first as the performance of the drive became so poor).

One of the biggest irritations about replacing drives, apart from having to shut the system down as I don't have hot-swap hardware, is waiting for the pool to resilver. Previously this has taken in excess of 24 hours.

However, yesterday's resilver came after I had upgraded to build 130, which has some improvements to the resilver code:


: pearson FSS 1 $; zpool status tank
  pool: tank
 state: ONLINE
status: The pool is formatted using an older on-disk format.  The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
        pool will no longer be accessible on older software versions.
 scrub: resilver completed after 6h28m with 0 errors on Wed Jan  6 02:04:17 2010
config:

        NAME           STATE     READ WRITE CKSUM
        tank           ONLINE       0     0     0
          mirror-0     ONLINE       0     0     0
            c21t0d0s7  ONLINE       0     0     0
            c20t1d0s7  ONLINE       0     0     0
          mirror-1     ONLINE       0     0     0
            c21t1d0    ONLINE       0     0     0  225G resilvered
            c20t0d0    ONLINE       0     0     0

errors: No known data errors
: pearson FSS 2 $; 
Only 6½ hours for 225G which, while not close to the theoretical maximum, is way better than 24 hours, and the system was usable while this was going on.
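
For anyone who has not had to do this, the step that kicks off a resilver like the one above is an in-place replacement of the failed disk once the new drive is in the slot; something like this sketch (the device name here is just the mirror half shown as resilvered above, used as an example):

# tell ZFS the disk at c21t1d0 has been replaced with a new one; the
# resilver of that side of mirror-1 then starts automatically
zpool replace tank c21t1d0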

Tuesday Dec 23, 2008

Reading random disk blocks with format

Occasionally it is useful to be able to read blocks from a disk when there is no label on it. Most applications won't be able to open such a device, since you need to pass the O_NDELAY flag to the open system call.

Luckily it is possible to use format to read arbitrary disk blocks, so you don't have to resort to writing a special application. The trick is to use the read analysis option in format and restrict the range of blocks to just the ones you are interested in. Then, once read, use the print buffer command to output the data:

# format

Searching for disks...done


AVAILABLE DISK SELECTIONS:
       0. c0t0d0 <SUN9.0G cyl 4924 alt 2 hd 27 sec 133>
          /pci@1f,4000/scsi@3/sd@0,0
Specify disk (enter its number): 0
selecting c0t0d0
[disk formatted]
Warning: Current Disk has mounted partitions.


FORMAT MENU:
        disk       - select a disk
        type       - select (define) a disk type
        partition  - select (define) a partition table
        current    - describe the current disk
        format     - format and analyze the disk
        repair     - repair a defective sector
        label      - write label to the disk
        analyze    - surface analysis
        defect     - defect list management
        backup     - search for backup labels
        verify     - read and display labels
        save       - save new disk/partition definitions
        inquiry    - show vendor, product and revision
        volname    - set 8-character volume name
        !<cmd>     - execute <cmd>, then return
        quit
format> ana


ANALYZE MENU:
        read     - read only test   (doesn't harm SunOS)
        refresh  - read then write  (doesn't harm data)
        test     - pattern testing  (doesn't harm data)
        write    - write then read      (corrupts data)
        compare  - write, read, compare (corrupts data)
        purge    - write, read, write   (corrupts data)
        verify   - write entire disk, then verify (corrupts data)
        print    - display data buffer
        setup    - set analysis parameters
        config   - show analysis parameters
        !<cmd>   - execute <cmd> , then return
        quit
analyze> set
Analyze entire disk[yes]? no
Enter starting block number[0, 0/0/0]: 0
Enter ending block number[0, 0/0/0]: 0
Loop continuously[no]? 
Enter number of passes[2]: 1
Repair defective blocks[yes]? 
Stop after first error[no]? yes
Use random bit patterns[no]? 
Enter number of blocks per transfer[1, 0/0/1]: 
Verify media after formatting[yes]? 
Enable extended messages[no]? 
Restore defect list[yes]? 
Restore disk label[yes]? 

analyze> read
Ready to analyze (won't harm SunOS). This takes a long time, 
but is interruptable with CTRL-C. Continue? y

        pass 0
   0/0/0  

Total of 0 defective blocks repaired.
analyze> print
0x53554e39  0x2e304720  0x63796c20  0x34393234  0x20616c74  0x20322068  
0x64203237  0x20736563  0x20313333  0x00000000  0x00000000  0x00000000  
0x00000000  0x00000000  0x00000000  0x00000000  0x00000000  0x00000000  
0x00000000  0x00000000  0x00000000  0x00000000  0x00000000  0x00000000  
0x00000000  0x00000000  0x00000000  0x00000000  0x00000000  0x00000000  
0x00000000  0x00000000  0x00000001  0x00000000  0x00000000  0x00080002  
0x00000003  0x00010005  0x00000000  0x00010000  0x00010000  0x00010000  
0x00010000  0x00010000  0x00000000  0x00000000  0x00000000  0x600ddeee  
0x00000000  0x00000000  0x00000000  0x00000000  0x00000000  0x00000000  
0x00000000  0x00000000  0x00000000  0x00000000  0x00000000  0x00000000  
0x00000000  0x00000000  0x00000000  0x00000000  0x00000000  0x00000000  
0x00000000  0x00000000  0x00000000  0x00000000  0x00000000  0x00000000  
0x00000000  0x00000000  0x00000000  0x00000000  0x00000000  0x00000000  
0x00000000  0x00000000  0x00000000  0x00000000  0x00000000  0x00000000  
0x00000000  0x00000000  0x00000000  0x00000000  0x00000000  0x00000000  
0x00000000  0x00000000  0x00000000  0x00000000  0x00000000  0x00000000  
0x00000000  0x00000000  0x00000000  0x00000000  0x00000000  0x00000000  
0x00000000  0x00000000  0x00000000  0x1518133e  0x00000000  0x00000001  
0x133c0002  0x001b0085  0x00000000  0x00000000  0x00fdc0a1  0x00001217  
0x00100e03  0x00000000  0x010dcea4  0x00000000  0x00000000  0x00000000  
0x00000000  0x00000000  0x00000000  0x00000000  0x00000000  0x00000000  
0x00000000  0xdabe5344  
analyze> 


Now I know those at the back will recognise this as a Solaris SPARC disk label complete with its VTOC: the first words are the ASCII label (0x53554e39 0x2e304720 spell "SUN9.0G ") and the giveaway VTOC sanity value of 0x600ddeee sits in the middle after it.

Wednesday Nov 05, 2008

Throttling disks

The disk drivers in Solaris support SCSI tagged queuing and have done for a long time. This enables them to send more than one command to a LUN (Logical Unit) at a time. The number of commands that can be sent in parallel is limited, or throttled, by the disk drivers so that they never send more commands than the LUN can cope with. While it is possible for the LUN to respond with a “queue full” SCSI status to tell the driver that it cannot cope with any more commands, there are significant problems with relying on this approach:

  • Devices connected via fibre channel have to negotiate onto the loop to return the queue full status. This can mean that by the time the device manages to return queue full the host has sent many more commands. This risks the LUN ending up in a situation it cannot cope with, which typically results in the LUN resetting.

  • If the LUN is being accessed from more than one host it is possible for it to return Queue full on the very first command. This makes it hard for the host to know when it will be safe to send a command since there are none outstanding from that host.

  • If the LUN is one of many LUNs on a single target it may share the total pool of commands that can be accepted across all the LUNs, and so again could respond with “queue full” on the first command to that LUN.

  • In the two cases above the total number of commands a single host can send to a single LUN will vary depending on conditions that the host simply cannot know, making adaptive algorithms unreliable.

All the above issues result in people taking the safest option and setting the throttle for a device low enough that the LUN never needs to send queue full, in some cases as low as 1. This is bad when limited to an individual LUN; it is terrible when done globally on the entire system.

As soon as you get to the point where you hit the throttle two things happen:

  1. You are no longer transferring data over the interconnect (fibre channel, parallel SCSI or iSCSI) for writes. This data has to wait until another command completes before it can be transferred, which reduces the throughput of the device. Your writes can end up being throttled by reads and hence tend towards the speed of the spinning disk, if the read has to go to the disk, even though you may have a write cache.

  2. The command is queued on the waitq, which will increase the latency still further if the queue becomes deep. See the latency bubbles post below for information about disksort's effect on latency.

Given that the system will regularly dump large numbers of commands on devices for short periods of time, you want those commands to be handled as quickly as possible to minimize applications hanging while their IO completes. If you want to observe the maximum number of commands sent to a device then there is a D script in the next post below to do that.
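
You can also get a rough feel for whether you are hitting the throttle with plain iostat; this is not from the original post, but the actv and wait columns it reports are exactly the in-flight and waitq counts discussed above:

# "actv" is the average number of commands outstanding at the device and
# "wait" the average number queued in the driver; a persistently non-zero
# wait column while actv sits at the throttle means commands are backing up
iostat -xn 5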

So the advice for configuring storage is much the same as in the latency bubbles post below: don't set the global [s]sd_max_throttle, set an appropriate throttle for each device that needs one in [s]sd.conf, and consider disabling disksort for array LUNs.

Wednesday Jul 02, 2008

What is the maximum number of commands queued to a LUN?

This is not quite a one-liner, as I'm reusing the code from a previous post to print out the devices in a human-readable form; otherwise it is just a one-liner, and it was when I typed it in.

The question posed here was: what is the maximum number of commands sent to a LUN at any one time? Clearly this will max out at the throttle for the device. However, since the customer had already tuned the throttle down and the problem had gone away, what was interesting was what their configuration was capable of sending to the LUN:

#!/usr/sbin/dtrace -qCs

#define SD_TO_DEVINFO(un) ((struct dev_info *)((un)->un_sd->sd_dev))

#define DEV_NAME(un) \
        stringof(`devnamesp[SD_TO_DEVINFO(un)->devi_major].dn_name) /* ` */

#define DEV_INST(un) (SD_TO_DEVINFO(un)->devi_instance)

fbt:*sd:*sd_start_cmds:entry
{
        @[DEV_NAME(args[0]), DEV_INST(args[0])] = max(args[0]->un_ncmds_in_driver);
}
END {
        printa("%s%d %@d\n", @);
}


This produces a nice list of disk devices and the maximum number of commands that have been sent to each of them at any one time:

# dtrace -qCs  /var/tmp/max_sd.d -n 'tick-5sec { exit(0) }'
sd2 1
sd0 70

# 

Combine that with the D script from the latency bubble posting earlier and you can drill down on where your IO is waiting.

Tuesday Mar 25, 2008

Automatically opening a USB disk on Sun Ray

One of my users had a bit of a hissy fit today when she plugged her USB thumb drive into the Sun Ray and it did nothing. That is, it did nothing visible. Behind the scenes the drive had been mounted somewhere, but there was no realistic way she could know this.

So I need a way to get the file browser to open when the drive is inserted. A quick google finds the '"USB Drive" daemon for Sun Ray sessions', which looks like the answer. The problem I have with it is that it polls to see if there is something mounted. Given that my users never log out, this would mean it running on average every second. Also the 5 second delay just does not take into account the attention span of a teenager.

There has to be a better way.

My solution is to use dtrace to see when the file system has been mounted and then run nautilus with that directory.

The great thing about Solaris 10 and later is that I can give the script just the privilege that allows it to run dtrace without handing out access to the world, and the script can of course then give that privilege away.

So I came up with this script. Save it; mine is in /usr/local, which in turn is a symbolic link to /tank/fs/local. Then add an entry to /etc/security/exec_attr, substituting the correct absolute path (i.e. one with no symbolic links in it) in the line:

Basic Solaris User:solaris:cmd:::/tank/fs/local/bin/utmountd:privs=dtrace_kernel

This gives the script just enough privileges to allow it to work. It then drops the extra privilege so that when it runs nautilus it has no extra privileges.
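
In outline the script does something like this sketch (not the real utmountd; the Sun Ray mount path and the exact nautilus and ppriv invocations here are illustrative assumptions):

#!/bin/ksh
# Sketch only: watch for new mounts and open them in the file browser.
# Assumes removable media appears under /tmp/SUNWut/mnt and that the
# dtrace_kernel privilege has been granted via exec_attr as above.
/usr/sbin/dtrace -q -n '
syscall::mount:entry
{
        printf("%s\n", copyinstr(arg1));        /* directory mounted on */
}' | while read dir
do
        # dtrace, already running, keeps its privilege; drop it from this
        # shell so that nautilus is started without it
        ppriv -s A-dtrace_kernel $$
        case "$dir" in
        /tmp/SUNWut/mnt/*)      nautilus "$dir" & ;;
        esac
done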

Then you just have to arrange for users to run the script when they log in using:

pfexec /usr/local/bin/utmountd

I have done this by creating a file called /etc/dt/config/Xsession.d/utmountd that contains these lines (the trap ensures the daemon is killed when the session script exits):


pfexec /usr/local/bin/utmountd &
trap "kill $!" EXIT

I leave making this work for users of CDE as an exercise for the reader.

Wednesday Feb 27, 2008

Latency Bubbles follow up

Following on from the latency bubbles in your IO posting, I have been asked two questions about that post privately:

  1. How can you map those long numbers in the output into readable entries, e.g. sd0?

  2. How can I confirm that disksort has been turned off?

The first one just requires another glob of D:

#pragma D option quiet

#define SD_TO_DEVINFO(un) ((struct dev_info *)((un)->un_sd->sd_dev))

#define DEV_NAME(un) \
        stringof(`devnamesp[SD_TO_DEVINFO(un)->devi_major].dn_name) /* ` */

#define DEV_INST(un) (SD_TO_DEVINFO(un)->devi_instance)


fbt:ssd:ssdstrategy:entry,
fbt:sd:sdstrategy:entry
{
        bstart[(struct buf *)arg0] = timestamp;
}

fbt:ssd:ssdintr:entry,
fbt:sd:sdintr:entry
/ arg0 != 0 /
{
        this->buf = (struct buf *)((struct scsi_pkt *)arg0)->pkt_private;
}

fbt:ssd:ssdintr:entry,
fbt:sd:sdintr:entry
/ this->buf /
{
        this->priv = (struct sd_xbuf *)this->buf->b_private;
}

fbt:ssd:ssdintr:entry,
fbt:sd:sdintr:entry
/ this->priv /
{
        this->un = this->priv->xb_un;
}

fbt:ssd:ssdintr:entry,
fbt:sd:sdintr:entry
/ this->buf && bstart[this->buf] && this->un /
{
        @l[DEV_NAME(this->un), DEV_INST(this->un)] =
                lquantize((timestamp - bstart[this->buf])/1000000, 0,
                60000, 60000);
        @q[DEV_NAME(this->un), DEV_INST(this->un)] =
                quantize((timestamp - bstart[this->buf])/1000000);
        bstart[this->buf] = 0;
}


The second required a little bit of mdb. Yes, you could also get the same from dtrace, but mdb gives the immediate answer; firstly for all the disks that use the sd driver and then just for instance 1 (a value of 0 for un_f_disksort_disabled means disksort has not been disabled):

# echo '*sd_state::walk softstate | ::print -at "struct sd_lun" un_f_disksort_disabled' | mdb -k
300000ad46b unsigned un_f_disksort_disabled = 0
60000e23f2b unsigned un_f_disksort_disabled = 0
# echo '*sd_state::softstate 1 | ::print -at "struct sd_lun" un_f_disksort_disabled' | mdb -k
300000ad46b unsigned un_f_disksort_disabled = 0

Friday Jan 11, 2008

Latency bubbles in your disk IO

The following was written in response to an email from a customer about monitoring IO, prompted by my scsi.d postings. Tim covers where disk IO requests can be queued in his posting titled “Where can I/O queue up in sd/ssd”, which I would recommend as a starting point.

The disk IO sub-system is built to provide maximum throughput, which is most often the right thing. However the weakness of tuning for throughput is that occasionally you can get some bizarre behaviour when it comes to latency. The way optimum IO bandwidth is achieved is by sorting the queued IO requests by logical block address (LBA) and then issuing them in order to minimize head seeks. This is documented in the disksort(9F) manual page.

So if you have a sequence of writes to blocks N, N+1, N+2, N-200, N+3, N+4, N+5, N+6, N+7 in that order, and your LUN has a queue depth, and therefore throttle, of 2 [1], the IOs will actually be delivered to the LUN in this order: N, N+1, N+2, N+3, N+4, N+5, N+6, N+7, N-200. Hence a significant latency is applied to the IO going to LBA N-200, and in practice it is possible to have IO requests delayed on the waitq for many seconds (I have a pathological test case that can hold them there for the time it takes to perform an IO on nearly every block on the LUN, literally hours). You had better hope that IO was not your important one!

This issue only comes into play once the disk driver has reached the throttle for the device, as up until that point each IO can be passed straight to the LUN for processing [2]. Once the driver has reached the throttle for the LUN it begins queuing IO requests internally and by default will sort them to get maximum throughput. Clearly the lower the throttle, the sooner you get into this potential scenario.

Now for the good news. For most disk arrays sorting by LBA does not make much sense, since the LUN will be made up of a number of drives and there will be a read cache and a write cache. So for these devices it makes sense to disable disksort and deliver the IO requests to the LUN in the order in which they are delivered to the disk driver. If you look in the source for sd.c you will see that we do this by default for most common arrays. To achieve this there is a flag, “disable disksort”, that can be set in sd.conf or ssd.conf depending on which driver is in use. See Michael's blog entry about editing sd.conf. While you are reading that entry, note that you can use it to set the throttle for individual LUNs, so you do not have to set [s]sd_max_throttle, which penalizes all devices rather than just the one you were aiming for if you have just one that only has a small queue depth; you will see below why a small queue depth can be a really bad thing.
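
As a rough illustration (not from the original post; the vendor and product strings are made up, and you should check which sd.conf tunable format your release supports), a per-device entry of the name:value form looks something like this:

# /kernel/drv/sd.conf -- the vendor ID is padded to 8 characters; for the
# ssd driver the equivalent property in ssd.conf is ssd-config-list
sd-config-list = "ACME    SuperArray", "disksort:false, throttle-max:32";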

So how could you spot these latency bubbles?

It will come as no surprise that the answer is dtrace. Using my pathological test case, but with it set to run for only 10 minutes against a single spindle, the following D produces a clear indication that all is not well:

fbt:ssd:ssdstrategy:entry,
fbt:sd:sdstrategy:entry
{
        start[(struct buf *)arg0] = timestamp;
}

fbt:ssd:ssdintr:entry,
fbt:sd:sdintr:entry
/ start[(this->buf = (struct buf *)((struct scsi_pkt *)arg0)->pkt_private)] != 0 /
{
        this->un = ((struct sd_xbuf *)this->buf->b_private)->xb_un;
        @[this->un] = lquantize((timestamp - start[this->buf])/1000000,
                 60000, 600000, 60000);
        @q[this->un] = quantize((timestamp - start[this->buf])/1000000);

        start[this->buf] = 0;
}

This produces the following output [3]; the times are in milliseconds:


dtrace: script 'ssdrwtime.d' matched 4 probes
^C

 

    6597960853440
           value  ------------- Distribution ------------- count    
         < 60000 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 377204   
           60000 |                                         0        
          120000 |                                         0        
          180000 |                                         0        
          240000 |                                         0        
          300000 |                                         0        
          360000 |                                         0        
          420000 |                                         0        
          480000 |                                         2        
          540000 |                                         300      
       >= 600000 |                                         0        

 
    6597960853440
           value  ------------- Distribution ------------- count    
              -1 |                                         0        
               0 |                                         40       
               1 |                                         9        
               2 |                                         6        
               4 |                                         17       
               8 |                                         23       
              16 |                                         6        
              32 |                                         36       
              64 |@@                                       15407    
             128 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@   361660   
             256 |                                         0        
             512 |                                         0        
            1024 |                                         0        
            2048 |                                         0        
            4096 |                                         0        
            8192 |                                         0        
           16384 |                                         0        
           32768 |                                         0        
           65536 |                                         0        
          131072 |                                         0        
          262144 |                                         0        
          524288 |                                         302      
         1048576 |                                         0        

Now recall that my test case is particularly unpleasant, but it demonstrates the point: 300 IO requests took over 9 minutes, and they only actually got to complete as the test case was shutting down, while the vast majority of the IO requests completed in less than 256ms.


Now let's run the same pathological test with disksort disabled:

dtrace: script 'ssdrwtime.d' matched 4 probes
^C

    6597960853440
           value  ------------- Distribution ------------- count    
         < 60000 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 543956   
           60000 |                                         0        


    6597960853440
           value  ------------- Distribution ------------- count    
              -1 |                                         0        
               0 |                                         30       
               1 |                                         21       
               2 |                                         30       
               4 |                                         0        
               8 |                                         0        
              16 |                                         50       
              32 |                                         3        
              64 |                                         384      
             128 |                                         505      
             256 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@  531169   
             512 |@                                        11764    
            1024 |                                         0        

Notice that the majority of the IO requests now took longer, falling in the 256ms bucket rather than the 128ms bucket, but none of the IO requests took many minutes.


Now my test case is pathological, but if you have drives with small queue depths and disksort is still enabled you are open to some quite spectacular latency bubbles. To mitigate this my advice is:

  1. Don't ever set the global [s]sd_max_throttle in /etc/system. Use the [s]sd.conf file to set the appropriate throttle for each device.

  2. Consider what is more important to you: throughput or latency. If it is latency, or if your LUN is on a storage array, then turn off disksort using the [s]sd.conf file.

  3. If you have pathological applications then understand that the IO subsystem can give you throughput or bounded latency, not both. So separate out the IO devices that need throughput from those for which latency is more important.

  4. Be aware that even “dumb” disk drives often implement disksort internally, so in some cases they can give similar issues when they have a queue depth of greater than 2 [4]. In those cases you may find it better to throttle them down to a queue depth of 2 and disable disksort in [s]sd to get the most predictable latency, albeit at the expense of throughput. If this is your issue then you can spot it either by using scsi.d directly or by modifying it to produce aggregations like those above. I'll leave that as an exercise for the reader.


[1] The queue depth of a LUN is the number of commands that it can handle at the same time. The throttle is usually set to the same number and is used by the disk driver to prevent it sending more commands than the device can cope with.

[2] Now the LUN itself may then reorder the IOs if it has more than two in its internal queue.

[3] Edited to remove output for other drives.

[4] With a queue depth of 2 the drive cannot sort the IO requests: it has to have one active and the next one waiting. When the active one completes the waiting one will be actioned before a new command can come from the initiator.

Thursday May 03, 2007

How many disks should a home server have? (I'm sure that was a song.)

My previous post failed to answer all the questions.

Specifically how many disks should a home server contain?

Now I will gloss over the obvious answer of zero, with all your data on the net managed by a professional organisation, not least because I would not trust someone else with my photos, however good they claim to be. Also any self-respecting geek will have a server at home with storage, which at the moment means spinning rust.

Clearly you need more than one disk for redundancy, and you have already worked out that the only sensible choice for a file system is ZFS; you really don't want to lose your data to corruption. It is also reasonable to assume that this system will have 6 drives or fewer. At the time of writing you can get a Seagate 750Gb SATA drive for £151.56 including VAT, or a 320Gb for £43.99.

Here is the table showing the number of disks that can fail before you suffer data loss:

Number of disks   Mirror   Mirror with     RaidZ   RaidZ with   RaidZ2   RaidZ2 with
                           one hot spare           hot spare             one hot spare

2                 1        N/A             N/A     N/A          N/A      N/A
3                 N/A      2*              1       N/A          N/A      N/A
4                 1**      N/A             1       2*           2        N/A
5                 N/A      2*              1       2*           2        3
6                 1**      N/A             1       2*           2        3

* To not suffer data loss, the second drive must not fail while the hot spare is being resilvered.

** This is the worst case of both disks that form a mirror failing. It is possible that you could lose more than one drive and maintain the data.

Richard has some more numbers about mean time before data loss and the performance of various configurations from a more commercial point of view, including a 3 way mirror.

Now let's look at how much storage you get:

Number of disks   Mirror   Mirror with     RaidZ   RaidZ with   RaidZ2   RaidZ2 with
of size X Gb               one hot spare           hot spare             one hot spare

2                 X        N/A             N/A     N/A          N/A      N/A
3                 N/A      X               2X      N/A          N/A      N/A
4                 2X       N/A             3X      2X           2X       N/A
5                 N/A      2X              4X      3X           3X       2X
6                 3X       N/A             5X      4X           4X       3X

The power consumption will be pretty much proportional to the number of drives, as will the noise and the cost of purchase. For the Seagate drives I looked at, the power consumption was identical for the 320Gb and 750Gb drives.

Since my data set would easily fit on a 320Gb disk (at the time of purchase) and that was the most economic choice at that point, I chose the 2-way mirror. Also raidz2 was not available at the time.
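
For completeness, creating that sort of 2-way mirror is a one-liner (just a sketch; the pool and device names here are examples):

# two whole disks in a single mirrored vdev
zpool create tank mirror c1t0d0 c1t1d0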

If I needed the space of 2X or more I would choose RaidZ2, as that gives the best redundancy.

So the answer to the question is “it depends” but I hope the above will help you to understand your choices.


Friday Apr 21, 2006

External usb disk drive

If I had not got caught out again by the M2 hanging during boot this would have been a breeze. Instead I was to struggle off and on for hours before finding the answer on my own blog. Doh.

Anyway I have bought a 160G USB 2.0 disk drive so that I can back up my laptops and have some extra space for things that would be nice to keep but not on the cramped internal drives. It looks like a nice bit of kit in its fanless enclosure.

I plugged the drive in, pointed zpool at the device as seen by volume manager, and I now have a pool that lives on this disk with lots of file systems on it. I can see a need for a script to run zfs backup on each local file system, redirecting the output to a file system on the external box; a sketch of the idea follows the listing below.

1846 # zfs list -r removable
NAME                   USED  AVAIL  REFER  MOUNTPOINT
removable             20.7G   131G  12.5K  /removable
removable/bike        4.88G   131G  4.88G  /removable/bike
removable/nv           137M   131G   137M  /removable/nv
removable/principia   9.50K   131G  9.50K  /removable/principia
removable/scratch        9K   131G     9K  /removable/scratch
removable/sigma       5.14G   131G  9.50K  /removable/sigma
removable/sigma/home  5.14G   131G  9.50K  /removable/sigma/home
removable/sigma/home/cjg  5.14G   131G   597M  /removable/sigma/home/cjg
removable/sigma/home/cjg/pics  4.55G   131G  4.55G  /removable/sigma/home/cjg/pics
removable/sigma_backup   556M   141G   556M  -
removable/users        586M   131G  9.50K  /removable/users
removable/users/cjg    586M   131G   586M  /removable/users/cjg
1847 #
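
That backup script would be something along these lines (a rough sketch only, not a tested tool; it assumes the laptop's local pool is called tank and uses the send subcommand, which replaced the backup spelling in later builds):

#!/bin/ksh
# Snapshot every file system in the laptop's pool (assumed to be "tank"
# here) and dump each stream into a file on the removable pool.
SNAP=backup.$(date '+%Y-%m-%d')
mkdir -p /removable/backups
zfs list -H -o name -t filesystem -r tank | while read fs
do
        zfs snapshot "$fs@$SNAP"
        # one stream file per file system, slashes turned into underscores
        zfs send "$fs@$SNAP" > "/removable/backups/$(echo $fs | tr / _).$SNAP"
done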

Exporting the pool and then reimporting it on the other laptop all works as expected, which is good, and I hope it is going to allow me to do the live upgrade from the OS image on that drive so it does not have to get slurped over the internet twice.

I did overachieve and manage to crash one system, as plan A was to have a zvol for each laptop on the disk and use that as a backup mirror for the internal drive, which could be offlined when not in use. Alas this just hung, and it appears to be a known issue with putting zpools inside zpools. However the talk that USB storage and ZFS don't work together does not appear to be close to the truth.


About

This is the old blog of Chris Gerhard. It has mostly moved to http://chrisgerhard.wordpress.com
