I've built it. But now what?

ZFS on a box like the Sun Fire X4500 is way cool. But what if all you have is old, controller-based storage? George Wilson and I were wondering about that and thought it might be useful to do some experimentation along those lines. So, we collected all of the currently unused storage in my lab and built a big ZFS farm. We've got a V480 with 7 T3B and 8 T3 bricks connected via SANbox-1 switches, along with a couple of TB of IBM Shark storage recabled to be JBOD. I have a 3510 and maybe some Adaptec RAID storage that I can hook up eventually.

So, the server is up and running with 3 racks of storage, keeping the lab nice and toasty. Now what?!

What's the best way to manage the T3s in a ZFS world? As a first pass, I split each brick into 2 RAID5 LUNs with a shared spare drive. Maybe I would be better off just creating a single stripe with no RAID in the T3 and letting ZFS handle the mirroring. On the other hand, I've had a number of disk errors (these drives are all really, really, really old) that the T3 fixed on its own without bothering the ZFS pool, so maybe RAID5 in the brick is the right approach. I could argue either way.
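
To make the comparison concrete, here's a rough sketch of the two pool layouts I'm weighing. The device names are hypothetical; the real cXtYdZ paths come from format(1M) on your own box.

    # Option 1: RAID5 inside each T3 brick; ZFS just stripes across the LUNs.
    # Redundancy lives in the brick, so ZFS can detect corruption via
    # checksums but has no second copy to repair from.
    zpool create tank c2t1d0 c2t1d1 c3t1d0 c3t1d1

    # Option 2: plain stripes in the bricks; ZFS handles the redundancy.
    # Pair each LUN with one from a different brick so losing a whole
    # brick only takes out one side of each mirror.
    zpool create tank mirror c2t1d0 c3t1d0 mirror c2t1d1 c3t1d1

Either way, a quick "zpool status -x" shows whether errors are surfacing at the ZFS layer or being absorbed by the brick.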

Feel free to share your suggestions on what might be a good configuration here and why. I'm happy to test out several different approaches.
