Creating a zpool configuration out of a bunch of F40/80 cards
By user13366125 on May 24, 2014
In some customer situations I'm using a number of Oracle Sun Flash Accelerator F40 or F80 PCIe cards to create flash storage areas inside a server. For example, I had 8 F40 cards in one server: a SPARC M10 with a PCIe Expansion Box, which enables you to connect up to 11 F40/F80 cards per expansion box.
A configuration with 8 F80 cards is something I use only on very special occasions and for special purposes; in this case it was a customer's self-written application that needed a lot of flash storage inside the server. I won't disclose more. On the other hand, I quite frequently size systems with two F80 cards for "separate ZIL" purposes. Whether you use the SSDs as data storage or as a separate ZIL: when you do mirroring, you have to ensure that no mirror has both halves on the same card.
From the system's perspective you see four disk devices per F40/F80 card, with 100 GB or 200 GB capacity per disk respectively, so you can just add them to your zpool configuration. However, configuring the system was a bit impractical. The problem: it's not that easy to create a configuration that ensures no mirror has its two vdevs on a single F40/F80 card. Perhaps there is an easier way, but I haven't found one so far.
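To make the pairing rule concrete, here is a sketch of what a correct layout could look like with two cards. The disk names (c0t0d0 and so on) and the pool name are invented for illustration; the only point is that each mirror takes one half from each card:

```shell
# Hypothetical example: two F80 cards with four disks each.
# Assume c0t0d0-c0t3d0 sit on card 1 and c0t4d0-c0t7d0 on card 2
# (device names made up for illustration).
# Every mirror pairs a disk from card 1 with a disk from card 2,
# so a whole card can fail without breaking any mirror:
zpool create flashpool \
  mirror c0t0d0 c0t4d0 \
  mirror c0t1d0 c0t5d0 \
  mirror c0t2d0 c0t6d0 \
  mirror c0t3d0 c0t7d0
```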
It's a little bit hard to find acceptable disk pairs when you are looking at PCI trees like
/devices/pci@8000/pci@4/pci@0/pci@8/pci@0/pci@0/pci@0/pci@1/pci@0/pci@1/pciex1000,7e@0/iport@80:scsi. Well, with two cards it's not that hard, but it's still not a nice job. After doing this manually a few times, I figured that with 8 or 22 cards, doing it by hand is a job for people who have killed baby seals, listen to Justin Bieber, or done equally horrible things.
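The useful observation in those paths is that everything up to the SAS controller node (the `pciex1000,7e@0` part) identifies the card, while only the trailing `iport@…` portion differs between the four disks on one card. A minimal sketch of that grouping idea, using invented sample paths modeled on the format above (on a real system you would feed in something like the output of `ls -l /dev/rdsk/*s2` instead of the here-document):

```shell
#!/bin/sh
# Sample input: "device-path disk-name" pairs. The paths are shortened,
# invented examples in the style shown above, not real system output.
cat <<'EOF' > /tmp/paths.txt
/devices/pci@8000/pci@4/pci@0/pci@8/pciex1000,7e@0/iport@80:scsi c0t0d0
/devices/pci@8000/pci@4/pci@0/pci@8/pciex1000,7e@0/iport@40:scsi c0t1d0
/devices/pci@8000/pci@4/pci@0/pci@9/pciex1000,7e@0/iport@80:scsi c0t2d0
/devices/pci@8000/pci@4/pci@0/pci@9/pciex1000,7e@0/iport@40:scsi c0t3d0
EOF
# Strip the per-disk /iport@... suffix so the remaining prefix
# identifies the card, then collect the disks per card prefix:
awk '{ card = $1; sub(/\/iport@.*$/, "", card); print card, $2 }' /tmp/paths.txt |
  awk '{ disks[$1] = disks[$1] " " $2 } END { for (c in disks) print c ":" disks[c] }'
```

Once the disks are grouped per card like this, building mirrors that straddle two different cards becomes a simple matter of pairing entries from two different groups.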
But I haven't committed any such crimes, and this problem is nothing that a little shell-fu can't solve. You can do it in a single line of shell. Well ... a kind of single line of shell.