After getting started by importing an Oracle Solaris image into your Oracle Cloud Infrastructure (OCI) tenancy and launching an instance, you'll likely want to use other OCI resources to run your applications. In this post, I'll provide a quick cheat sheet for using the storage resources: block volumes and file storage. I'm not going to cover the basics of creating these objects in OCI, which the documentation covers well. This post just shows how to do the Solaris-specific things that the OCI console will otherwise tell you only for Linux guests.
Once you've created a block volume and attached it to your Solaris guest in the OCI console, you still need to do a small amount of work on the Solaris guest to let it see the storage you've attached. The OCI console will display the Linux iSCSI commands; you just need to translate them to the Solaris equivalents. Here's an example of the Linux commands:
sudo iscsiadm -m node -o new -T iqn.2015-12.com.oracleiaas:de785f06-2288-4f2f-8ef7-f0c2c25d6144 -p 169.254.2.2:3260
sudo iscsiadm -m node -o update -T iqn.2015-12.com.oracleiaas:de785f06-2288-4f2f-8ef7-f0c2c25d6144 -n node.startup -v automatic
sudo iscsiadm -m node -T iqn.2015-12.com.oracleiaas:de785f06-2288-4f2f-8ef7-f0c2c25d6144 -p 169.254.2.2:3260 -l
While the Solaris command for these tasks is also iscsiadm, it uses a different command-line design, so we have to translate. Using the IQN string and the address:port string from the first command above, we define a static target and then enable static discovery:
sudo iscsiadm add static-config iqn.2015-12.com.oracleiaas:de785f06-2288-4f2f-8ef7-f0c2c25d6144,169.254.2.2:3260
sudo iscsiadm modify discovery --static enable
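If you want to check the configuration before looking for the device, you can list the static-discovery setting and the configured target. A quick sketch (the IQN is the one from the example above; your output details will differ):

```shell
# Verify that static discovery is enabled
sudo iscsiadm list discovery

# List the configured target and its connection state
sudo iscsiadm list target iqn.2015-12.com.oracleiaas:de785f06-2288-4f2f-8ef7-f0c2c25d6144
```

If the target shows as connected, the LUN should appear to the next device scan.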
At this point, the device will be available and can be seen in the format utility:
opc@instance-20180904-1408:~$ sudo format </dev/null
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c2d0 <QEMU HARDDISK-QM00001-48.83GB>
          /pci@0,0/pci-ide@1,1/ide@1/cmdk@0,0
       1. c3t0d0 <ORACLE-BlockVolume-1.0-1.00TB>
          /iscsi/disk@0000iqn.2015-12.com.oracleiaas%3Ade785f06-2288-4f2f-8ef7-f0c2c25d61440001,1
And now it's trivial to create a ZFS pool on the storage:
opc@instance-20180904-1408:~$ sudo zpool create tank c3t0d0
opc@instance-20180904-1408:~$ sudo zpool status tank
  pool: tank
 state: ONLINE
  scan: none requested
config:

        NAME      STATE     READ WRITE CKSUM
        tank      ONLINE       0     0     0
          c3t0d0  ONLINE       0     0     0

errors: No known data errors
Since the iscsiadm command automatically persists the configuration you enter, the storage (and the ZFS pool) will still be visible after the guest is rebooted.
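The persistence works in the other direction too: if you later want to detach the volume in the OCI console, you should first release it on the Solaris side. A sketch of the cleanup, assuming the pool and target from the example above:

```shell
# Stop using the storage first (export the pool, or destroy it if the
# data is no longer needed)
sudo zpool export tank

# Remove the static target configuration so Solaris stops reconnecting
sudo iscsiadm remove static-config iqn.2015-12.com.oracleiaas:de785f06-2288-4f2f-8ef7-f0c2c25d6144,169.254.2.2:3260
```

After that, detaching the volume in the console won't leave the guest retrying a dead iSCSI session.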
The OCI file storage service provides a file store that's accessible over NFS, and it's fully interoperable with clients using NFSv3. Once you've created a file system and associated a mount target, the OCI console will show you the commands to install the NFS client, create a mount point directory, and then mount the file system. The Solaris images we've provided already include the NFS client, so all you need to do is create a mount point directory and mount the file system. The Solaris commands for those tasks are the same as on Linux, so you can directly copy and paste those from the OCI console. To make the mount persistent, you'll either need to add the mount entry to /etc/vfstab or configure the automounter. I won't provide a tutorial on those topics here, but refer you to the Solaris documentation.
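Putting the pieces together, here's a sketch of the mount and the matching /etc/vfstab entry. The mount-target address (10.0.0.5) and export path (/myfilesystem) are placeholders; substitute the values the OCI console shows for your file system:

```shell
# Create the mount point and mount the file system over NFSv3
sudo mkdir -p /mnt/myfilesystem
sudo mount -F nfs -o vers=3 10.0.0.5:/myfilesystem /mnt/myfilesystem

# To make the mount persistent, add a line like this to /etc/vfstab
# (fields: device to mount, device to fsck, mount point, FS type,
#  fsck pass, mount at boot, options; '-' where not applicable for NFS):
#
# 10.0.0.5:/myfilesystem  -  /mnt/myfilesystem  nfs  -  yes  rw,vers=3
```

Note that on Solaris the filesystem type is selected with -F rather than Linux's -t; otherwise the mount invocation is essentially the same as the one the console displays.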