Thursday Jan 05, 2012

New ZFSSA firmware release is available in MOS


In case you have not been paying attention, the new 2011.1.1.0 software release for the ZFSSA is out and available for download inside the My Oracle Support website.

To find it, go to the "Patches & Updates" tab, and then do the advanced family search. Type in "ZFSSA" and it will take you right to it (choose 2011.1 in the next submenu).

You need to have your systems on 2010.Q3.2.1 or greater in order to upgrade to 2011.1.1, so be prepared.

It also includes a new OEM grid control plug-in for the ZFSSA.

Here are some details about it from the readme file: 

Sun ZFS Storage Software 2011.1.1.0 (ak-2011.): This major software update for Sun ZFS Storage Appliances contains numerous bug fixes and important firmware upgrades. Please carefully review all release notes below prior to updating.
Seven separate patches are provided for the 2011.1.1.0 release:


This release includes a variety of new features, including:

  • Improved RMAN support for Oracle Exadata
  • Improved ACL interoperability with SMB
  • Replication enhancements - including self-replication
  • InfiniBand enhancements - including better connectivity to Oracle Exalogic
  • Datalink configuration enhancements - including custom jumbogram MTUs
  • Improved fault diagnosis - including support for a variety of additional alerts
  • Per-share rstchown support


This release also includes major performance improvements, including:

  • Significant cluster rejoin performance improvements
  • Significant AD Domain Controller failover time improvements
  • Support for level-2 SMB Oplocks
  • Significant zpool import speed improvements
  • Significant NFS, iSER, iSCSI and Fibre Channel performance improvements due to elimination of data copying in critical datapaths
  • ZFS RAIDZ read performance improvements
  • Significant fairness improvements during ZFS resilver operations
  • Significant Ethernet VLAN performance improvements

Bug Fixes

This release includes numerous bug fixes, including:

  • Significant clustering stability fixes
  • ZFS aclmode support restored and enhanced
  • Assorted user interface and online help fixes
  • Significant ZFS, NFS, SMB and FMA stability fixes
  • Significant InfiniBand, iSER, iSCSI and Fibre Channel stability fixes
  • Important firmware updates

Wednesday Dec 28, 2011

New Storage Eye Charts

My new Storage Eye Chart is out. You can get it from the bookmark link on the right-hand side of this page.

Version 10 adds the Axiom and 2500M2 on a new page, and also brings the ZFSSA entries up to date with the latest releases.

I hope everyone out there has a very happy New Year. See you in January. 

Tuesday Dec 06, 2011

New SSDs announced today

Thought you should know about the 3 new announcements for the ZFSSA.

--Write flash cache SSDs have gone from 18GB to 73GB each.
--New long-range transceivers for the 10GigE cards are now available.
--3TB drives for the 7120 model are here today. The 3TB drives for the 7320 and 7420 are NOT here yet, but close.

Effective December 6, 2011, we are pleased to announce three new options for Oracle’s Sun ZFS Storage portfolio:
1. Availability of a 73GB Write Flash Cache for the 7320 and 7420. This new SSD features 4X the capacity and almost double the write throughput and IOPS performance of its predecessor. In comparison to the current 18GB SSD, this new 73GB SSD significantly enhances the system write speed. As an example, a recent test on a particular 7420 system demonstrated a 7% improvement in system write performance while using half the number of SSDs. The 73GB SSD is also available to our customers at a lower list price point. This is available as an ATO or X Option.
2. Availability of the standard Sun 10 GbE Long Range Transceiver for the current 1109A-Z 10GbE card as a configurable option for ZFS Storage Appliance.  This Long Range Transceiver enables 10 GbE optical connectivity for distances greater than 2 KM.
3. Availability of a new 7120 base model featuring integrated 11 x 3TB HDDs and a 73GB Write Flash Cache.  (Note that availability of the 3TB drive is limited to the 7120 base model internal storage only – it is not available in the disk shelves at this time.)

Additionally, we are announcing End-of-Life for the following two items:
1. 2TB drive-equipped base model of the 7120, with a Last Order Date of December 31, 2011.
2. 18GB Write Flash Cache, with a Last Order Date of January 10, 2012.

Tuesday Nov 08, 2011

Mobile app for Oracle Support

So many of you use MOS, and like to track your service tickets, etc.

Did you know that there are mobile apps for both the iPhone and the Droid that allow you to interface with MOS on the go?

Check this out:

**Update: I have a Droid, and it seems the MOS link is only on the iPhone app, not the Droid app. At least, I sure can't seem to find it on mine. Disappointing. I will let everyone know if I find it or when it becomes available on the Droid.

Wednesday Oct 26, 2011

VDEV - What is a VDEV and why should you care?

Ok, so we can finally talk VDEVs. Going back to my blog on disk calculations, I told you how the calculator works, and the way you can see how many drive spindles you would have for any particular RAID layout. Let's use an example of nine trays of 24 drives each, using 1TB drives.

 Yes, I know we no longer offer 1TB drives, but this is the graphic I had, so just roll with me. Now, if we were setting this up in the ZFSSA BUI, it would look like this:

 So that's all great and it all lines up, right? Well, the one thing the BUI doesn't show very well is the VDEVs. You can figure it out in your head if you know what you're doing, but the calculator can do it for you if you just add the "-v" option right after the .py command in the python string. Doing that for the above example will give you back this:

Notice the new column for VDEVs. Cool. So now I can see the breakdown of Virtual Devices that each type of RAID will create out of my physical devices (spindles). In this case, my nine trays of 24 spindles give me 216 physical devices.
-If I do something silly and make that a 'Stripe', then I would get 1 big virtual device made up of 216 physical devices.
-I could also make it a 'Mirror', which will give me 106 virtual devices, each made up of 2 physical devices.
-A RAIDz1 pool will give me 53 virtual devices, each with 4 physical devices to make my 3+1 stripes.
-Finally, for the sake of this conversation, a RAIDz2 choice will give me only 15 VDEVs, each with 14 physical drives that make 12+2 stripes. You don't get 14 data drives, you get 14 drives per stripe, so you need to remember that 2 of those are parity drives in a RAIDz2 stripe when you calculate your usable space.
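If you want to sanity-check those numbers without the calculator handy, the arithmetic is simple enough to sketch out yourself. Here's my own back-of-envelope version in Python (NOT Ryan's tool): it just divides spindles by stripe width, and it ignores the spares the real engine reserves, which is why it comes back slightly high in places (108 mirrors instead of 106, 54 RAIDz1 VDEVs instead of 53).

# Back-of-envelope vdev counts for 216 spindles. Stripe widths are the
# layouts described above; the real calculator also carves out spares,
# so its numbers come back slightly lower.
SPINDLES = 216

profiles = {
    "Stripe":        216,  # one big vdev, all spindles
    "Mirror":          2,  # 2-way mirrors
    "RAIDz1 (3+1)":    4,
    "RAIDz2 (12+2)":  14,
    "RAIDz3 (50+3)":  53,  # wide stripes
}

for name, width in profiles.items():
    print(f"{name:14} -> {SPINDLES // width:3} vdevs of {width} drives each")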

Now, why do you care how many VDEVs you have? It's all about throughput. Very simply stated, the more VDEVs you have, the more data can be pushed into the system by the greatest number of users at once. Now, that's very simplistic, and it really depends on your workload. There are exceptions, as I have found, but for the most part, more VDEVs equal better throughput for small, random IO. This is why a Mirrored pool is almost always the best way to set up a high-throughput pool for small, random IO such as a database. Look at all the VDEVs a mirrored pool gives you.

Think of it this way: say you have a 256K block of data you want to write to the system, using a 128K record size. With a mirrored pool, ZFS will split your 256K block into 2 blocks of 128K each, and send them down to exactly 2 of the VDEVs to write out to 4 physical drives. Now, you still have a whopping 104 other VDEVs not doing anything, and they could all be handling other users' workloads at the same time. Take the same example using a RAIDz1 pool. ZFS will again break up your 256K block into two 128K chunks and send them to 2 VDEVs, each with 4 physical drives, with each data drive of the 3+1 stripe getting about 43K. That's all fine, but while those 8 physical drives are working on that data, they can't do anything else, and you only have 51 other VDEVs to handle everyone else's workload.
As an extreme example, let's check out a RAIDz3 wide-stripe pool. You only get 4 VDEVs, each with 53 drives in a 50+3 stripe. Writing that same 256K block with a 128K record size will still split it over 2 VDEVs, and you only have 2 left for others to use at the same time. In other words, it takes the IOPS of 106 physical spindles to write that one stupid 256K block, while in the Mirrored pool it would have only taken the IOPS of 4 physical spindles, leaving you with tons of IOPS to spare.
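Putting rough numbers on that (again, just my own illustration, using the VDEV counts from the calculator above):

# How many spindles one 256K write (two 128K records) ties up per profile,
# and how many vdevs are left over for everyone else. Illustration only.
vdev_counts = {"Mirror": (106, 2), "RAIDz1": (53, 4), "RAIDz3 wide": (4, 53)}

for name, (vdevs, width) in vdev_counts.items():
    busy = 2 * width   # two vdevs take the two 128K records
    print(f"{name:12}: {busy:3} spindles busy, {vdevs - 2} vdevs still free")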

Make sense?

Like I said, Mirroring is not always the best way to go. I've seen plenty of examples where we choose other pools over Mirrored after testing. That is the key. You need to test your workload with multiple pool types before picking one. If you don't have that luxury, make your best, educated guess based on the knowledge that in general, high throughput random IO does better with more VDEVs, and large, sequential files can do very well with larger stripes found in RAIDz2. 

As a side note, we recommend the RAIDz1 pool for our Exadata backups to a ZFSSA. After testing, we found that, yes, the mirrored pool did go a bit faster, but not enough to justify the drop in capacity. We also found that the RAIDz1 pool was about 20% faster for backups and restores than the RAIDz2 pool, so the extra capacity of RAIDz2 didn't justify giving up that speed. Now, some people may disagree and say they don't care about capacity, they want the fastest no matter what, and go with Mirrored even in this scenario. That's fine, and that's the beauty of the ZFSSA, where you are allowed to experiment with many choices and options and choose the right balance for your company and your workload.

Have fun. Steve 

Thursday Oct 20, 2011

Shadow Migration

Still not talking about VDEVs? I know, I know, but hey, there's only so many hours in a day, folks, and I do have a life... So something came up this week and I want to talk about Shadow Migration, instead.

Now, built into the ZFSSA you have both Replication and Shadow Migration. Be sure to use the right one for the right job. Replication is used from one 7000 family system to a different 7000 system. This is important: it can NOT be used between two clustered controllers of the same system. That will mess you up. It is only for other 7000s, and cannot replicate to anything other than another ZFSSA. ***UPDATE: This is no longer the case. Replication inside the same system between two clustered controllers has been supported since October 2012.

Shadow Migration, on the other hand, is really handy for migrating data from any non-ZFSSA NFS source (think a filer made by someone other than Oracle), or even from a different pool between controllers on the SAME clustered ZFSSA system. This can be very cool when you have an important share in one pool, and you want to move it (and the data inside it) to a different pool. Maybe it's because you want it on your RAIDz2 pool instead of your Mirrored pool. Maybe it's because you want ControllerA in charge of the share, but it got made months ago, by mistake, in the pool owned by ControllerB. I don't care; you just want data from some share, either local to the system or from an NFS share on a different system, to come over into a brand-new share in some pool. Maybe you want to suck in the data from an older, non-Oracle filer, but you know it will take a while, and you want people to still be able to get to the data while the migration is taking place.

Great. That's Shadow Migration. It can get data from both a local source (another share of the same system) or from any NFS mount from anywhere. While the migration is taking place, the original source turns read-only, and users start to mount and use your new share being created. If the data being requested by a user has not been migrated over yet, the ZFSSA will go get it, while continuing to migrate in the background.
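If it helps to picture the mechanics, here is a toy sketch of that read-through behavior in Python. This is purely an illustration of the idea, not appliance code; plain dicts stand in for the read-only source and the new target share.

# Conceptual sketch of shadow migration's read-through behavior.
source = {"fileA": b"data still sitting on the old filer"}   # read-only original
target = {}        # the brand-new share being populated
migrated = set()   # the background migration threads fill this in over time, too

def read_file(name):
    """Serve a client read mid-migration, fetching from the source on demand."""
    if name not in migrated:
        target[name] = source[name]   # pull this file over right now
        migrated.add(name)
    return target[name]               # everything else keeps copying in background

print(read_file("fileA"))   # the user sees the data even though migration isn't done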

Here's how to do a Local Shadow Migration, moving data from a share in one pool to another pool on the same system.

1. Check out the Shadow Migration Service. Under Services, one can change how many threads the background service will use to do the migration. Make sure the green light is on here, while you're at it. **Update: I have been told that our internal team took this down from 8 to 2 for our large (13PB) migrations from various older filers to new 7000s for our Oracle data center. Oracle IT and our Oracle DC is now 100% ZFSSA. 

2. I have a share called Share1A, inside Pool1, which is a mirrored pool. Note that I have about 85MB of stuff in it.
Be careful NOT to choose the replication area from here, or at all, from anywhere. You're not doing replication, remember? 
Do not confuse replication with shadow migration.

3. Now, I don't want that data inside pool1, I really want it in Pool2, which is a RAIDz1 pool. So, switch to Pool2, and create a brand-new share, just like normal.
Change pools with the Pools drop-down in the upper left, then click the plus sign.

 4. Now, in the new Share box, first choose the pool you want the new share to be in, and then be sure to choose "LOCAL" as your data migration source.
Instead of typing in the path to some external NFS share, you will type in the local path of another share on the same system, in this case it's "/export/Share1A"

5. Now it gets cool. Check out my new Shadow1 share. As the migration begins (right away), you will see the progress bar here on the left. You can actually stop it, and even change the source from here, mid-stream (although that would be strange and I don't think I would recommend that).  ***Update: To be fair, it was explained to me that this process may take a while to start. The process may have to read a large amount of metadata before you see the bar move. If you have very large directories in the share, especially at the top, then be patient.

6. When the migration is done (The Local version should go quite quickly), the Shadow Migration section goes away, and you will get an alert message on the top of the screen like this:

7. Also, you can view some Shadow Migration specific Analytics while it's running:

8. Now that it's done, I have 2 shares. My original Share1A, and my new Shadow1 in a different pool with the same data copied over.
I could now delete the first share or pool in order to rebuild the pool a different way. Or, if this was a migration from an older filer, I could re-purpose that filer as a nice planter in my garden.

Wednesday Oct 12, 2011

ARC- Adaptive Replacement Cache

I know, I know, I told you I was going to talk about the very important VDEVs next, but this other article came up in another blog, and it’s a rather good read about the ZFSSA cache system, called our ARC, or Adaptive Replacement Cache.

So, if you want to learn more about the ARC in a ZFSSA, go check it out. Our ARC has two levels. Level 1 ARC is our RAM. Almost the entire RAM in a ZFSSA is used for data caching, and that’s the ARC, or L1ARC. Now, we go further by having an L2ARC. Once RAM is full, our L2ARC can hold even more cache by using any Readzillas you have in the system. That’s right; our Readzilla SSDs are the L2ARC. We use SSDs for cache, not as storage space. (Logzillas, on the other hand, are for fast synchronous write acknowledgements, and have nothing to do with the ARC at all.)

So a 7420 with 512GB of memory and four Readzillas has about a 500GB L1ARC and a 2TB L2ARC to work with as an Adaptive Replacement Cache. 500GB of that 2.5TB of space runs at nanosecond speed, while 2TB of it runs at microsecond speed. Still much faster than the millisecond speed you get when you have to pull data off a hard drive.
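To see why that tiering matters, here's a quick back-of-envelope calculation. The hit ratios are made-up illustration numbers, and the latencies are just the rough orders of magnitude from above (RAM in nanoseconds, SSD in microseconds, disk in milliseconds), not measured ZFSSA figures.

# Weighted average read latency across the cache tiers. All inputs are
# assumptions for illustration only.
l1_hit, l2_hit = 0.80, 0.15                        # assumed hit ratios
lat_ram, lat_ssd, lat_disk = 100e-9, 100e-6, 5e-3  # seconds per access

avg = l1_hit * lat_ram + l2_hit * lat_ssd + (1 - l1_hit - l2_hit) * lat_disk
print(f"Effective read latency: {avg * 1e6:.0f} microseconds")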

So Cache is cool, and it’s nice to have a high cache hit ratio, and it’s easier to have a high cache hit ratio if you have more cache, right? With the new, lower priced Readzillas, this should be easier to do.

Now, this other blog I’m pointing you to calls our cache something slightly different, but don’t worry about it; in the Oracle ZFSSA world we use the name “Adaptive Replacement Cache”.

Go check out:

Ok, VDEVs will come next!



Tuesday Oct 11, 2011

Where can you find info on updates?

Someone asked where one could find info on what is updated in each update. Once you download any update from MOS, there is a readme file inside of it with this info.

However, if you want to see the readme file first, go here:

Thursday Oct 06, 2011

New ZFSSA code release today - 2010.Q3.4.2

A new code release went up on MOS today. We are now on code 2010.Q3.4.2 (ak-2010.).

Our minimum recommended version is still 2010.Q3.4.0, but if you have the time and opportunity to upgrade to this new Q3.4.2 release, it would be a very good idea. It includes many minor bug fixes. You can view the readme file it comes with to see what it includes.

Download it under the "Patches & Updates" tab in My Oracle Support.

Tuesday Oct 04, 2011

How to calculate your usable space on a ZFSSA

So let’s say you’re trying to figure out the best way to setup your storage pools on a ZFSSA. So many choices. You can have a Mirrored pool, a RAIDz1, RAIDz2, or RAIDz3 pool, a simple striped pool, or (if you’re REALLY anal) you can even have a Triple Mirrored pool.

How can you choose which pool to make? What if you want more than one pool on your system? How much usable space will you have when it’s all done?

All of these questions can be answered with Ryan Mathew’s Size Calculator. Ryan made a great calculator a while back that allows one to use the ZFSSA engine to give you back all sorts of pool results. You simply enter how many disk trays you have, what size drives they are, how many pools you want to make, and the calculator does the rest. It even shows you a nice graphical layout of your trays. Now, it’s not as easy as a webpage, but it’s not too bad, I promise. It’s a python script, but don’t let that scare you. I never used Python before I got my hands on this calculator, and it was worth loading it up for this. First, you need to go download and install Python 2.6 here: Make sure you have 2.6 installed, as the calculator will not work with the newer 3.0 Python. In fact, I had both loaded, and had to completely uninstall 3.0 before it would work with my installed 2.6.

Now, get your hands on the Size Calc script. Ryan is making a new one that is for the general public. It will be out soon. In the meantime, ask your local Oracle Storage SC to do a calculation for you.

This is a copy from Ryan’s, but I fixed a few things to make it work on my Windows 7 laptop. If you’re not using Windows 7, you may find Ryan’s original blog and files here:

So now you’re ready. Go to a command line and get to the Python26 directory, where you have also placed the calculator’s .py script.

Type the name of the script, followed by “ZFSipaddress password 20”.
Use your ZFSSA for the IP address and your root password for the password. You can use the simulator for this. Remember, the simulator is the real code and has no idea it's not a 'real' system.

Mine uses my simulator’s IP address and the password “changeme”, followed by 20. Now, you will see the calculator present a single tray with 20 drives, and all the types of pools you can make with that.

So now, make it bigger. Along with the first tray that has 20 drives (because of the Logzillas, right?), we also want to add a 2nd and a 3rd tray, each full with 24 drives. So end the command with “20 24 24” instead. You could do this all day long. Notice that now you have some extra choices, as the NSPF (no single point of failure) pools are now allowed, since you have more than two trays.

That’s it for the basics. Pretty simple. Now, we can get more complicated. Say you don’t want one big pool, but want to have an active/active cluster with two pools. Use “10/10 12/12 12/12” instead.

This will create two even pools. They don’t have to be even. Check this out. I want to make two pools, one with the first 2 disk trays (with 8 Logzillas) plus half of full trays 3 and 4. So the second pool would only be the other half of trays 3 and 4. I used “20/0 20/0 12/12 12/12”.

Here’s the last one for today. Say you already have a 2-disk-shelf system with 2 pools, and you set it up with “10/10 12/12”. Simple. Now, you go out and buy another tray of 24 drives, and you want to add 12 drives to each pool. You can use the “add” command to add a tray onto an existing system. It’s very possible that adding a tray will give you different results than if you had configured 3 trays to begin with, so be careful. This is a good example: note that you get different results if you do “10/10 12/12 12/12” than if you do “10/10 12/12 add 12/12”.
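While we're waiting on Ryan's public version, the basic usable-space math is easy enough to approximate yourself. Here's a crude sketch of my own (NOT Ryan's engine; it ignores spares, Logzillas, and ZFS overhead, so treat the output as ballpark only):

# Ballpark usable capacity: vdevs * data drives per stripe * drive size.
def usable_tb(spindles, drive_tb, data_drives, stripe_width):
    return (spindles // stripe_width) * data_drives * drive_tb

drives, size_tb = 68, 1.0   # e.g. trays of 20 + 24 + 24 one-TB drives
print("Mirror       :", usable_tb(drives, size_tb, 1, 2), "TB")
print("RAIDz1 (3+1) :", usable_tb(drives, size_tb, 3, 4), "TB")
print("RAIDz2 (12+2):", usable_tb(drives, size_tb, 12, 14), "TB")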

Our next lesson will be about VDEVs. When you add the “-v” option right after the script name, you may notice a new column in the output called “VDEVS”. These are the most important aspect of your pool. It’s very important to understand what these are, how many you need, and how many you have.

It’s so important, I’m going to save it for another blog topic. Have a great day!!!! :)

Monday Oct 03, 2011

New SPC benchmark for the 7420

Oracle announced today a great new benchmark on SPC (Storage Performance Council) for our 7420. Instead of re-writing everything already written, please go see this excellent blog entry by Roch:

It explains the new results and why they're so cool.

Go to the SPC website to see the results. Scroll down to the "O" section for Oracle; the 7420 result is the first one.

Wednesday Sep 21, 2011

What happened to Steve's blog???

You may have noticed that I had an entry last week after our announcement came out, and that entry is no longer here.

Yes, sometimes I jump the gun and say things I'm not supposed to, and get slapped down. :)  Most of what I spoke about was very public knowledge, but Oracle wanted to save some of it for Oracle Open World, which is coming up the first week of October.

So I'll be able to speak about everything much more freely after OOW. If you attend OOW, you will hear about it. A lot. All very good stuff and great news for our customers. 

The piece that I am allowed to talk about, since many new quotes have gone out to our customers since September 13th when this all happened, is that our list prices for the ZFSSA (7000) hardware have been dramatically reduced. Our clients are getting new quotes and seeing a nice difference from quotes made before the 13th. This is very good news for not only people new to the 7000, but also for those looking to expand their current ones. There are some new drive types to choose from, as well. Ask your local storage rep or SC about them.

Friday Sep 16, 2011

Resetting your 7420 password

So, you have a 7420 demo or test system, and either forgot which password you used or it came from another department or company and it still has a password.

You're up a creek, right?

No, there is a way to fix it. We had to do this with a demo box that was wiped clean of all data, but still had a non-standard password on it. We could have sent it back for the demo pool engineers to fix, but then we would have had to wait a week or more to start testing. So here is a document I made with the steps I took to fix both the ILOM password and the Fishworks Appliance Kit password on this 7420.

Disclaimer--- Yes, physical security is VERY IMPORTANT. If someone can touch your system, they can do this. Of course, if someone can touch your system, they can also unplug it, remove the drives, or un-rack it and take it, couldn't they?

Monday Sep 12, 2011

New Q3.4.1 code out now for the 7000 ZFSSA family

Code release Q3.4.1 is available now on MOS for download.

This is very exciting news, even if you don't need to upgrade. Why? Because Q3.4.1 contains NO bug fixes. The only thing it does is allow your system to use the new drive trays with 15K-speed drives.

You can mix and match drive trays in the same system, and create a new, faster ZFS pool using the 15K drives in the same system you already have. You do not want to mix the high-capacity, slower drives with these in the same pool, however. Make sure they are in different pools to be most effective. Not everyone needs these faster drives to drive their performance, as the ZFSSA really tries to drive performance with its huge amount of L1 and L2 cache. However, some people run workloads that either fill up the cache or bypass the Logzillas during writes altogether, and the faster 15K spindles will matter a lot to them.


Tuesday Aug 02, 2011

Important - Clean your OS drives before you update your systems.

Some folks out there are not reading the readme file before they update, and are running into trouble. Remember the "Important Safety Tip" line in Ghostbusters about not crossing the streams? This is kind of like that.

Have you cleaned up your OS drives lately?

It's important. You should do this every now and then, but especially before you do a big update, such as the update to Q3.4 (which may have you running multiple updates, as explained in my previous post).

You want to remove system updates that are getting old. You may want to keep the previous system software in order to roll back, but do you really need the one before that and the one before that? Are you really going to roll back your 7000 to the system code you used in September 2010? I doubt it. Don't be a hoarder. Delete it. Call Dr. Drew if you need to.
Then, let's check your analytics datasets. These can get big before you know it. If you leave them running all the time, even when you don't need to collect data, you're asking for trouble. Either run them only when you need to collect data (you can do this manually or via a script or alert), or for goodness' sake export them as a .CSV file at the end of each month and delete them off the system so they don't get too large. There have been reports of problems with systems once these files got much too large. Part of those issues have been addressed in Q3.4, but you still need to keep it clean.

Very bad things will happen if you let your OS drives fill up. Think of all the atoms of your body moving away from each other at the speed of light. Furthermore, when you do an update, the update will go much more smoothly if you clean up the OS drives, first.
See the readme section below, which I'm sure you forgot to read...

The following section comes from the Q3.4 readme file:

1.4 System Disk Cleanup

Remove any old unused Analytics datasets, especially any over 2G in size. The following command can be used to list the datasets.

node:> analytics datasets show

dataset-000 active 745K 2.19M arc.accesses[hit/miss]
dataset-001 active 316K 1.31G arc.l2_accesses[hit/miss]
dataset-002 active 238K 428K arc.l2_size
dataset-003 active 238K 428K arc.size
dataset-004 active 1.05M 2.80M arc.size[component]
dataset-005 active 238K 428K cpu.utilization

Also remove old support bundles or old software updates. It's important that the system disk has at least 20 to 25 percent free space. The following commands can be used to compare the amount of free space to the system disk size. In this case, 289G free / 466G size = 0.62 or 62% free space, which is reasonable. If you have trouble getting enough system disk free space call Oracle Support.

node:> maintenance system disks show
                       profile = mirror
                          root = 4.44G
                           var = 14.3G
                        update = 1.12G
                         stash = 1.38G
                          dump = 131G
                         cores = 29.7M
                       unknown = 15.0G
                          free = 289G


DISK        LABEL      STATE
disk-000    HDD 1      healthy
disk-001    HDD 0      healthy

node:> maintenance hardware select chassis-000 select disk select disk-000 show
                         label = HDD 0
                       present = true
                       faulted = false
                  manufacturer = SEAGATE
                         model = ST95000NSSUN500G
                        serial = 9SP123EQ
                      revision = SF03
                          size = 466G
                          type = data
                           use = system
                        device = c2t0d0
                     interface = SATA
                        locate = false
                       offline = false
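To turn those numbers into the percentage the readme asks about, the arithmetic is just free space divided by system disk size. A quick one-off of my own (not an appliance command):

# Free-space sanity check using the figures from 'maintenance system disks show'.
free_gb, size_gb = 289, 466
pct = free_gb / size_gb
print(f"{pct:.0%} free")   # the readme wants at least 20-25% before updating
assert pct >= 0.20, "Clean up the system disk before upgrading!"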

Thursday Jul 28, 2011

ASR- Automated Service Request - AKA "Phone Home"

Many of you know that the 7000 family (and pretty much every piece of hardware Oracle sells) has a built-in feature called ASR, or Automated Service Request.

In the past, you may have called this "Phone Home" or some other name, but basically it's all the same. Something goes wrong on the box, and a signal is sent to Oracle to alert us that there's an issue. Now, this isn’t magic. YOU have to set up ASR on the box correctly or this will not work. The 7000 walks you through this setup during the initial install of the box, but if you skip it, you can always go back and set it up later. Just go to Configuration, Services, Phone Home.

Would you like to know what issues DO send a signal? This website has documentation for Oracle ASR, and at the bottom of the page you’ll see docs for “Fault Coverage Information” for a variety of products, including our 7000 & 6000 storage families. The 7000 is obvious, and the 6000 & 2000 families are under the one titled “Common Array Manager (CAM)”.

Now, once you open up the 7000 document, you will see about 15 pages of issues that will create an ASR if they occur (and ASR is set up properly). The many links inside the document are only useful if you are either an Oracle employee or have a proper MOS login account. If you don’t, and you would like more info on one, please speak to your friendly, neighborhood Oracle storage SC, and they’ll be happy to look some up for you.

It’s pretty important to set up ASR, if you haven’t already. Not only will the system be able to quickly let us know when there’s a major problem, but it also generates a heartbeat with our Oracle support team and gives them monthly status updates about your system. There is a comprehensive privacy statement that goes along with this, and Oracle’s lawyers are pretty good at assuring you that this is safe and no private data is collected. I can show you the actual data collected in these reports, if you like. They are very useful. Not only can your local SC use these reports to help you plan for firmware updates and storage capacity use, but they will also, in the future, be able to automatically inform you when important bug fixes or system updates come out. If you’re not on ASR, you’re on your own: you will most probably miss many of these updates, and your local team will not see your systems in the status reports, so they will not even know you have a 7000 out there for them to help you with.

Monday Jul 25, 2011

Are you looking for the software updates?

For those of you still not used to finding your way around MOS (My Oracle Support), here are the patch numbers you can use to search for the 7000 ZFSSA patch you are looking for:

12780572 - This is the latest ZFSSA release, version 2010.Q3.4 - from July 22, 2011.
You want this.

11887647 - This is version 2010.Q3.2.1 - This is the minimum version you need to be on in order to then upgrade to Q3.4

12727815 - This is the free 7000 plug-in for OGC (Oracle Grid Control) that allows one to monitor multiple 7000s from OGC. It's small enough that I have provided it here.

12736304 - This is the 7000 NFS plug-in patch for Solaris Cluster version 1. You need this if you have a cluster server running Oracle Solaris and you want to NFS to a 7000. You load this on the cluster, not on the 7000. It's small enough that I have provided it here.

Saturday Jul 23, 2011

2010.Q3.4 software release- July 2011

The newest software release for the ZFSSA, 2010.Q3.4, has been released. You can download it in the My Oracle Support "Patches & Updates" section.

Check out the release notes for this update here:

This is a pretty important update. It fixes an issue with the SIM card on the disk shelves. (The SIM card is the card that connects the disk shelf to the controller via the SAS2 port. It is a FRU and needs this update to stop the "white light special" issue that stops the SIM after a set number of hours. We have been un-seating and re-seating the cards to get around this up until this fix).

Two things to consider:
1. You have to be at code level 2010.Q3.2.1 or higher
2. The SIM firmware update takes about 4 minutes per disk shelf, so a 4-tray system will take about 16 minutes to update the SIM firmware.

Plan your upgrades accordingly. If you are many versions back, you may have to do more than one upgrade to be at the proper level.


Friday Jul 08, 2011

Oracle DB high availability paper for ZFSSA

I read a great whitepaper today on best practices for snapshots and clones of an Oracle database on a ZFSSA.

If you use Oracle software, and are looking at the ZFSSA to store it, you want to check out this paper.  

**Sorry, I had the wrong link... Fixed... Thanks to whomever pointed that out**


Tuesday Jun 21, 2011

Older SAS1 hardware Vs. newer SAS2 hardware

I got a question today from someone asking about the older SAS1 hardware from over a year ago that we had on the older 7x10 series. They didn't leave an email so I couldn't respond directly, but I said this blog would be blunt, frank, and open so I have no problem addressing it publicly.

A quick history lesson here: when Sun first put out the 7x10 family hardware, the 7410 and 7310 used a SAS1 backend connection to a JBOD that had SATA drives in it. This JBOD was not manufactured by Sun, nor did Sun own the IP for it. When Oracle took over, they had a problem with that, and I really can’t blame them. The decision was made to cut off that JBOD and its manufacturer completely and use our own, where Oracle controlled both the IP and the manufacturing. So in the summer of 2010 the cut was made, and the 7410 and 7310 had a hardware refresh: they now had a SAS2 backend going to a SAS2 JBOD with SAS2 drives instead of SATA.

This new hardware had two big advantages. First, there was a nice performance increase, mostly due to the faster backend. Even better, the SAS2 interface on the drives allowed for a MUCH faster failover between cluster heads, as the SATA drives were the bottleneck on the older hardware. In September 2010 there was a major refresh of the rest of the 7000 hardware, the controllers and the other family members, and that’s where we got today’s current line-up of the 7x20 series. So the 7x20 has always used the new trays, and the 7410 and 7310 have used the new SAS2 trays since July 2010.

Now for the bad news. People who have a 7410 or 7310 from BEFORE the July 2010 cutoff have the models with SAS1 HBAs in them to connect to the older SAS1 trays. Remember, that manufacturer cut all ties with us and stopped making the JBOD, so there’s just no way to get more of them; they don’t exist. There are some options, however. Oracle support does support taking the SAS1 HBAs out of the old 7410 and 7310 and putting in newer SAS2 HBAs, which can talk to the new trays. Hey, I didn’t say it was a great option, I just said it’s an option. I fully realize that you would then have a SAS1 JBOD full of SATA drives that you could no longer connect. I do know a client that did this, and took the SAS1 JBOD, connected it to another server, formatted the drives, and is using it as a plain, non-7000 JBOD. This is not supported by Oracle support.

The other option is to just keep it as-is, as it works just fine, but you just can’t expand it. Then you can get a newer 7x20 series, and use the built-in ZFSSA replication feature to move the data over. Now you can use the newer one for your production data and use the older one for DR, snaps and clones.

New price reduction on some ZFSSA parts

For those of you keeping track, you may have noticed a price reduction today on the ZFSSA Readzillas, Logzillas, and spinning drives. Ask your friendly neighborhood Sales Consultant for a new quote.



Wednesday Jun 08, 2011

Upgrade to Q3.3.1 notes -

Ok, so there is a good reason why you folks want to upgrade. These upgrades fix some great bugs that other clients may have found and that you've just been lucky enough not to hit yet. Another reason is that at some point, when you DO want to upgrade, you may be too far behind to upgrade directly to the newest version. Check out this screenshot. In trying to upgrade to Q3.3.1, the update informs me that I won't be able to do this until I upgrade to Q3.2.0.

Just something to be aware of, so you can plan for additional time if you need to upgrade twice during your maintenance window.



Monday Jun 06, 2011

New Q3.3.1 code- DID YOU KNOW?

Did you know you can also upgrade your 7000 Simulator to the new 2010.Q3.3.1 code that came out last week?
Remember, your 7000 simulator is the real software of a ZFSSA. The only thing being simulated is the hardware.

If you're having a hard time finding the download in MOS, when you get into MOS, just click on the "Patches & Updates" tab, and type 12622199 in the "Patch Name or Number" search box. That will take you right to it. Thanks to Jon B for suggesting this tip.


Friday Jun 03, 2011

New ZFSSA 7000 code release today

The new ZFSSA code was released today, 6-3-11.

  1. Sign in to My Oracle Support at
  2. Select the "Patches & Updates" tab.
  3. Search by Sun ZFS Storage Appliance product family or by Patch ID (see table below for patch IDs).
  4. Download the zip file to your local system and unzip.
  5. The ak-nas-2010-08-17-3-1-1-1-28-nd.pkg.gz update file and license files are expanded into the All_Supported_Platforms directory.

You can see the release notes here:


This micro release contains significant bug fixes for all supported platforms. Please carefully review the list of CRs that have been addressed and all release notes below prior to upgrading.

This release contains SAS-2 HBA and SAS-2 Disk Shelf Log Device firmware updates for SAS-2 based 7310, 7310C, 7410 and 7410C appliances and all 7120, 7320, 7320C, 7420, 7420C and 7720 appliances. The SAS-2 HBA and the SAS-2 Disk Shelf Log Device firmware will be eligible for firmware updates during the first boot of the appliance after upgrading to this release. The firmware updates are summarized in the following table:

Device            Vendor                  Product ID                     Description            Old Firmware  New Firmware
SAS-2 HBA         Sun Microsystems, Inc.  Dual 4x6Gb External SAS-2 HBA  Dual Port SAS-2 HBA    1.09.02       1.09.03
SAS-2 Log Device  STEC                    ZeusIOPs                       18GB SAS-2 Log Device  9002          9004

The SAS-2 HBA firmware update will take approximately 1 minute per SAS-2 HBA. Each SAS-2 Disk Shelf Log Device will take approximately 1 minute to update the firmware. The total firmware update time is dependent on the number of devices being updated. For example, an appliance with 12 SAS-2 Disk Shelves and 16 Log Devices may take 16 minutes to upgrade the firmware. Upgrades may also take longer when the appliance is under load. It is important that customers postpone administrative operations such as cluster failback, reboot, or power down until the system has updated all device firmware. For more information on firmware updates and information on how to monitor them following the first boot after upgrade, refer to the Maintenance:System:Updates Hardware Firmware Updates section of the Customer Service Manual or online help.
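If you want to rough out your own maintenance window from those rates, the math is simple. A sketch using the roughly-one-minute-per-device figures quoted above; the HBA count here is an assumption (check your own config), and real updates can run longer under load:

# Rough firmware-update window: ~1 minute per SAS-2 HBA plus ~1 minute
# per disk-shelf log device, per the readme's estimates.
hbas = 2            # assumed; substitute your own count
log_devices = 16    # the readme's example appliance
print(f"Estimated window: ~{hbas * 1 + log_devices * 1} minutes (longer under load)")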

This release requires appliances to be running the 2010.Q3.2.1 micro release prior to upgrading to this release. In addition, this release includes upgrade healthchecks that are performed automatically when an upgrade is started prior to the actual upgrade from the prerequisite 2010.Q3.2.1 micro release. If an upgrade healthcheck fails it can cause an upgrade to abort. The upgrade healthchecks help to ensure component issues that may impact an upgrade are addressed. Release specific documentation is provided below to help with upgrades and upgrade healthchecks. Please carefully review it prior to performing an upgrade to 2010.Q3.3.1. It is important to resolve all hardware component issues prior to performing an upgrade.

Wednesday Jun 01, 2011

Monitoring Multiple 7000s with OGC

Did you know that one can monitor and get alerts from multiple Oracle ZFSSA (7000 family) systems down to a single pane of glass?
Did you know you could also monitor the Oracle 6000 family, as well as a huge variety of other systems?
Did you know it would not cost you anything to do this?
I bet you didn't.

OGC is a no-cost subset of Oracle Enterprise Manager. There are a large variety of Management Agents available for it, many at no cost. The 7000 and 6000 agents are free. The Oracle database needed is considered an "infrastructure database": a separate Oracle Database that can be installed and used as an OEM Grid Control repository without additional license requirements, provided that all the targets (databases, applications, and so forth) managed in this repository are correctly licensed. This database may also be used for the RMAN repository. It may not be used or deployed for other uses.


Once you're set up, you can monitor not only the metrics from the 7000 analytic screens, but also the 7000 alerts from multiple 7000s. Very handy indeed.


More info about:

7000 Plug-in for OGC:
Other Grid Control Plug-ins:
Oracle DB Editions:


How to - Make your own QR Codes

Did you know you can make your own QR codes that can take someone to a website or give their phone contact info?
Use your Droid or iPhone's "Barcode Scanner" app to scan this code to find out how I made it. By the way, they seem to work better if the phone is vertical.


Tuesday May 31, 2011

7000 Software Update Matrix

7000 Software updates. You can find these both in MOS,
or at
You should be on 2010.Q3.2.1 right now.
New ones coming in next few weeks, so pay attention.

Oracle ZFSSA 7000 Series

These patches can be found in MOS.
Don't even start with me, MOS is easy once you get the hang of it.

 Date        Patch #   Firmware     Type      Code in BUI
 12/23/2010  10379795  2010.Q3.1.1  Minor     2010-08-17-1-1-1-1-16
 12/28/2010  10435589  2010.Q3.2    Minor     2010-08-17-1-1-1-1-18
 2/9/2011    10378803  2010.Q3      Plug-ins  NA - extra software for VSS and OEM
 3/23/2011   11887647  2010.Q3.2.1  Minor     2010-08-17-2-1-1-1-21 - Current release
 June 2011   unk       2010.Q3.2.2  Minor     various bug fixes
 June 2011   unk       2011.Q1      Major     Major release coming

Some of the new features in 2011.Q1 will include: Active Directory domain controller hot failover, SMB level 2 Oplocks, ZFS enhancements, Replication enhancements, iSCSI and FC enhancements, Datalink configuration enhancements







7000 ZFS Storage Appliance - General info and websites

The ZFSSA is the current name of the Oracle 7000 family of products. You can see the public page for them here:

There are just a huge number of websites, white papers, and blogs regarding this product and the technology that makes it tick: Solaris 10 and 11, and ZFS. The following are just a few of them to help get you started.

Software updates:
You need to check here, as the ZFSSA appliance kit gets updated with a minor update about every 3 months and a major update about twice a year. Believe me, you want these. They will either fix bugs that you may have seen (or have not seen yet and don't want to), or give you access to fantastic new features. These will be at no cost to you, as with all software on the ZFSSA, there are no licenses or costs for the software. Very cool.

BigAdmin is a great site to find and download whitepapers on lots of topics, including many for the 7000. Like best practices for setting up SharePoint Server with the 7000.

Power Calculator:
This link takes you to the 7420 calculator, but you can walk the links up at the top to take you to many others

Oracle Enterprise Manager Plug-in:
Did you know you could monitor and get alerts from multiple 7000s using OEM? Oh, and did you know the plug-in was free? Oh, and did you know there is no cost to run OEM and the Oracle database needed to run it, as long as you only use it for OEM???? DID YOU? DID YOU??? I bet you didn't....

How to boot from SAN over FC from a 7000:
Yep, of course you can do that.

Workflow and script depot:
I'm afraid this one is Oracle Internal only, but important for my fellow storage consultants to have. This is Chris' Script Depot.

7000 documentation and training guide :
Sorry, this is another Oracle-internal website

7000 Simulator download -
This is really not a simulator, but the actual, real software for a 7000, just as if you bought the real box. The only thing being simulated in VirtualBox (free) is the hardware that the software is running on. It thinks it's a real 7000; it's just running on your PC, so it's slow. It can do anything the real box can, even replicate to another 7000. The only parts you won't be able to play with are clustering and the Hybrid Storage Pool, as I doubt your laptop has a mix of Readzillas, Logzillas, and SAS2 drives. After you install and set this up, be sure to go get the latest software update and upgrade it, just like it's a real box. Because remember, it thinks it is.


How to reset passwords on your 7000 if you want to start from scratch.

This blog is a way for Steve to send out his tips, ideas, links, and general sarcasm. Almost all of it relates to the Oracle 7000, also known as the ZFSSA, Amber Road, Open Storage, or Unified Storage. You are welcome to contact me with any comments or questions.

