Thursday Dec 12, 2013

Cluster tricks & tips

Most of us have clustered ZFSSAs, and have been frustrated at one time or another with getting the proper resource to be owned by the proper controller.

I feel your pain, and believe me, I have to deal with it as much or even more than you do. There are, however, some cool things you can do here and it will make your life easier if you fully understand how this screen works. 

First, understand this: never push the 'Takeover' button. That's right, I said never. That button is mislabeled. Now, yes, we have two heads here and they're both in the "Active" state as you see here. This means you cannot click the "Failback" button, which is how we move resources to the head we wish to own them. You are only allowed ONE Failback, when a head is in the "Ready for Failback" state, as it is when it first comes up. We have already hit Failback on this system, so both heads are now Active. That's it. You're done until one reboots. Do NOT hit the 'Takeover' button.


Do NOT hit the 'Takeover' button. That button should really be labeled "Panic the other controller", but those were too many words to fit on the button, so they called it Takeover. Panic is exactly what it does. Sure, since the other head is now panicking, this head will take over all of the resources and the other head will reboot. But this is one of the worst ways to reboot the other head. It's not nice. It does not flush the cache first. It's actually slower than the other way. Don't do it.

Instead, for a clean and faster reboot, log into the controller you want to reboot, and click the power button:

This allows you to reboot it gracefully, flushing the cache first, and it actually comes back up faster than it does after a panic.

Now that it has rebooted, which may take 5-15 minutes, the good controller's cluster screen should show that it's "Ready for Failback". Be certain all of your resources are set to the proper owner, and then hit the "Failback" button to move the resources and change both controllers to the "Active" state. REMEMBER--- You only get to hit the Failback button ONCE!!! So take your time and do all of your config and setup and get the ownership right before you hit it. Otherwise, you will be rebooting one of your controllers again. Not a huge deal, but another 15 minutes of your life, and perhaps a production slowdown for your clients.
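By the way, you can do the same check and failback over SSH if you'd rather use the CLI than the BUI. Here's a minimal sketch; the hostname prompt is made up, the lines starting with # are just my notes, and I'm going from memory on the exact output, so lean on the built-in help if your code level looks different:

   head-a:> configuration cluster show
   # Look at the state and peer_state properties. You want to see the peer
   # sitting in the ready-for-failback style state before you go any further.
   head-a:> configuration cluster failback
   # Same one-shot rule as the BUI button. Once both heads are Active,
   # failback is no longer available until one of them reboots again.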


Now for a trick. There's nothing I can do to help you with the network resources. If they are on the wrong controller, you may have to reboot one and fix it and do a failback. However, if you have a storage pool on the wrong controller, I may be able to show you something cool.  The best thing to remember and do is this: Create the resource (network or pool) ON the controller you wish to be the owner in the first place!!! Then, it will already be owned by the proper one, and you don't have to do a failback at all. However, what if, for whatever reason, you need to move a pool to the other controller and you MUST NOT reboot a controller in order to move it using the Failback process? In other words, you have an Active-Active setup, the Failback button is grayed out, and it's very important that you change the ownership of a storage pool but you are not allowed to reboot one of the controllers?

Bummer, right? Not so fast, check this out. 

So here I have a system with two pools, Rotation and Bob, both on controller A. The Bob pool is supposed to be on controller B. They are both Active, so I can not click Failback. I would normally have to reboot head B to fix this. But I don't want to.

So I'm going to unconfigure the Bob pool here on controller A. That's right, unconfigure. This does NOT hurt your data. Your data is safe as long as you do NOT create a new pool in that space. We're not going to create a new pool. We're going to IMPORT the Bob pool on controller B. All of your shares, LUNs, and their properties will be perfectly fine. There is only one hiccup, which we will talk about.

Go to Configuration-->Storage, select the correct pool (Bob), and then click "Unconfig". 
But first, I want you to look carefully at the info below the pie chart here. Note that Bob currently has 2 Readzilla cache drives in it. This is important.

You will get this screen. Take a deep breath and hit apply.

No more Bob. Bob gone. Not really. It's still there and can be imported into another controller. This is how we safely move disk trays to new controllers, anyway. No big deal.

So, now go log into the OTHER controller. Don't do this on the same one or else you'll have to start all over again. 
Here we are on B. DO NOT click the Plus Sign!!!! That will destroy your data!!!!
Click the IMPORT button.

The Import button will go out and scan your disk trays for any valid ZFS pools not already listed. Here, it finds one called "bob". 

Select it and hit "Commit". There, the Bob pool is back. All of its shares and LUNs will be there, too. The "Rotation" pool shows Exported because it's owned by the "A" controller, and the Bob pool is owned here on B. 

We can go to Configuration-->Cluster and see all is well and Bob Pool is indeed owned by the controller we wanted, and we never had to reboot!

However, we have one big problem.... Did you notice that when you imported the Bob pool into controller B, the cache drives did NOT come over?
It now has zero cache drives. What did you expect? The cache drives are the Readzillas inside the controller itself. They can't move over just because you changed the owner.
No problem.
I have 2 extra Readzillas in my B controller not being used. So all I have to do is add them to the Bob pool.
Go back to Configuration-->Storage on the B controller. Select the Bob pool and click "ADD". Do NOT click the plus sign. This is different.

I can now add any extra drives to the Bob pool. In this case, I don't have anything I could possibly add other than these two Readzillas inside controller B. So pretty easy.

Once added, I'm all good. I now have the Bob pool, with cache drives, being serviced on controller B with no reboot necessary.
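For you CLI folks, the same unconfig/import dance lives under the configuration storage context. I did this whole exercise in the BUI, so take the sketch below as a rough outline from memory, not gospel; the unconfig, import, and add verbs are my assumptions for the CLI names of the BUI buttons, so check the help in that context on your code level first:

   # On controller A, the current owner: select the Bob pool and unconfigure it
   controller-a:> configuration storage
   controller-a:configuration storage> set pool=Bob
   controller-a:configuration storage> unconfig
   # On controller B: scan the trays for existing pools, like the Import button
   controller-b:> configuration storage
   controller-b:configuration storage> import
   # Then add controller B's internal Readzillas to the pool, like the ADD button
   controller-b:configuration storage> add

Either way, BUI or CLI, the data in the pool is never touched; only the ownership changes.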

That's it.

By the way, you know you cannot remove drives from a pool, right? We can only add. This includes SSDs like Logzillas and Readzillas.
Well, I kind of just showed you a way you CAN remove readzillas from a pool, didn't I? Hmmmmmm.....

Tuesday Dec 10, 2013

Upgrading to AK8.1 (2013.1.1.0)

Ok, so AK8.1 has some cool new features, one of which is the ability to have block or record sizes on your shares larger than the current 128K limit. You can now have 256K, 512K, and 1M. This is important for some great performance boosts on different types of workloads. 

This feature is a deferred update in AK8.1. In other words, just upgrading to AK8.1 does not turn on this feature. Also, after you apply the deferred update, you will NOT be able to roll-back to a previous version. So test and check before you apply.

Here is my screen after the upgrade. Note the new "Deferred Update" section that appears.

If you click the "More info" button, you will see the following help file:

 So, before I apply the deferred update, if I go into a share, I will see the large block sizes in the pull-down but note that they are greyed out.

 After I apply the update, which does NOT require a reboot and is very quick, the same menu now has black entries that I can choose.
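You can flip the record size from the CLI, too. A minimal sketch, assuming a project called "proj1" and a share called "share1" (both made-up names), and assuming the share property is called recordsize on your release; use 'get' on your own box to confirm before trusting me:

   zfssa:> shares select proj1 select share1
   zfssa:shares proj1/share1> set recordsize=1M
   zfssa:shares proj1/share1> commit
   # Before the deferred update has been applied, the appliance should
   # reject anything bigger than 128K for this property.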

Monday Dec 09, 2013

New code out today- AK8.1 or 2013.1.1.0

The first minor release of code 2013.1 is now out and can be downloaded in MOS.

It is 2013.1.1.0, or AK8.1

Along with some bug fixes, it has three main new features:

Support for 2-port 16Gbps Fibre Channel HBA Target and Initiator (backup)

The drivers for supporting the 8300 Series adapters are available in 2013.1.1.0. In the ZFS Storage Appliance, they support SAN traffic at line-rate 16Gbps Fibre Channel speeds with extremely low CPU usage and full hardware offloads. This extreme performance eliminates potential I/O bottlenecks in today's powerful multiprocessor, multicore servers.

 

Large Block Size (1M) Support

Enable support for block/record sizes bigger than 128K (256K, 512K, and 1M) for filesystems or LUNs. The implementation for this includes a deferred update, detailed in the Deferred Updates and Remote Replication Compatibility with Large Block/Recordsize Update sections below.

 

SPA Sync Concurrency

In order to better utilize the performance of high-speed storage devices, such as SSDs, improvements have been made to ZFS's algorithm for committing transaction groups. Specifically, the Storage Pool Allocator (SPA) sync process has been improved to parallelize some operations so that it spends a larger percentage of time writing data to the pool devices.

Friday Nov 01, 2013

VNIC - New feature of AK8 - Working with VNICs

One of the important new features of the AK8 code is the ability to use multiple IP addresses on the same physical network port. This feature is called VNICs, or Virtual NICs. This allows us to no longer "burn" a whole port in a cluster when one cluster peer owns a network port. Traditionally, we have had to leave Net0 empty on controller 2, because it was used for managing controller 1. Vice versa for Net1 on controller 1. Then, if you have data going over 10GigE ports, you probably only had half of your ports running at any given time, and the partner 10GigE port on the other controller just sat there, doing nothing, unless the first controller went down.

What a waste. Those days are over. 

I want to thank and give a big shout-out to our good partner, OnX Enterprise Solutions, for allowing me to come into their lab and play around with their 7320 to do this demo. They let me make a big mess of their lab for the day as I played around with VNICs. If you're looking for a partner who knows Oracle well and can also piece together a solution from multiple vendors to get you what you need, OnX is a good choice. If you would like to talk to your local OnX rep, you can contact Scott Gill at Scott.Gill@Onx.com and he can point you in the right direction for your area. 

Here we go:

Here is what your Datalinks window looks like BEFORE you upgrade to AK8.

Here's what the same screen looks like after you upgrade. See the new box?

So here is my current network setup. I have my 4 physical interfaces setup each with an IP address. If I ping them, no problems. 

So I can ping 180, 181, 251, and 252. However, if I try to ping 240, it does not work, as the 240 address is not being used by any of these interfaces, right?
Let's change that.

Here, I'm going to make a new Datalink by clicking the Datalink "Plus sign" button. I will check the VNIC box and tell it to use igb2, even though another interface is already using it.

Now, I will create a new Interface, and choose "v_dl2" for its datalink.

My new network screen looks like this.
A few things to take note of here. First, when I click the "igb2" device, it only highlights dl2 and int2. It does not highlight v_dl2 or v_int2.
I think it should, but OK, it looks like VNICs don't highlight when you click the device. 
Second, note how the underscore character in v_dl2 and v_int2 does not seem to show on this screen. You can see it plainly if you go in and edit them, but from here it looks like a space instead of an underscore. Just a cosmetic bug, but something to be aware of.

Now, if I click the VNIC datalink "v_dl2", on the other hand, it DOES highlight the device it belongs to, as it should. Seen here:

Note that it did not, however, highlight int2 with it, even though int2 is connected to igb2. That's because we clicked v_dl2, which int2 has nothing to do with. So I'm OK with that.

So let's try pinging 240 now. Of course, it works great.
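By the way, I did all of this in the BUI, but you can build the same thing over SSH. Here's roughly what the CLI version looks like. I'm going from memory here, so treat the vnic sub-command and the property names (label, links, v4addrs) as assumptions, verify them with tab-completion or help on your AK8 box, and obviously substitute your own address for my made-up one:

   # Create the VNIC datalink on top of igb2, even though igb2 already has a datalink
   zfssa:> configuration net datalinks vnic
   zfssa:configuration net datalinks vnic (uncommitted)> set label=v_dl2 links=igb2
   zfssa:configuration net datalinks vnic (uncommitted)> commit
   # Now put an IP interface on top of the new VNIC datalink
   zfssa:> configuration net interfaces ip
   zfssa:configuration net interfaces ip (uncommitted)> set label=v_int2 links=v_dl2 v4addrs=192.168.1.240/24
   zfssa:configuration net interfaces ip (uncommitted)> commit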

 So I now make another VNIC, and call it v_dl3 using igb3, and v_int3 with an address of 241. I then setup three shares, using ports 251, 240, and 241.
Remember that IP 251 and 240 both are using the same physical port of igb2, and IP 241 is using port igb3.

Next, I copy a folder full of stuff over to all three shares at the same time. I have analytics going so I can see the traffic. My top chart is showing the logical interfaces, and the bottom chart is showing the physical ports.
Sure enough, look at the igb2 and vnic1 interfaces. They equal the traffic going over the igb2 physical port on the second chart. VNIC2, on the other hand, gets igb3 all to itself.

This would work the same way with 10Gig or Infiniband ports. You can now have multiple IP addresses and even completely different subnets sharing the same physical ports. You may need to make route table entries for that. This allows us to use all of the ports you paid for with no more waste. 

Very, very cool. 

One small "bug" I found when doing this. It's really not a bug, it was designed to do this when VNICs were not around. But now that we have NVIC capability, they should probably change this. I've alerted the engineering team about this and they're looking into it, so perhaps it will be fixed in a later code.

Here it is. Remember when we made the new VNIC datalink, I specifically said to click on the "Plus Sign" button to create it? I don't always do that. I really like to use the drag-and-drop method to create my datalinks in the network screen.
HOWEVER, if you were to do that for building a VNIC, it will mess you up a little. Watch this.

Here, I'm dragging igb3 over to make a new datalink. igb3 is already being used by dl3, but I'm going to make this a VNIC, so who cares, right?

Well, the ZFSSA does not KNOW you are going to make it a VNIC, now does it? So... it works as designed and REMOVES the igb3 device from the current dl3 datalink in the background. See how it's now missing? At the same time, the dl3 datalink choice is missing from my list of possible VNICs for me to choose from!!!! Hey!!! I wanted to pick dl3. Why isn't it on the list??? Well, it can't be on this list because dl3 no longer has a device associated with it. Bummer for you.

When you click cancel, the device is still missing from dl3.

The fix is easy. Just edit dl3 by clicking the pencil button, do absolutely nothing, and click "Apply". The device will magically come back.

Now, make the VNIC datalink by clicking the "Plus Sign" button. Sure enough, once you check the VNIC box, dl3 is a valid choice. No problem.

 That's it for now. Have fun with VNICs.


Wednesday Oct 23, 2013

Replication with AK8

Hello folks,

This came up today and I want to make sure it's clear.

Remember the "deferred update" I spoke about in my "Upgrade to AK8" entry just a bit ago? It's important to understand that this deferred update changes the way replication works. It is necessary that systems with the deferred update applied only replicate with other systems that have also had this deferred update applied. So if you apply it, your system can NOT replicate with ANY other system that has NOT had it applied, even if that other system is running AK8!!! Got it???

Remember, we do have a new version of the 2011 code for the older systems that do not want to upgrade to AK8. This 2011.1.8 code ALSO HAS this same deferred update in it. So, if you upgrade your system to AK8, and then apply the deferred update, and you have another system running either 2011.1.8 or AK8, you can replicate with them again once they apply the deferred update for multiple initiator groups. Yes, even if you're not using LUNs. Here is what it looks like if you try to replicate with a system that has not applied the deferred update. It will fail.

Wednesday Oct 16, 2013

OS8- AK8- The bad news...

Ok I told you I would give you the bad news of AK8 to go along with all the cool new stuff, so here it is. It's not that bad, really, just things you need to be aware of.

First, the 2013.1 code is being called OS8, AK8 and 2013.1 by different people. I mean different people INSIDE Oracle!! It was supposed to be easy, but it never is. So for the rest of this blog entry, I'm calling it AK8.

AK8 is not compatible with the 7x10 series. Ever. The 7x10 series is not supported with AK8, and if you try to upgrade one, it will fail at the healthcheck.

All 7x20 series, all of them regardless of age, are supported with AK8.

Drive trays. Let's talk about drive trays and SAS cards. The older drive trays for the 7x20 series were called the "Riverwalk 2" or "DS2" trays. They were technically the "J4410" series JBODs that Sun used to sell a la carte before we stopped selling JBODs. Don't get me started on that, it still makes me mad. We used these for many years, and you can still buy them right now until December 15th, 2013, when they will no longer be sold. The DS2 tray only came as a 4u, 24 drive shelf. It held 3.5" drives, and you had a choice of 2TB, 3TB, 300GB or 600GB drives. The SAS HBA in the 7x20 series was called a "Thebe" card, with a part # of 7105394. The 7420, for example, came standard with two of these "Thebe" cards for connecting to the disk trays. Two Thebe cards could handle up to 12 trays, so one would add two more cards to go to 24 trays, or have up to six Thebe cards to handle 36 trays. This card was for external SAS only. It did not connect to the internal OS drives or the Readzillas, both of which used the internal SCSI controller of the server.

These Riverwalk 2 trays ARE supported with AK8. You can upgrade your older 7420 or 7320, no problem, as-is. The much older Riverwalk 1 trays or J4400 trays are NOT supported by AK8. However, they were only used by the 7x10 series, and we already said that the 7x10 series was not supported.

Here's where it gets tricky. Since last January, we have been selling the new style disk trays. We call them the "DE2-24P" and the "DE2-24C" trays. The "C" tray is for capacity drives, which are 3.5" 3TB or 4TB drives. The "P" trays are for performance drives, which are 2.5" 300GB and 900GB drives. These trays are NOT Riverwalk 2 trays, even though the "C" series may kind of look like it. Different manufacturer and different firmware. They are not new. Like I said, we've been selling them with the 7x20 series since last January. They are the only disk trays we will be selling going forward. Of course, AK8 supports them.

So what's the problem? The problem is going to be for people who have to mix drive trays.

Remember, your older 7x20 series has Thebe SAS2 HBAs. These have 2 SAS ports per card.  The new ZS3-2 and ZS3-4 systems, however, have the new "Thebe2" SAS2 HBAs. These Thebe2 cards have 4 ports per card. This is very cool, as we can now do more SAS channels with less cards. Instead of needing 4 SAS cards to grow to 24 trays like we did with the old Thebe cards, I can now do 24 trays with only 2 Thebe2 cards. This means more IO slots for fun things like Infiniband and 10G. So far, so good, right? These Thebe2 cards work with any disk tray. You can even mix older DS2 trays with the newer DE2 trays in the same system, as long as you have Thebe2 cards.

Ah, there's your problem. You don't have Thebe2 cards in your old 7420, do you? Well, I told you the bad news wasn't that bad, right? We can take out your Thebe cards and replace them with Thebe2. You can then plug your older DS2 trays right back in, and also now get newer DE2 trays going forward. However, it's important that the trays are on different SAS channels. You can mix them in the same system, but not on the same channel. Ask your local SC if you need help with the new cable layout. By the way, the new ZS3-2 and ZS3-4 systems also include a new IO card called "Erie" cards. These are for INTERNAL SAS to the OS drives and the Readzillas. So those are now SAS2 instead of SATA like the older models. Yes, the Erie card uses an IO slot, but that's OK, because the Thebe2 cards allow us to use less SAS HBAs to grow the system, right?

That's it. Not too much bad news and really not that bad. AK8 does not support the 7x10 series, and you may need new Thebe2 cards in your older systems if you want to add on newer DE2 trays. I think we can all agree that there are worse things out there. Like our Congress.  

Next up.... More good news and cool AK8 tricks. Such as virtual NICS. 

Friday Oct 11, 2013

Do you want to upgrade to AK8 (2013.1) right now?

Ok, so you will hear some great stuff about AK8, but are you going to upgrade your production system to a new major release right after it comes out? Probably not. If you have a test system or a lab system you can play with, then I highly recommend upgrading it so you can start to see the new performance features that AK8 can give you. If you only have one system, or they're all in production, then of course you're going to wait for the first minor release of the new code, aren't you? I would too. I'm told the first minor is coming out in just a few weeks. It is the release they used for the public benchmark performance testing. So you can feel more confident in that release. You may also be able to talk to your local sales team about getting a demo unit. Then, you can play with the new code in a safe lab area before upgrading your production system.

Next up... The negative aspects of upgrading to AK8. It's not too bad, but you will need to know which older systems can't do it, how to work with older disk trays, and whether or not you can replicate newer systems with older systems. 

Hey, I told you I wasn't just going to blow sunshine on you all the time, right? I can spit out the kool-aid as well as drink it!  :)

Thursday Oct 10, 2013

Upgrading to OS8 - AK8- 2013.1

The upgrade to OS8, AK8 or whatever we are calling it this week was pretty straightforward. It will take some extra time, as it has to perform some one-time jobs the first time it reboots, but it wasn't more than 15 minutes. Your mileage may vary, it's possible on larger systems that it takes longer. There is also a deferred update I will show you down below that you can choose to do right away or later. Once you do that deferred update, you do NOT want to roll back to the previous version, so be warned. 

It's been over 1.5 years since the last major update, so many of you probably have never done one before. The process is just like a minor update, it just takes longer. 

First, get the update from MOS and unzip it to a folder. Go ahead and upload it and unpack it like normal from your Maintenance-->System screen. I did like how it tried to tell me how much time was left, but the numbers were all over the place, and it was over by the time it was correct.

Now, when you click the arrow to apply the update, the normal health check window appears, but you will notice something extra. That's the 'Deferred Update' choice. You can have it apply as soon as the system reboots, or you can manually apply it later. Remember, you do NOT want to roll back after this is applied. I chose "Upon Request". Click the "Check" button, and if all is well, click "Apply". 
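If you would rather drive the upgrade from the CLI, the unpacked update shows up under maintenance system updates. This is only a sketch from memory, with a placeholder for the media name, so double-check the command names with help before you rely on them:

   zfssa:> maintenance system updates
   zfssa:maintenance system updates> show
   # The new code shows up as an ak-nas@<version> entry
   zfssa:maintenance system updates> select <the new ak-nas@ entry>
   zfssa:maintenance system updates <entry>> upgrade
   # The controller applies the update and reboots, just like the BUI arrow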

After it installs and reboots, you can look at the command line via serial port or SSH. You will notice a few things are different during this boot-up.

Right after the "Updating ####" section you can see it actually upgrading various services and the SMF repository. This can take around 3 minutes, but if you have a lot of aggragations or IPMP then it could take longer. So relax. You can see mine, below, which went 290 seconds, and then continued upgrading other stuff.

 The upgrade continues, and the screen is pretty obvious.

 When you see it configuring network devices, you're almost done. You can see the new code level, and it's about to go to the login prompt. At that point, you should be able to log back into the BUI.

 Log back into the BUI, and you will see the new version is the current version in Maintenance-->System

Now, let's do the deferred update on the same screen.

You can read about the deferred updates here, and click apply when ready to add them. In this case, it's for the ability to associate multiple initiator groups with a LUN, something we have wanted for some time now, so very cool. Note that ANY other deferred updates you have not applied yet will also apply, as there is no way to pick and choose. Either they all apply or none do. Remember I said not to roll-back to a previous version of the code after you do this? It will let you, but if you do, your LUN operations will fail. No bueno. Don't do it. The deferred upgrades are one-way.

Note that the deferred update does NOT force a reboot. 

Once you apply the deferred updates, the whole deferred update area goes away, and the screen now looks like this. 

Do you want to see something cool right away now in OS8 that you could not do before? There's a lot I will talk about later, but for now, since you're so excited, go to Configuration-->Alerts, and create a new Threshold Alert. Notice the new Capacity threshold alerts, where you can now get emails or trigger an action when a pool, a project, or a share goes over, say, 80% full. Sweet.

Tuesday Oct 08, 2013

AK8- OS8- 2013.1- New major release code is available NOW

Well, they said it would be released on October 8th, and they did not disappoint.

The new code, internally called 2013.1.0.1, and what marketing is calling ZFSSA OS8 or AK8, is out now. Download it from MOS.

There are so many updates that it's hard to get a handle on them all at once. This readme file will help: https://wikis.oracle.com/display/FishWorks/ak-2013.1.0.1+Release+Notes

I will be loading it, playing with it, and showing some of my favorite things coming up soon, as in the next few days.

Much of the improvement is in things you can not see, such as the improved ARC and RAID benefits. 

Lots to talk about. Especially if you need to mix trays. Be careful. Read the file. Stay tuned. 

Tuesday Sep 24, 2013

Great Analytic blog

My co-worker, Darius, just made a great post about how he helped a client using the built-in analytics of the ZFSSA. Check it out here: https://blogs.oracle.com/si/entry/using_analytics_dtrace_to_troubleshoot

Tuesday Sep 10, 2013

ZS3 is #1 on Storage Performance Council benchmark site

This is pretty cool. It seems the ZS3-4 just became the number 1 system in performance on Storage Performance Council's benchmark site.

The email below went out today to all SPC members. 

I would like to point out that we are also the LEAST EXPENSIVE system per SPC-2 performance. Check out our Price/Performance numbers.
So we came in at 17,244 for a $388,472 system, for a price/performance of $22.53.
Now compare that to the 2nd-place system on the site, HP's P9500. It came in at 13,147 for a huge price of $1,161,503 and a price/performance of $88.34.
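If you want to check my math, price/performance is just the total price divided by the SPC-2 MBPS result:

   $388,472.03 / 17,244.22 MBPS  =  about $22.53 per SPC-2 MBPS

HP's $88.34 number comes from the same formula using their exact MBPS result.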

We KILLED it....

****************************
SPC Members:
Oracle Corporation has submitted the SPC-2 Result™ listed below.
The Executive Summary and Full Disclosure Report (FDR) are posted in the Benchmark Results section of the website.
The documents may be accessed by using the URL listed below:
http://www.storageperformance.org/results/benchmark_results_spc2#b00067

Oracle ZFS Storage ZS3-4 (2-node cluster):

   SPC-2 Submission Identifier .... B00067
SPC-2 MBPS™ …………………….... 17,244.22
   SPC-2 Price-Performance™ …… $22.53/SPC-2 MBPS™
   Total ASU Capacity ………….…..  31,610.959 GB
   Data Protection Level ………..…. Protected 2 (Mirroring)
   Total Price ………………………….... $388,472.03

Congratulations to Oracle for an outstanding SPC-2 Result, which established a new #1 for SPC-2 performance (17,244.22 SPC-2 MBPS™).
Regards,
SPC Administrator
Storage Performance Council (SPC) 

New ZS3 ZFS Storage family announced TODAY! Finally!

It's official and we can finally be excited about the new ZS3 family. We can start talking about it now and start ordering it on Thursday. It won't actually ship, however, until next month on October 8th. I know, I know... but hey, I CAN give you something TODAY... How about the 4TB drives, available right now??? Also, the new 1.6TB Readzillas and the 16-port 4x4 SAS HBAs all arrive this Thursday, Sept 12th. Not bad, right?

So we now have three new systems:
1- The new ZS3-2
2- The new ZS3-4
3- The updated 7420M2 with internal SAS

The first two will ship with the new OS8 code. The 3rd one will ship with the older OS7 (2011.1.7) code, but can be updated to OS8 at anytime. (see my last blog entry about the new OS names) 

Ok, here is the low-down. The new 4TB drives are for the DE2-C trays which have been out since last December. I do NOT YET KNOW if they will also be available for the older DS2 trays, but I will tell you when I find out. This is important--- The new 1.6TB Readzillas are SAS, not SATA, so they will only work in the new ZS3 series and the new 7420M2 box. Your older 7420 and 7320 use internal SATA, not SAS, for their Readzillas and system OS drives. The new 900GB OS drives and the new 1.6TB Readzillas are SAS, so you need the newer versions to work with them.

The LAST order date for the current 7420 is September 30, 2013, and the LOD for the 7320 and 7120 is November 30, 2013. 

You can get a new product datasheet or the product announcement from your local storage SC.  

Monday Aug 26, 2013

Some info about our ZFSSA codes

As you now know, version 2011.1.7.0 is the current shipping code for our ZFSSA. You really want to be running this code, no matter what ZFSSA system you have. This code will work all the way back to the 7x10 systems. There are known bugs in even the last code, 2011.1.6.0, that this newer code fixes, so get on it.

Let's talk for a moment about code names and numbers, as they're going to change from your point of view very soon. Many years ago, Sun Microsystems created the "Fishworks" team to create this code that we now run on the ZFS Storage appliance. You can still see Fishworks and the original team names if you "Shift-Click" the Oracle/Sun logo in the top left corner of your ZFSSA. (There are MANY secret Shift-Click operations in the ZFSSA. I told you about some back in my blog on analytics here: https://blogs.oracle.com/7000tips/entry/fun_tips_with_analytics) By the way, FISH stands for "Fully Integrated Software & Hardware".

So the code that Fishworks created is a layer between you, the user, and the special version of Solaris and ZFS underneath. This is called an "Appliance Kit", and you will see all sorts of system names with an "AK" on them. These are directly linked to the Appliance Kit, which is basically the code for the interface, both the GUI and the CLI, that you all know and love. Internally at Oracle, the Fishworks team (now a much larger team that has grown far beyond the original) calls the code levels for the ZFSSA "AK#####". For example, the code level you are all running right now is called AK7. It has minor updates to it, but the major code is AK7, with a minor now of 04.24.7, so really the last code level released is AK7.04.24.7. You have all been calling it "2011.04.24.7", because in the past they used the year the major release came out as its name. For obvious reasons, this no longer makes sense. People think the current code they're running was made two years ago in 2011, but that's just not the case. This last release bears almost no resemblance to the original AK7 code. So much has changed in it.

So, to make things simpler, Oracle is dropping the year from the code name, and will now call it AK#.#.#, starting with the upcoming release of AK8. In all likelihood, there will still be one more minor release of AK7 coming first, so don't wait to upgrade as if AK8 were just around the corner. It's still going to be a few months away, and you don't want to hit a bug before then, so upgrade when you can to AK7-7 (my nickname for the current release). 

AK8 will be a game-changer. I'm not allowed to talk about it too much, but speak with your local storage SC and maybe they can give you a heads-up. HUGE stuff coming folks. Just the performance enhancements are going to be a world-changing event in the storage industry. If you have a 7x20 series system, going to AK8, without doing anything else, is going to make your system better and faster.

You can see all of the software release history here: https://wikis.oracle.com/display/FishWorks/Software+Updates

Wednesday Aug 14, 2013

New code out now

In case you were not paying attention, code 2011.1.7 is now out.

https://wikis.oracle.com/display/FishWorks/ak-2011.04.24.7.0+Release+Notes

Enjoy 

Tuesday Jul 02, 2013

Awesome new feature for HCC

I've talked about HCC (Hybrid Columnar Compression) before. This is Oracle's built-in compression feature, free of charge in 11gR2, that allows a CRAZY amount of compression on historical data inside an Oracle database. It only works if the database is being stored on a ZFSSA, Exadata, or Axiom. You can read all about it in this whitepaper, which shows the huge value of HCC when used with the ZFSSA: http://www.oracle.com/technetwork/articles/servers-storage-admin/perf-hybrid-columnar-compression-1689701.html

Now, even better, Oracle has announced a great new feature in Oracle 12c called "Automatic Data Optimization". This allows one to set up HCC to AUTOMATICALLY compress data AS IT AGES. 

So this is now ILM all built into the Oracle database. It's free for crying out loud. It just needs to be sitting on Oracle storage, such as the ZFSSA, Exadata or Axiom. 

Read about ADO here: http://www.oracle.com/technetwork/database/automatic-data-optimization-wp-12c-1896120.pdf?ssSourceSiteId=ocomen

Thursday May 30, 2013

Wikibon has a new article giving nice praise to the ZFSSA

It seems Wikibon has done some research and interviews and has written a very nice article on the awesome cost savings of the ZFSSA. 
Check it out here:

http://wikibon.org/wiki/v/Oracle_ZFS_Hybrid_Storage_Appliance_Reads_for_Show_but_Writes_for_Dough

Here are some of my favorite quotes from the article:

“The high-end ZFS storage array is the highest performing hybrid storage device that has been analyzed by Wikibon, and in a class of its own when it comes to high write-IO environments.”

“Wikibon analyzed the architecture and performance of the ZFS Appliance in depth, and compared it to "traditional storage arrays" (e.g. EMC, NetApp, HDS, HP, etc. mainstream mid-market arrays) in high write environments.”

“For an environment with 100 terabytes, 1,000,000 IOPS and 20% writes, the additional cost of the traditional system (NetApp) is 194% higher than the hybrid system (ZFS). “

“Wikibon members should consider the ZFS Appliance in more demanding workloads where sustained write performance and IO requirements are higher. Examples include high performance environments such as specific backup applications and core transaction-intensive database workloads. In these situations, because of the hybrid design of the ZFS Appliance, customers will find significant savings relative to traditional disk arrays that don't scale as well.”

“CIOs, CTOs and senior storage executives should position the Oracle ZFS appliance as an ideal strategic fit for high streaming environments such as database backups. As well, the product can be successfully integrated into high-performance Oracle database workloads. In write-intensive and heavy IO workloads, the ZFS appliance will likely prove the best-of-breed, lowest cost solution.”

“The general feedback from the ZFS appliance practitioners was positive”

“Praise for the performance of the ZFS, particularly in backup (high-write) environment;”

“7 gigabytes/second write rates achieved in a benchmarks;”

“11 terabytes/hour sustained over 2.5 hours for backup, compared with 1 terabytes/hour for a traditional storage device;”

“ZFS snapshots and clones universally praised;”

“DTrace was praised for the quality and completeness of the performance analytic tool;”

“Compression performance was strongly praised (up to 16x compression), especially for reads;”

“None of the respondents needed to tune the ZFS read or write caching - performance maintenance was minimal;”

“No problems with availability.” 

Tuesday May 28, 2013

New eye-chart for the ZFSSA

I finally updated my ZFSSA eye-chart. Hey, it's only three months late.

You can find it under the "Bookmarks" section on the right. Version 12 is the newest one.

Monday May 20, 2013

Great video

I love the many videos on www.Wimp.com. New ones every day.

Today, they had a nice video of the Oracle racing boat. Check this out...

http://www.wimp.com/overwater/

Wednesday May 08, 2013

ZFSSA update 2011.1.6.0 is now available

For those who have not noticed, ZFSSA version 2011.1.6.0 came out on April 29th.

Go get it.

Release notes are here:

https://wikis.oracle.com/display/FishWorks/Software+Updates

Monday Apr 08, 2013

Clone license clarification

Someone asked a good question about the clone and snap-manager licenses, so I wanted to clarify.

The info I received is that if you use the new snap-manager for ZFSSA product, all snaps and clones created by that product are covered by its one-time license. You do not need the additional clone license to manage these at all.

However, if you had other clones you are creating that are NOT created via the snap-manager tool, then yes, you do need the other clone license for these to be supported.

I hope that makes sense.

Steve 

Tuesday Feb 12, 2013

SnapManager for Oracle DB for ZFSSA is out and ready

A few weeks ago, Oracle announced the Oracle database SnapManager software for ZFSSA.

It is a license just like the Clone or the Replication license. It's just a one-time, yes-or-no, on-or-off license per controller. Better yet, you can go ahead and get the software and try it out for free for 30 days. Go check it out with the link below.

The Snap Management Utility combines the underlying snapshot, clone, and rollback capabilities of the Oracle ZFS Storage Appliance with standard host-side processing so all operations are consistent.


Downloading the Oracle Snap Management Utility for Oracle Database Software

A. Customers who purchased the license need to download the software from eDelivery (see instructions below).

B. Customers who wish to evaluate for 30 days prior to purchase may download from the same site. The license allows a 30-day evaluation period. Follow the instructions below.

Instructions to download the software:

1. Go to the eDelivery link: https://edelivery.oracle.com/EPD/Search/handle_go

2. Log in.

3. Accept the Terms and Restrictions.

4. In the “Media Pack Search” window:

   a. Under Product Pack, select “Sun Products”

   b. Under Platform, select “Generic”

   c. Click “Go”

5. From the results, select the “Oracle Snap Management Utility for Oracle Database”.

6. There are two files for download:

   a. The “Oracle Snap Management Utility for Oracle Database, Client v 1.1.0” is required.

   b. The “Sun ZFS Storage Software 2011.1.5.0” is the latest version of the ZFS Storage Appliance SW, provided for customers who need to upgrade their software.

UPDATE-7-27-13- Just found out that if you buy the SMU license, you do NOT need to buy the clone license. The cloning is included in SMU, so that's cool.

Monday Feb 11, 2013

Oracle IaaS now includes the ZFS Backup Appliance

Ok, so this is pretty cool. If you didn't know, Oracle has this great program called IaaS, which is Infrastructure as a Service. You can go check it out here: http://www.oracle.com/us/products/engineered-systems/iaas/overview/index.html

What this means is that someone who really wants an Oracle engineered system, such as an Exadata, but can't come up with the up-front cost, can do IaaS and put it in their datacenter for a low monthly fee. This can be really cool. Some people can now change their entire budget from Cap-ex to Op-ex, save a bunch of up-front costs, and still get the hardware they need and want.

As of this week, the ZFSBA is now included in the IaaS offering. So one can get the ZFS Backup Appliance and use it to back up their engineered system (Exadata, Exalogic, or SuperCluster) over InfiniBand. They can also use it to then make snaps and clones of that data for testing and development, as well as use it for general-purpose storage over 10Gig, 1Gig, or FC. Pretty sweet way to get the ZFS Storage system into your site without the up-front costs. You can get the ZFSBA in an IaaS all by itself if you want, without the engineered system at all, just to get the ZFS storage.

Now, some of you may be asking, "What the heck is the ZFSBA and how is it different than the ZFSSA?"

I haven't talked about the ZFSBA, the ZFS Backup Appliance, before. I probably should have. You can get more info on it here: http://www.oracle.com/us/products/servers-storage/storage/nas/zfs-backup-appliance/overview/index.html
Here is the low-down. It's a 7420 cluster with drive trays, all pre-cabled and in a rack, ready to go. The 7420 has IB cards in place, and the whole system is a single line-item part number, making it easy for the sales team to add a ZFSSA to an engineered-system deal as the backup target for that engineered system. There are two versions, one with high-capacity drives and the other with high-performance drives. Either way, you can add additional trays of either type later. Unlike the other engineered systems, the ZFSBA does allow one to use the extra space in the rack, which is nice. 
Sun ZFS Storage 7420

So, if you want a 7420 cluster and a rack, is there a downside to always using the ZFSBA to order a 7420? Not really. Same price, and it's easier to order with fewer part numbers. You can still customize it and add more stuff. There is one downside, and that's the fact that the ZFSBA uses the 32-core version of the 7420, not the 40-core version. The backup of an Exadata does not require more cores, so they went with the smaller of the two. If you need more power and more DRAM for faster workloads, however, you may want to build a 7420 ZFSSA the normal way.

If this doesn't make sense, please add a comment below or just email me.  

Steve 

New trays- better pictures

Ok, here are some much better pictures of our two new trays. 
The DE2-24P is the 2u performance model, meaning that it holds 2.5" 10,000 RPM drives (and up to four LZ SSDs, of course). These are currently either 300GB or 900GB drives.

The DE2-24C is the 4u capacity model, which holds the larger 3.5" 7,200 RPM drives and LZ drives. These are currently 3TB drives.

One of these days, I really need to update my storage eye charts with these new trays. I just haven't had the time!!! 


Tuesday Jan 08, 2013

New code and new disk trays !!!

Hey everybody, happy new year and some great news for the ZFSSA...

The new 2u disk trays have come out early. I was not expecting them until later this quarter, but was surprised yesterday that Oracle announced them ready for sale. Sweet. So we now have a 4u capacity tray for 3TB drives (soon to be 4TB drives), and a 2u high-performance tray with either 300GB or 900GB 10K speed drives. These new 900GB 10K speed drives have the same IOPS as our current 600GB 15K speed drives, since the form factor went from 3.5" to 2.5". So you now can have 24 drives in a 2u tray. Very cool. These new trays require OS 2011.1.5, and right now you can NOT mix them with the older DS2 trays. Being able to mix them will be supported later, however.

To go along with that, the new 2011.1.5 code has been released. You can download it right now in MOS. It fixes a ridiculous number of issues, as well as supporting these new 2u drive trays. You can read all about the new code here: https://updates.oracle.com/Orion/Services/download?type=readme&aru=15826899

Enjoy! 

**Update 1-18-13 - I need to correct myself, and I'm adding this note instead of changing what I wrote up above and trying to hide that I messed up... Hey it happens...
At first I was led to believe that the smaller platter size made up for the slower speed of the new 2.5" drives. This is not the case. It does help, but the 10K speed drives do get slightly fewer IOPS and less throughput than the 3.5" 15K speed drives. Not that this matters too much for us, since we pride ourselves on the fact that we drive performance with the ZFSSA via our cache, not our spindle speed, but it's important to point out. Now, the power savings and space savings are real, and very much worth using the smaller form factor. Also, you do understand that Oracle does not have a whole lot to do with this? This is the way drive manufacturers are going. They just don't make 2.5" drives at 15K speed. So this is the way it is. Now, at some point sooner rather than later, we will also be putting out an all-SSD tray. So if you need fast IOPS on the spindles, we will have you covered there, too.

Monday Dec 03, 2012

My error with upgrading 4.0 to 4.2- What NOT to do...

Last week, I was helping a client upgrade from the 2011.1.4.0 code to the newest 2011.1.4.2 code. We downloaded the 4.2 update from MOS, uploaded and unpacked it on both controllers, and upgraded one of the controllers in the cluster with no issues at all. As this was a brand-new system with no networking or pools made on it yet, there were no resources to fail back and forth between the controllers. Each controller had its own private management interface (igb0 and igb1) and that's it. So we took controller 1 as the passive controller and upgraded it first. The first controller came back up with no issues and was now on the 4.2 code. Great. We then did a takeover on controller 1, making it the active head (although there were no resources for it to take), and then proceeded to upgrade controller 2.

Upon upgrading the second controller, we ran the health check with no issues. We then ran the update and it ran and rebooted normally. However, something strange then happened. It took longer than normal to come back up, and when it did, we got the "cluster controllers on different code" error message that one gets when the two controllers of a cluster are running different code. But we just upgraded the second controller to 4.2, so they should have been the same, right???

Going into the Maintenance-->System screen of controller 2, we saw something very strange. The "current version" was still on 4.0, and the 4.2 code was there but was in the "previous" state with the rollback icon, as if it was the OLDER code and not the newer code. I have never seen this happen before. I would have thought it was a bad 4.2 code file, but it worked just fine with controller 1, so I don't think that was it. Other than the fact the code did not update, there was nothing else going on with this system. It had no yellow lights, no errors in the Problems section, and no errors in any of the logs. It was just out of the box a few hours ago, and didn't even have a storage pool yet.

So.... We deleted the 4.2 code, uploaded it from scratch, ran the health check, and ran the upgrade again. Once again, it seemed to go great, rebooted, and came back up with the same issue, where it came up on 4.0 instead of 4.2. See the picture below.... HERE IS WHERE I MADE A BIG MISTAKE....

I SHOULD have instantly called support and opened a Sev 2 ticket. They could have done a shared shell and gotten the correct Fishworks engineer to look at the files and the code and determine what file was messed up and fix it. The system was up and working just fine; it was just on an older code version, not really a huge problem at all.

Instead, I went ahead and clicked the "Rollback" icon, thinking that the system would roll back to the 4.2 code. Ouch... What happened was that the system said, "Fine, I will delete the 4.0 code and boot to your 4.2 code"... Which was stupid on my part, because something was wrong with the 4.2 code file here and the 4.0 was just fine. 

So now the system could not boot at all, and the 4.0 code was completely missing from the system, and even a high-level Fishworks engineer could not help us. I had messed it up good. We could only get to the ILOM, and I had to re-image the system from scratch using a hard-to-get-and-use FishStick USB drive. These are tightly controlled and difficult to get, almost always handcuffed to an engineer who will drive out to re-image a system. This took another day of my client's time. 

So.... If you see a "previous version" of your system code which is actually a version higher than the current version... DO NOT ROLL IT BACK.... It did not upgrade for a very good reason.

In my case, after the system was re-imaged to a code level just 3 back, we once again tried the same 4.2 code update and it worked perfectly the first time and is now great and stable.  Lesson learned. 

By the way, our buddy Ryan Matthews wanted to point out the best practice and supported way of performing an upgrade of an active/active ZFSSA, where both controllers are doing some of the work. These steps would not have helped me with the above issue, but it's important to follow the correct procedure when doing an upgrade.


1) Upload software to both controllers and wait for it to unpack
2) On controller "A" navigate to configuration/cluster and click "takeover"
3) Wait for controller "B" to finish restarting, then login to it, navigate to maintenance/system, and roll forward to the new software.
4) Wait for controller "B" to apply the update and finish rebooting
5) Login to controller "B", navigate to configuration/cluster and click "takeover"
6) Wait for controller "A" to finish restarting, then login to it, navigate to maintenance/system, and roll forward to the new software.
7) Wait for controller "A" to apply the update and finish rebooting
8) Login to controller "B", navigate to configuration/cluster and click "failback"

Thursday Nov 15, 2012

New code release today - 2011.1.4.2

Wow, two blog entries in the same day! When I wrote the large 'Quota' blog entry below, I did not realize there would be a micro-code update going out the same evening.

So here it is. Code 2011.1.4.2 has just been released. You can get the readme file for it here: https://wikis.oracle.com/display/FishWorks/ak-2011.04.24.4.2+Release+Notes

Download it, of course, through the MOS website.

It looks like it fixes a pretty nasty bug. Get it if you think it applies to you. Unless you have a great reason NOT to upgrade, I would strongly advise you to upgrade to 2011.1.4.2. Why? Because the readme file says they STRONGLY RECOMMEND YOU ALL UPGRADE TO THIS CODE IMMEDIATELY using LOTS OF CAPITAL LETTERS.

That's good enough for me. Be sure to run the health check like the readme tells you to. 

**Updated after I posted the above... What worries me is that 2011.1.5.0 was supposed to be out pretty soon, as in weeks. So if they put this 1.4.2 version out now, instead of just adding these three fixes to the 1.5.0 code, they must be pretty important.  

Quotas - Using quotas on ZFSSA shares and projects and users

So you don't want your users to fill up your entire storage pool with their MP3 files, right? Good idea to make some quotas. There's some good tips and tricks here, including a helpful workflow (a script) that will allow you to set a default quota on all of the users of a share at once.

Let's start with some basics. I made a project called "small" and inside it I made a share called "Share1". You can set quotas at the project level, which will affect all of the shares in it, or you can do it at the share level like I am here. Go to the share's General property page.

First, I'm using a Windows client, so I need to make sure I have my SMB mountpoint. Do you know this trick yet? Go to the Protocol page of the share. See the SMB section? It needs a resource name to make the UNC path for the SMB (Windows) users. You do NOT have to type this name in for every share you make! Do this at the Project level. Before you make any shares, go to the Protocol properties of the Project, and set the SMB Resource name to "On". This special code will automatically make the SMB resource name of every share in the project the same as the share name. Note the UNC path name I got below. Since I did this at the Project level, I didn't have to lift a finger for it to work on every share I make in this project. Simple.
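For the record, you can set the same thing from the CLI. A minimal sketch, assuming my "small" project, and assuming the CLI property for the SMB resource name is sharesmb; check it with 'get sharesmb' on your own box before trusting me:

   zfssa:> shares select small
   zfssa:shares small> set sharesmb=on
   zfssa:shares small> commit
   # With the project set to "on", every share created under it picks up
   # an SMB resource name equal to the share name.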

So I have now mapped my Windows "Z:" drive to this Share1. I logged in as the user "Joe". Note that my computer shows my Z: drive as 34GB, which is the entire size of my Pool that this share is in. Right now, Joe could fill this drive up and it would fill up my pool. 

Now, go back to the General properties of Share1. In the "Space Usage" area, over on the right, click on the "Show All" text under the Users & Groups section. Sure enough, Joe and some other users are in here and have some data. Note this is also a handy window to use just to see how much space your users are using in any given share. 

Ok, Joe owes us money from lunch last week, so we want to give him a quota of 100MB. Type his name in the Users box. Notice how it now shows you how much data he's currently using. Go ahead and give him a 100M quota and hit the Apply button.

If I go back to "Show All", I can see that Joe now has a quota, and no one else does.

Sure enough, as soon as I refresh my screen back on Joe's client, he sees that his Z: drive is now only 100MB, and he's more than halfway full.

 That was easy enough, but what if you wanted to make the whole share have a quota, so that the share itself, no matter who uses it, can only grow to a certain size? That's even easier. Just use the Quota box on the left hand side. Here, I use a Quota on the share of 300MB.

So now I log off as Joe, and log in as Steve. Even though Steve does NOT have a quota, it is showing my Z: drive as 300MB. This would affect anyone, INCLUDING the ROOT user, because you specified the Quota to be on the SHARE, not on a person.

 Note that back in the Share, if you click the "Show All" text, the window does NOT show Steve, or anyone else, to have a quota of 300MB. Yet we do, because it's on the share itself, not on any user, so this panel does not see that.
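The share-level quota is just a property, so you can also set it from the CLI. A quick sketch using the same small/Share1 names from above; I'm only showing the share quota here, not the per-user quotas:

   zfssa:> shares select small select Share1
   zfssa:shares small/Share1> set quota=300M
   zfssa:shares small/Share1> commit
   # There is also a property that controls whether snapshot space counts
   # against the quota; check your release's share properties if you need it.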

Ok, here is where it gets FUN....

Let's say you do NOT want a quota on the SHARE, because you want SOME people, like Root and yourself, to have FULL access to it and you want the ability to fill the whole thing up if you darn well feel like it. HOWEVER, you want to give the other users a quota. HOWEVER, you have, say, 200 users, and you do NOT feel like typing in each of their names and giving them each a quota, and they are not all members of an AD global group you could use or anything like that.  Hmmmmmm....

No worries, mate. We have a handy-dandy script that can do this for us. Now, this script was written a few years back by Tim Graves, one of our ZFSSA engineers out of the UK. This is not my script. It is NOT supported by Oracle support in any way. It does work fine with the 2011.1.4 code as best as I can tell, but Oracle, and I, are NOT responsible for ANYTHING that you do with this script. Furthermore, I will NOT give you this script, so do not ask me for it. You need to get this from your local Oracle storage SC. I will give it to them. I want this only going to my fellow SCs, who can then work with you to have it and show you how it works. 

Here's what it does...
Once you add this workflow to the Maintenance-->Workflows section, you click it once to run it. Nothing seems to happen at this point, but something did. 

 Go back to any share or project. You will see that you now have four new, custom properties on the bottom.

Do NOT touch the bottom two properties, EVER. Only touch the top two. Here, I'm going to give my users a default quota of about 40MB each. The beauty of this script is that it will only affect users who do NOT already have any kind of personal quota. It will only change people who have no quota at all. It does not affect the Root user.

After I hit Apply on the Share screen, nothing will happen until I go back and run the script again. The first time you run it, it creates the custom properties. The second and all subsequent times you run it, it checks the shares for any users, and applies your quota number to each one of them, UNLESS they already have one set. Notice in the readout below how it did NOT apply to my Joe user, since Joe had a quota set.

 Sure enough, when I go back to the "Show All" in the share properties, all of the users who did not have a quota, now have one for 39.1MB. Hmmm... I did my math wrong, didn't I?  

 That's OK, I'll just change the number of the Custom Default quota again. Here, I am adding a zero on the end.

 After I click Apply, and then run the script again, all of my users, except Joe, now have a quota of 391MB

You can customize a person at any time. Here, I took the Steve user and specifically gave him a Quota of zero. Now when I run the script again, he is different from the rest, so he is no longer affected by the script. Under Show All, I see that Joe is at 100, and Steve has no Quota at all. I can do this all day long. Yes, you will have to re-run the script every time new users get added. The script only applies the default quota to users that are present at the time the script is run. However, it would be a simple thing to schedule the script to run each night, or to make an alert that runs the script when certain events occur.

 For you power users, if you ever want to delete these custom properties and remove the script completely, you will find these properties under the "Schema" section under the Shares section. You can remove them here. There's no need to, however; they don't hurt a thing if you just don't use them.
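
 Those same custom properties also show up in the CLI under the schema context, if you'd rather look at them there. Something like this (I'm going from memory, so verify the exact removal syntax in the docs before you use it in anger):

   zfssa:> shares schema
   zfssa:shares schema> show
   zfssa:shares schema> destroy <property_name>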

 I hope these tips have helped you out there. Quotas can be fun. 

Sunday Oct 28, 2012

Our winners- and some BBQ for everyone

Please also see "Allen's Grilling Channel" over to the right in my Bookmarks section...

Congrats to our two winners for the first two comments on my last entry. Steve from Australia and John Lemon. Steve won since he was the first person over the International Date Line to see the post I made so late after a workday on Friday. So not only does he get to live in a country with the 2nd most beautiful women in the world, but now he gets some cool Oracle Swag, too. (Yes, I live on the beach in southern California, so you can guess where 1st place is for that other contest…Now if Steve happens to live in Manly, we may actually have a tie going…)

OK, ok, for everyone else, you can be winners, too. How you ask? I will make you the envy of every guy and gal in your neighborhood or campsite. What follows is the way to smoke the best ribs you or anyone you know have ever tasted. Follow my instructions and give it a try. People at your party/cookout/campsite will tell you that they’re the best ribs they’ve ever had, and I will let you take all the credit. Yes, I fully realize this post is going to be longer than any post I’ve done yet. But let’s get serious here. Smoking meat is much more important, agreed? :) In all honesty, this is a repeat of another blog I did, so I’m just copying and pasting.

Step 1. Get some ribs. I actually really like Costco’s pack. They have both St. Louis and Baby Back. (They are the same ribs, but cut in half down the sides. St. Louis style is the ‘front’ of the ribs closest to the stomach, and ‘Baby back’ is the part of the ribs where it connects to the backbone). I like them both, so here you see I got one pack of each. About 4 racks to a pack. So these two packs for $25 each will feed about 16-20 of my guests. So around 3 bucks a person is a pretty good deal for the best ribs you’ll ever have.

Step 2. Prep the ribs the night before you’re going to smoke. You need to trim them to fit your smoker racks, and also take off the membrane and add your rub. Then cover and set in fridge overnight. Here’s how to take off the membrane, which will not break down with heat and smoke like the rest of the meat, so must be removed. Use a butter knife to work in a ways between the membrane and the white bone. Just enough to make room for your finger. Try really hard not to poke through the membrane, you want to keep it whole.

See how my gloved fingers can now start to lift up and pull off the membrane? This is what you are trying to do. It’s awesome when the whole thing can come off at once. This one is going great, maybe the best one I’ve ever done. Sometimes, it falls apart and doesn't come off in one nice piece. I hate when that happens.

Now, add your rub and pat it down once into the meat with your other hand. My rub is not secret. I got it from my mentor, a BBQ competitive chef who is currently ranked #1 in California and #3 in the nation on the BBQ circuit. He does full-day classes in southern California if anyone is interested in taking his class. Go to www.slapyodaddybbq.com to check him out. I tweaked his rub recipe a tad and made my own. It’s one part Lawry’s, one part sugar, one part Montreal Steak Seasoning, one part garlic powder, one-half part red chili powder, one-half part paprika, and then 1/20th part cayenne. You can adjust that last ingredient, or leave it out. Real cheap stuff you can get at Costco. This lets you make enough rub to last about a year or two. Don’t make it all at once, make a shaker’s worth and use it up before you make more. Place it all in a bowl, mix well, and then add to a shaker like you see here. You can get a shaker with medium sized holes on it at any restaurant supply store or Smart & Final. The kind you see at pizza places for their red pepper flakes works best.

Now cover and place in fridge overnight.

Step 3. The next day. Ok, I’m ready to go. Get your stuff together. You will need your smoker, some good foil, a can of peach nectar, a bottle of Agave syrup, and a package of brown sugar. You will need this stuff later. I also use a clean spray bottle, and apple juice.

Step 4. Make your fire, or turn on your electric smoker. In this example I’m using my portable charcoal smoker. I got this for only $40. I then modified it to be useful. Once modified, these guys actually work very well. Trust me, your food DOES NOT KNOW how expensive your smoker is. Someone who tells you that you need to spend a bunch of money on a smoker is an idiot. I also have an electric smoker that stays in my backyard. It’s cleaner and larger so I can smoke more food. But this little $40 one works great for going camping. Here is what my fire-bowl looks like. I leave a space in the middle open, and place cold charcoal and wood chunks in a circle going outwards. This makes it so when I dump the hot coals down the middle, they will slowly burn outwards, hitting different wood chunks at different times, allowing me to go 4-5 hours without having to even touch my fire. For ribs, I use apple and pecan wood. Pecan works for anything. Apple or any fruit wood is excellent for pork.

So now I make my hot charcoal with a chimney only about half-full. I found a great use for that side-burner on my grill that I never use. It makes a fantastic chimney starter. You never use fluids of any kind, nor ever use that stupid charcoal that has lighter fluid built into it. Never, ever, ever.

Step 5. Smoke. Add your ribs in the racks and stack them up in your smoker. I have a digital thermometer on a probe that I use to keep track of the temp in the smoker. I just lay the probe on the top rack and shut the lid. With this cheap guy it’s a little harder to maintain the right temperature of around 225 F, so I do have to keep my eye on it more than my electric one or a more expensive charcoal one with the cool gadgets that regulate your temp for you.

Every hour, spray apple juice all over your ribs using that spray bottle. After about 3 hours, you should have a very good crust (called the Bark) on your ribs. Once you have the Bark where you want it, carefully remove your ribs and place them in a tray. We are now ready for a very important part to make the flavor.

Get a large piece of foil and place one rib section on it. Splash some of the peach nectar on it, and then a drizzle of the Agave syrup. Then, use your gloved hand to pack on some brown sugar. Do this on BOTH sides, and then completely wrap it up TIGHT in the foil. Do this for each rib section, and then place all the wrapped sections back into the smoker for another 4 to 6 hours. This is where the meat will get tender and flavorful. The first three hours is only to make the smoke bark. You don’t need smoke anymore since the ribs are wrapped; you only need to keep the heat around 225 for the next 4-6 hours. Obviously you don’t spray anymore. Just time and slow heat. Be patient. It’s actually really hard to overdo it. You can let them go longer, and all that will happen is they will get even MORE tender!!! If you take them out too soon, they will be tough.

How do you know? Take out one package (use long tongs) and open it up. If you grab a bone with your tongs and it just falls apart and breaks away from the rest of the meat, you are done!!! Enjoy!!!

Step 6. Eat. It pulls apart like this when it’s done.

By the way, smoking tri-tip is way easier. Just rub it with the same rub, and put in your smoker for about 2.5 hours at 250 F. That’s it. Low-maintenance. It comes out like this, with a fantastic smoke ring and amazing flavor.

Thanks, and I will put up another good tip, about the ZFSSA, around the end of November.

Steve 

Friday Oct 26, 2012

Replication - between pools in the same system

OK, I fully understand that it's been a LONG time since I've blogged with any tips or tricks on the ZFSSA, and I'm way behind. Hey, I just wrote TWO BLOGS ON THE SAME DAY!!! Make sure you keep scrolling down to see the next one too, or you may have missed it. To celebrate, for the one or two of you out there who are still reading this, I got something for you. The first TWO people who make any comment below, with your real name and email so I can contact you, will get some cool Oracle SWAG that I have to give away. Don't get excited, it's not an iPad, but it's pretty good stuff. Only the first two, so if you already see two below, then settle down.

Now, let's talk about Replication and Migration.  I have talked before about Shadow Migration here: https://blogs.oracle.com/7000tips/entry/shadow_migration
Shadow Migration lets one take a NFS or CIFS share in one pool on a system and migrate that data over to another pool in the same system. That's handy, but right now it's only for file systems like NFS and CIFS. It will not work for LUNs. LUN shadow migration is a roadmap item, however.

So.... What if you have a ZFSSA cluster with multiple pools, and you have a LUN in one pool but later you decide it's best if it was in the other pool? No problem. Replication to the rescue. What's that? Replication is only for replicating data between two different systems? Who told you that? We've been able to replicate to the same system for a few code updates now. These instructions below will also work just fine if you're setting up replication between two different systems. After replication is complete, you can easily break replication, change the new LUN into a primary LUN and then delete the source LUN. Bam.

Step 1- setup a target system. In our case, the target system is ourself, but you still have to set it up like it's far away. Go to Configuration-->Services-->Remote Replication. Click the plus sign and setup the target, which is the ZFSSA you're on now.
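
For the CLI folks, the same target setup looks something like this (hostname, password, and label are yours; the syntax is from memory, so check it against your code level):

  zfssa:> configuration services replication targets
  zfssa:configuration services replication targets> target
  zfssa:configuration services replication target (uncommitted)> set hostname=192.168.1.10
  zfssa:configuration services replication target (uncommitted)> set root_password=*********
  zfssa:configuration services replication target (uncommitted)> set label=myself
  zfssa:configuration services replication target (uncommitted)> commit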

Step 2. Now you can go to the LUN you want to replicate. Take note which Pool and Project you're in. In my case, I have a LUN in Pool2 called LUNp2 that I wish to replicate to Pool1.

 Step 3. In my case, I made a Project called "Luns" and it has LUNp2 inside of it. I am going to replicate the Project, which will automatically replicate all of the LUNs and/or Filesystems inside of it.  Now, you can also replicate from the Share level instead of the Project. That will only replicate the share, and not all the other shares of a project. If someone tells you that if you replicate a share, it always replicates all the other shares also in that Project, don't listen to them.
Note below how I can choose not only the Target (which is myself), but I can also choose which Pool to replicate it to. So I choose Pool1.
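
 The CLI version of this step, as best I remember it, is to go into the project's replication node and create an action, pointing it at the target and the destination pool (the names here match my example; the property names are from memory, so verify them before trusting them):

   zfssa:> shares select Luns replication
   zfssa:shares Luns replication> action
   zfssa:shares Luns action (uncommitted)> set target=myself
   zfssa:shares Luns action (uncommitted)> set pool=Pool1
   zfssa:shares Luns action (uncommitted)> set continuous=false
   zfssa:shares Luns action (uncommitted)> commit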

 Step 4. I did not choose a schedule or pick the "Continuous" button, which means my replication will be manual only. I can now push the Manual Replicate button on my Actions list and you will see it start. You will see both a barber pole animation and also an update in the status bar on the top of the screen that a replication event has begun. This also goes into the event log.
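
 If you'd rather do the manual push from the CLI, the command on the replication action is, if memory serves, 'sendupdate' (your action number will differ):

   zfssa:> shares select Luns replication select action-000
   zfssa:shares Luns action-000> sendupdate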

 Step 5. The status bar will also log an event when it's done.

Step 6. If you go back to Configuration-->Services-->Remote Replication, you will see your event.

Step 7. Done. To see your new replica, go to the other Pool (Pool1 for me), and click the "Replica" area below the words "Filesystems | LUNs". Here, you will see any replicas that have come in from any of your sources. It's a simple matter from here to break the replication, which will change this to a "Local" LUN, and then delete the original LUN back in Pool2.
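
For reference, the "break the replication" part in the CLI is the 'sever' command on the replication package on the receiving side, which turns the package into a normal local project. A rough sketch from memory (the source and package numbers will be different on your system, and I believe you can hand 'sever' a new project name if you want one):

  zfssa:> shares replication sources
  zfssa:shares replication sources> select source-000
  zfssa:shares replication source-000> select package-000
  zfssa:shares replication source-000 package-000> sever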

Ok, that's all for now, but I promise to give out more tricks sometime in November!!! There's very exciting stuff coming down the pipe for the ZFSSA. Both new hardware and new software features that I'm just drooling over. That's all I can say, but contact your local sales SC to get an NDA roadmap talk if you want to hear more.  

Happy Halloween,
Steve 

New Write Flash SSDs and more disk trays

In case you haven't heard, the write SSDs in the ZFSSA have been updated. Much faster now for the same price. Sweet.

The new write-flash SSDs have a new part number of 7105026, so make sure you order the right ones. It's important to note that you MUST be on code level 2011.1.4.0 or higher to use these.

They have increased in IOPS from 6,000 to 11,000, and increased throughput from 200MB/s to 350MB/s.  

 Also, you can now add six SAS HBAs (up from 4) to the 7420, allowing one to have three SAS channels with 12 disk trays each, for a new total of 36 disk trays. With 3TB drives, that's 2.5 Petabytes. Is that enough for you?
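
 In case you want to check my math on that: 3 channels x 12 trays = 36 trays, each tray holds 24 drives, so 36 x 24 = 864 drives, and 864 x 3TB = 2,592TB, which is right around 2.5 Petabytes raw, before you give any of it up to parity, spares, or the usual base-10 vs base-2 shrinkage.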

Make sure you add new cards to the correct slots. I've talked about this before, but here is the handy-dandy matrix again so you don't have to go find it. Remember the rules: You can have 6 of any one kind of card (like six 10GigE cards), except IB which is still four max. You only really get 8 slots, since you have two SAS cards no matter what. If you want more than 12 disk trays, you need two more SAS cards, so think about expansion later, too. In fact, if you're going to have two different speeds of drives (in other words you want to mix 15K speed and 7,200 speed drives in the same system), I would highly recommend two different SAS channels. So I would want four SAS cards in that system, no matter how many trays you have. 


About

This blog is a way for Steve to send out his tips, ideas, links, and general sarcasm. Almost all related to the Oracle 7000, code named ZFSSA, or Amber Road, or Open Storage, or Unified Storage. You are welcome to contact Steve.Tunstall@Oracle.com with any comments or questions
