Monday Aug 25, 2014

ZFS Storage for Virtualization and Cloud

I had the good fortune over the weekend to read through a VMware Press book called Essential Virtual SAN: Administrator's Guide to VMware Virtual SAN by Cormac Hogan and Duncan Epping. Very good book.

First, I'd like to observe that VSAN is a very interesting idea, but based on the content of this book, it's still in its early days. In fact, I'd even go so far as to say that things like VSAN just might be the barbarian at the gate for certain storage use cases, as they aim to make virtual machine storage highly integrated with the virtual machine infrastructure.

Yet, as I've said, VSAN is very young. My read of the book is that while the promise of VSAN in small and medium-sized deployments is there, the simplicity isn't yet. Nor should anyone expect it to be. Storage projects have a very high bar to reach, and reaching that bar takes a long time. Especially when your goal is to run on a variety of general-purpose hardware.

What really got my attention, though, was how the authors laid out the need for VSAN in the first place. In the introduction, they say:

When talking about virtualization and the underlying infrastructure that it runs on, one component that always comes up in the conversation is storage. The reason is fairly simple: In many environments, storage is a pain point. Although the storage landscape has changed with the introduction of flash technologies that mitigate many of the traditional storage issues, many organizations have not yet adopted these new architectures and are still running into the same challenges.

Hoo boy! That's quite a statement relative to the current market-leading storage offerings. And yet, readers of this blog already know it's true. Virtualization masks storage bottlenecks, and therefore the easiest way to address them is to serve I/O from faster media. It removes some of the need to know what the hypervisor is doing. And while the traditional storage leaders have each bolted some flash into their systems, generally it's been via relatively clunky (and expensive) caching algorithms that don't solve the challenges all that well. The best example that springs to mind is the new VNX8000, for which EMC took the unusual (for them) step of publishing a SPECsfs2008 result. It's a fantastic result, but they built the entire back end with flash drives (over 500 of them!) to get there. Makes you think that maybe their caching isn't all that great yet.

Essential Virtual SAN continues:

The majority of these problems stem from the same fundamental problem: legacy architecture. The reason is that most storage platform architectures were developed long before virtualization existed, and virtualization changed the way these shared storage systems were used.

In a way, you could say that virtualization forced the storage industry to look for new ways of building storage systems.

I'm smiling. One of my oft-repeated statements about Oracle ZFS Storage is that the workloads are now coming to us. ZFS was architected to serve the majority of I/O from memory (you might even call it in-memory storage). We back that memory with a large flash cache. This leapfrogs the all-flash band-aid altogether. And in a sense, the VSAN approach concurs with this thought.

To wit, here's a picture of how a VSAN implementation needs to see storage (from this really good VSAN blog):

[Diagram: a VSAN disk group pairing a flash device with one or more conventional disks]
Each disk group must include at least one flash drive and at least one conventional disk. Moreover, the book explains that you need to be careful to provision such that the flash drive is big and fast enough to deliver most of the read I/O, because ordinary disks deliver 80-175 IOPS each, while flash drives deliver 5,000-30,000 IOPS each. This makes cache misses VERY expensive. So, the statement here is to make sure that you have enough cache, and it's implied that the OS knows what to do with it.
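To put rough numbers on that, here's a back-of-the-envelope model (a sketch with illustrative figures picked from the ranges above, not a VSAN sizing tool): if a fraction of reads hit flash and the rest fall through to disk, the average service time per I/O is a blend of the two media's rates.

```python
def effective_iops(hit_rate, flash_iops=20000, disk_iops=150):
    """Blended read IOPS for a flash cache backed by conventional disk.

    hit_rate: fraction of reads served from flash (0.0 - 1.0).
    flash_iops / disk_iops: illustrative per-device rates taken from
    the ranges quoted above (5,000-30,000 flash, 80-175 disk); the
    defaults are assumptions, not measured values.
    """
    # Average seconds per I/O, weighted by where each read is served.
    avg_time = hit_rate / flash_iops + (1.0 - hit_rate) / disk_iops
    return 1.0 / avg_time

# A 90% hit rate yields only ~1,405 effective IOPS; pushing the hit
# rate to 99% lifts that to ~8,608 -- which is why an undersized
# cache (frequent misses) is so punishing.
print(round(effective_iops(0.90)))  # 1405
print(round(effective_iops(0.99)))  # 8608
```

Note how the slow medium dominates: even a 10% miss rate drags the blended number down to within an order of magnitude of a bare disk.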

We agree. And so do all of these new guys promoting all-flash arrays for virtualization. But all-flash means that EVERYTHING has to be stored on flash disks, including the vast majority of data currently at rest. This scales well, but it gets pricey pretty fast. We think we've found a better way.

What if instead you had as much as 2 TB of DRAM in your system, and all of the hottest read data lived THERE?  Then, when the data started to cool a bit, you had a secondary flash cache with more than 12TB per system? And what if you'd been working on the algorithms to manage these caches for the better part of a decade? The logic is the same as what's being explained for VMware Virtual SANs: Fast media for hot data, cheaper media for "cold storage". 
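A toy model of that two-tier idea (a minimal sketch, not the actual ZFS ARC/L2ARC code; the class name and sizes are invented for illustration): hot entries live in a small fast tier, and evictions demote into a larger second tier rather than being dropped outright.

```python
from collections import OrderedDict

class TieredCache:
    """Two-level LRU cache: evictions from the fast tier (think DRAM)
    demote into a larger, slower tier (think flash) before falling
    out entirely. A simplification of the Hybrid Storage Pool idea."""

    def __init__(self, l1_size, l2_size):
        self.l1 = OrderedDict()  # fast tier, most-recently-used last
        self.l2 = OrderedDict()  # capacity tier
        self.l1_size, self.l2_size = l1_size, l2_size

    def get(self, key):
        if key in self.l1:
            self.l1.move_to_end(key)           # refresh recency
            return self.l1[key]
        if key in self.l2:                      # warm hit: promote to L1
            value = self.l2.pop(key)
            self.put(key, value)
            return value
        return None                             # miss: go to disk

    def put(self, key, value):
        self.l1[key] = value
        self.l1.move_to_end(key)
        if len(self.l1) > self.l1_size:         # demote coldest L1 entry
            old_key, old_val = self.l1.popitem(last=False)
            self.l2[old_key] = old_val
            if len(self.l2) > self.l2_size:     # L2 full: drop coldest
                self.l2.popitem(last=False)

cache = TieredCache(l1_size=2, l2_size=4)
for block in ["a", "b", "c"]:
    cache.put(block, block.upper())
# "a" was demoted out of the fast tier but is still cached in L2.
print("a" in cache.l1, "a" in cache.l2)  # False True
```

The design choice this illustrates: a miss in the fast tier still has a good chance of being a hit in the cheap-but-fast second tier, so only truly cold data pays the full disk penalty.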

So, bravo to VMware on the VSAN concept. And bravo to EMC for promoting the idea that virtualized workloads need better caching. We've struggled getting customers fully engaged in our Hybrid Storage Pool discussion, so having others out there telling the same general story can only help. Especially since we're better at it.

Happy reading!
Your OPN Communications team

Thursday Jan 30, 2014

The Results Are In! Achieving Enterprise Data Performance

As big data continues to infiltrate transaction systems, appliances and devices from a variety of applications – including social media – organizations are facing a number of challenges as they attempt to capture and utilize this data. While industry solutions offer new and denser forms of data storage, they don't get to the heart of organizations' chief issue: data must be managed and integrated closely into the business from the get-go.

How exactly do we do this? Good question.

We conducted a survey[1] on database growth with more than 300 data managers and professionals who are members of the Independent Oracle Users Group (IOUG). After reviewing the results, we took away the following top-line challenges related to managing and storing big data, and the opportunity for partners that lies within.

1. Business challenge: The need to reduce infrastructure costs. Organizations struggle as they attempt to manage large amounts of unstructured data without improvements and updates to their data environments.

2. Technical challenge: The need to improve performance and availability. Database administrators are mostly concerned with performance and consolidation efforts.

3. Application performance challenge: Survey results showed that the most critical performance issue is data growth outpacing storage capacity and the need to purchase additional hardware.

4. Data growth challenge: Half of our respondents stated that in order to combat and manage data growth they have to add more disk storage. Very few plan to use database level compression as a way to address growth.

Opportunity: Three words: Hybrid Columnar Compression (HCC). As you may have caught above, challenge #1 is at odds with challenge #3. How can a business minimize infrastructure costs while putting more hardware in place? It can't, really. Enter HCC, an Oracle Database feature supported by the Oracle ZFS Storage ZS3 Series but unavailable to third-party storage systems. HCC uses both row and columnar methods for storing data. By doing this, it reaps the benefits of columnar storage while avoiding the performance shortfalls of a pure columnar format, and it can dramatically improve storage efficiency. HCC compression ratios range from 6x to 15x, significantly reducing the storage capacity required while enhancing performance and cutting customers' storage costs.
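The arithmetic behind that claim is straightforward (a quick sketch using the 6x-15x ratios cited above; the 120 TB workload is a hypothetical figure, not a customer measurement):

```python
def compressed_tb(raw_tb, ratio):
    """Storage capacity needed after compression at the given ratio."""
    return raw_tb / ratio

raw = 120.0  # TB of uncompressed table data (hypothetical workload)
for ratio in (6, 15):
    needed = compressed_tb(raw, ratio)
    saved_pct = 100 * (1 - needed / raw)
    print(f"{ratio}x: {needed:.1f} TB needed, {saved_pct:.0f}% saved")
# 6x: 20.0 TB needed, 83% saved
# 15x: 8.0 TB needed, 93% saved
```

Even at the low end of the range, the same dataset fits in roughly one-sixth of the raw capacity, which is how the cost challenge and the growth challenge stop being at odds.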

Partners, it’s up to us to engage our Oracle Database customers with storage cost saving messages by raising awareness and adoption of HCC. Here are a few ways to do so:

  • Consult with customers: Run Oracle Advanced Compression Advisor to estimate potential storage savings, using the DBMS_COMPRESSION package included with Oracle Database 11g Release 2 and higher.
  • Use an HCC customer reference: This Starbucks case study is a great example. Architect, sell, and implement the ZFS Storage ZS3 Series.

Best of luck!

Joel Borellis is group vice president, Partner Enablement at Oracle. Monthly guest blogs such as this one are part of The VAR Guy’s annual sponsorship. Read all of Oracle’s guest blogs here.

[1] Achieving Enterprise Data Performance 2013 survey underwritten by Oracle Corporation and conducted by Unisphere Research, a division of Information Today, Inc.

Thursday Nov 29, 2012

Bring on the Cheer, Oracle’s Q3 is Here

November is long gone and December is near… this must mean OPN’s Q2 Winter Wrap-Up is here! Listed below are just a few of the highlights from Oracle’s past three months…

  1. Yet another successful Oracle OpenWorld 2012 and the launch of our first ever Oracle PartnerNetwork Exchange program! Get the recap.
  2. Our exciting Java Embedded @ JavaOne event. Get the low-down here!
  3. The debut of our new Oracle Cloud programs for partners, which have already created some awesome buzz in the Channel. Check out the CRN article, and don’t forget to watch the Cloud Programs Overview video and visit our OPN Cloud Knowledge Zone!
  4. On the product front, Oracle’s Sun ZFS Storage Appliance was awarded the 2012 Tech Innovator and Enterprise App Award by CRN. Read the full article.
  5. Oracle partner, Hitachi Consulting, reached OPN’s premier Diamond Level status. Read more.

Was Oracle part of your September, October or November highlights? If so, leave us a comment below, we’d love to feature your story! Also, don’t forget to share the love by re-tweeting this post on Twitter or “liking” this post on Facebook!

Stay Warm,

The OPN Communications Team 

Monday Oct 08, 2012

Showing ZFS some LOVE


L is for the way you look at us, and O because we're Oracle, but V is very, very extraordinary, and E, well that's obvious… E is because Oracle's new Sun ZFS Storage Appliance is Excellent, and here at OPN, we like to spell out the obvious!

If you haven’t already heard, the Sun ZFS Appliance has “A simple, GUI-driven setup and configuration, solid price-performance and world-class Oracle support behind it. The CRN Test Center recommends the Sun ZFS Storage”. Read more about what CRN said here.

Oracle's Sun ZFS Storage Appliance family delivers enterprise-class network attached storage (NAS) capabilities with leading Oracle integration, simplicity, efficiency, performance, and low TCO. The systems offer an easy way to manage and expand your storage environment at a lower cost, with more efficiency, better data integrity, and higher performance compared with competitive NAS offerings. Did we mention that setup, including configuration, will take you less than an hour, since it all comes in one box and is so darn simple to use?

So if you L-O-V-E what you're hearing about Oracle's Sun Z-F-S, learn more by watching the video below and visiting any of our available resources.

It Had to Be You,

The OPN Communications Team