A not-so-fabulous new release

According to recent blog entries, Netezza is claiming a 10x – 15x increase in price/performance. At first glance that sounds pretty impressive. According to the bloggers, the gain supposedly comes from a 3x cut in price per TB combined with a 3-5x performance improvement. Looking more closely, however, reveals improvements that are far from impressive in a generally not-so-fabulous refresh.

Performance – better, good or not so good…?

Netezza's new system shows only modest performance improvements and is still slower than the Oracle Database Machine introduced nearly a year ago. The new system has an I/O throughput of 10 GB/sec … notably underwhelming compared to the 14 GB/sec delivered today by an HP Oracle Database Machine.

Netezza markets their new system as delivering 145 TB/hr … assuming a compression ratio of 4:1 [this one is interesting, more to come], so the net result is an I/O rate of roughly 10 GB/sec. Bearing in mind that their former system was anecdotally discussed on blogs at around 7.5 GB/sec of I/O throughput, I only see a raw performance improvement of 33% at best. Not that it really matters, because an Oracle Database Machine is 40% faster today than Netezza's upcoming new product, not to mention that Netezza themselves see compression rates of 2 – 2.5x on their current systems. The 4x compression is on the roadmap… so maybe you won't even make it to that underwhelming performance number?
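Just to show the arithmetic behind those figures, here is the back-of-the-envelope check using only the numbers quoted above (with 1 TB taken as 1,000 GB):

\[
\frac{145{,}000\ \text{GB/hr}}{4 \times 3600\ \text{s/hr}} \approx 10\ \text{GB/sec},
\qquad
\frac{10\ \text{GB/sec}}{7.5\ \text{GB/sec}} \approx 1.33,
\qquad
\frac{14\ \text{GB/sec}}{10\ \text{GB/sec}} = 1.4
\]

In other words: roughly 10 GB/sec of effective scan rate, about a 33% improvement over the anecdotal 7.5 GB/sec, and still 40% behind the 14 GB/sec of the Database Machine.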

I guess there is no better “pat on the back” for Oracle's Database Machine than the fact that it is already faster than Netezza's new system.

Lower Cost – really?

Cheaper sounds good though, right? I think it does, and Netezza is claiming a large reduction in price as one of the biggest benefits of their new system. Unfortunately, there does not seem to be a price cut at all: the price of the new system seems to be about the same as the price of the old system. Huh? Didn't you say price cut?

What Netezza is doing is simply upgrading their disk drives from 400GB to newer 1TB technology (the SATA version of the Oracle machine has had 1TB drives for about a year now). Keeping the old price for the entire box while sticking more storage into it creates the impression of a price reduction if you look at $$ per TB. Naturally, as disk drive vendors introduce bigger disks, the price per TB decreases if you keep the overall price the same. That is not innovation; the same benefit applies to everyone in the industry, it's just a matter of WHEN a customer gets to benefit from it.
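To put an illustrative number on it (a purely hypothetical example, assuming the rack price P stays the same while each drive grows 2.5x, from 400GB to 1TB):

\[
\frac{(\$/\text{TB})_{\text{new}}}{(\$/\text{TB})_{\text{old}}}
= \frac{P / (2.5\,T)}{P / T}
= 0.4
\]

Bigger drives alone cut the price per TB by roughly 60%, which is in the same ballpark as the claimed 3x reduction.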

Interestingly enough, however, Netezza's per-TB pricing approach enables them to control when to pass lower hardware costs on to customers... oh, and support is probably a percentage of the total price, not of the so-called lower price…

Open Standards

Netezza is saying that their new systems are based on open, industry-standard components – IBM blade technology – but this appears to be only half of the story.

Netezza's architecture still uses proprietary hardware components through its reliance on FPGAs (Field-Programmable Gate Arrays). These FPGAs are apparently running non-open Linux and Netezza's proprietary code.

The slow switch to commodity hardware components, however, looks like Netezza's silent acknowledgment that Oracle's approach – using open standards – is advantageous. But they are not there yet, and it is going to be interesting to see what they do with their FPGAs going forward...

Does Netezza catch up on the software front?

While the new release brings Netezza to a more modern, and slightly more open, hardware platform, it does not appear to address any of Netezza's major shortcomings in software:

Netezza continues to lack enterprise availability features like active disaster recovery systems, or the ability to tolerate node failures without downtime

Netezza continues to lack enterprise security features like encryption and virtual private database

Netezza continues to lack robust application development features like rich SQL extensions and a procedural language

Netezza continues to lack fundamental database optimization techniques, such as indexes

Netezza continues to lack advanced business intelligence capabilities like built-in OLAP and data mining

Looking at Netezza's apparent weaknesses on the software side and the better performance characteristics of the HP Oracle Database Machine, the balance is still clearly in favor of the Database Machine.

No matter what angle you choose to look from:

[nz_table.JPG: side-by-side comparison table of per-rack specifications for the Oracle Database Machine and Netezza's new system]

* Oracle Database Machine SAS
** A storage processing unit consists of a CPU and an FPGA

Summary

This looks like a disappointing new release for Netezza, unless they have something else to say tomorrow... something 'big', as promised on their web page. No major new features have been announced so far. It looks like the new release is purely a migration to new hardware, with incremental gains due to newer hardware components. Meanwhile, the Oracle Database Machine continues to exceed Netezza's performance, and is poised to benefit from significant enhancements upcoming in Oracle Database 11gR2 (more to come soon). Netezza is touting lower prices rather than better performance. This is a clear shift in strategy, away from innovation and towards pure price competition. Is this the beginning of the end for Netezza?

Comments:

You list Netezza as having 8 DB cores per rack. According to the information I have seen published so far, the rack has 12 S-Blades, and each S-Blade contains 2 quad-core Intel CPUs. The majority of the database processing is performed on the S-Blades, so that is 96 CPU cores per rack, not 8, and of course that's not including the 96 FPGAs in the rack as well.

Posted by Harry on August 03, 2009 at 07:04 PM PDT #

I found that this posting was so fraught with misconceptions about what really matters in data warehouse platforms along with downright errors that a simple brief comment didn’t feel sufficient. Instead, to help correct at least some of those misperceptions, I have posted a response on my “Thoughts from Inside the Box” blog and I would encourage readers to have a visit to get the record set straight. --Phil Francisco VP, Product Management & Marketing Netezza Corporation

Posted by Phil Francisco on August 04, 2009 at 06:33 AM PDT #

Hi Harry, we are trying to confirm what is actually in the box. Unfortunately, there is not much we can get out of Netezza at their launch. We are also still trying to figure out whether there is something like InfiniBand in the box. It seems that storage and compute servers are now separate, very much like in a DB Machine. More updates to come - undoubtedly. JP

Posted by Jean-Pierre on August 04, 2009 at 08:20 AM PDT #

It is only good Blog sportsmanship to post rebuttals. http://www.netezzacommunity.com/blogs/nzblog/2009/08/04/a-fud-machine-in-overdrive As a customer of both Netezza and Oracle I would like to see the discussion continue in blog space. -Dave

Posted by Dave Anderson on August 04, 2009 at 11:51 PM PDT #

Hi Dave, fully agree... I was traveling so I did not get to this right away. JP

Posted by Jean-Pierre on August 05, 2009 at 12:07 AM PDT #

You write "Netezza continues to lack fundamental database optimization techniques, such as indexes", but they use something called ZoneMaps instead. I think the goal is to alleviate the burden of constantly creating and maintaining indexes everywhere. Clinging so strongly to the age-old index idea seems counter-productive. Also, what exactly does "These FPGAs are apparently running non-open Linux" mean?

Posted by James on August 05, 2009 at 11:40 AM PDT #

Hi James, Well, a zone map is probably much closer to a range partitioning scheme in, for example, Oracle. It works well with data that can be sequentially grouped: great for dates, not so much for random text, of course. You can read some of this here: http://www.dbms2.com/2006/09/20/netezza-vs-conventional-data-warehousing-rdbms/ Netezza themselves call zone maps "anti-indexes". In other words, you don't worry about finding the record, you worry about eliminating where not to look. Oracle calls that partition elimination. If I need a certain set of rows within a range of dates (the classic example) I go after only those partitions and ignore the others. By eliminating those partitions you reduce the data to scan. So zone maps are not indexes, they are comparable to range partitions. We are not talking about strongly clinging to something... we are solving two problems using different ways to access data. Consider a customer table: you can index, for example, the customer id or the customer name. Now if you need that one customer, an index will give you that one customer. A zone map will work on the customer number and give you a set of blocks that may hold this customer number; you then have to go through those blocks and flip through whatever is in them. Ask for the customer name, and the zone map will have no impact (you can't quite zone on random names). So, if your business users always ask this - and they will when you use this data to feed CRM call center screens - you do want the index. If you never ask this, you do not create the index. Point is, if your system cannot solve the problem, you have a problem. Hope this makes sense... JP
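To make the comparison a bit more concrete, here is a minimal Oracle-style sketch; the SALES table, its columns and the partition names are made up purely for illustration:

```sql
-- Range partitioning by date: a query with a date-range predicate only
-- touches the partitions whose ranges overlap it (partition elimination),
-- which is roughly the effect a zone map has on naturally ordered data.
CREATE TABLE sales (
  sale_id     NUMBER,
  customer_id NUMBER,
  sale_date   DATE,
  amount      NUMBER
)
PARTITION BY RANGE (sale_date) (
  PARTITION sales_2009_q1 VALUES LESS THAN (DATE '2009-04-01'),
  PARTITION sales_2009_q2 VALUES LESS THAN (DATE '2009-07-01'),
  PARTITION sales_2009_q3 VALUES LESS THAN (DATE '2009-10-01')
);

-- Only sales_2009_q2 is scanned; the other partitions are eliminated.
SELECT SUM(amount)
FROM   sales
WHERE  sale_date BETWEEN DATE '2009-04-01' AND DATE '2009-06-30';

-- The "find me that one customer" lookup is a different problem: an index
-- pinpoints the matching rows directly, which neither a date-based zone map
-- nor date partitioning can do.
CREATE INDEX sales_cust_idx ON sales (customer_id);

SELECT *
FROM   sales
WHERE  customer_id = 4711;
```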

Posted by Jean-Pierre on August 07, 2009 at 05:26 AM PDT #

I have spent some time digesting the Netezza posting, so here are some thoughts. First of all, the non-open Linux comment was a bit of a goofy one… Mea culpa… Kick me! However, while there were a lot of words in the NZ posting, the key point – which was not disputed – is that the current Oracle Database Machine is faster than the new Netezza machine. Customers are saying this publicly (here: http://www.oracle.com/us/corporate/press/020542) and we are seeing this in benchmarks. Rather than just switching to a “number of rows” discussion, I think we should have some more discussion around the 14GB/sec metric and what it really means (so, more to come). On the overall performance claims, I think this posting (http://www.dbms2.com/2009/08/08/sorting-out-netezza-and-oracle-exadata-data-warehouse-appliance-pricing/) says it all.

The other thing that struck me in the Netezza post is the lack of discussion around any significant software enhancements in the new Netezza product. While this new system shows an architecture and hardware revamp (I still don't understand why the new hardware does not run on the latest Intel Nehalem chips), the software remains mostly unchanged. New compression is promised, but the TwinFin still lacks functionality like enterprise security, mixed workload support (think consistent reads while writing into the same tables) and many other features required in both a data mart and an enterprise data warehouse scenario.

On the price/performance metric, well, yes, the overall cost per TB went down, but that really is all about larger disks. What is very interesting is that the overall value proposition for Netezza has gone from pure performance to price/performance. To me this is a clear shift in strategy driven by competition. Netezza has realized that competing on performance is no longer feasible. For a small company, competing on price does however pose a risk, as it squeezes margins. The other problem for Netezza is that the Oracle Database Machine is still very competitive when looking at price/performance metrics.

Posted by Jean-Pierre on August 13, 2009 at 02:19 AM PDT #

Hi folks, I am working on a web application. Some database tables in this application are very large and the data is increasing day by day. We are using Oracle 10g. Initially we thought of going for table partitioning (an Oracle 10g feature), but due to some license and feasibility problems we had to drop that plan. Now we are thinking of creating backup tables every year to back up data, but due to constraints and the application's flow, this is causing several problems. Is there any other way to do this? Thanks, Satish

Posted by Satish on August 18, 2009 at 06:29 PM PDT #

Satish: You definitely want to use partitioning (which was actually introduced in v8.0). Creating dynamic SQL to access multiple tables (which are really like partitions) is error prone and difficult to maintain at best. As far as cost goes (since partitioning is a licensed option), consider how much your company will spend trying to develop additional code and application functionality, as well as the relatively expensive ongoing maintenance cost for whatever solution you come up with. I think that cost will be quite significant (especially the maintenance cost year-over-year) and will make the case for purchasing partitioning much clearer and easier. For example, partitioning code is maintained by Oracle and improved with each release. Your custom code is maintained by you and is only enhanced with additional cost and effort by you. Dan
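For illustration only, here is a minimal sketch of what yearly range partitioning could look like in a scenario like Satish's; the ORDERS table and its columns are hypothetical:

```sql
-- One partition per year; old years can be dropped, truncated or moved
-- to cheaper storage without changing the application's SQL.
CREATE TABLE orders (
  order_id    NUMBER,
  customer_id NUMBER,
  order_date  DATE,
  status      VARCHAR2(20)
)
PARTITION BY RANGE (order_date) (
  PARTITION orders_2007 VALUES LESS THAN (DATE '2008-01-01'),
  PARTITION orders_2008 VALUES LESS THAN (DATE '2009-01-01'),
  PARTITION orders_2009 VALUES LESS THAN (DATE '2010-01-01')
);

-- Aging out a year becomes a metadata operation rather than a manual
-- copy into a separate backup table:
ALTER TABLE orders DROP PARTITION orders_2007;
```

(In 11g, interval partitioning can even create new yearly partitions automatically, but the plain range scheme above already works in 10g.)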

Posted by Dan Norris on August 19, 2009 at 09:58 AM PDT #
