
Recent Posts


The cloud is coming, the cloud is coming!

If you are an Oracle systems ISV, you might be feeling a bit perplexed over this seemingly unavoidable windstorm – the Oracle Public Cloud is imminent, how does the Oracle SPARC Cloud fit into it, and why should an ISV even care? The folk tale Henny Penny comes to mind, where Chicken Little cries, "The sky is falling, the sky is falling!" in the mistaken belief that disaster is imminent. Just recapping a few comments made by CEO Mark Hurd at last month's Oracle OpenWorld is enough to get that mindset started. In fact, all the Oracle keynote presentations carried a common theme: the conversion to the cloud is one of the most significant IT transformations in history.

Mark Hurd, in outlining "CEO Priorities," made sure his audience was clear on the fact that "Oracle's growth rate in cloud [82%] is higher than other cloud vendors." He further stated his case, predicting that by 2025, "80% of IT budgets will be spent on cloud, not traditional systems (IT)," and "the number of corporate data centers will drop by 80% — as the workloads move to the cloud."

Well, before the sky does fall (for in this case, the cloud IS coming...), let's review 3 questions ISVs should be asking about this dramatic change in their business model of selling software to the masses.

Question #1 – What's in it for ME?

In a word, OPPORTUNITY, which Webster defines as "an amount of time or a situation in which something can be done." If you are an organization making and selling software to run on Oracle systems (x86 or SPARC) with Oracle Solaris 11 (public list here), you should care very much about the market opportunity this represents. Today there are hundreds of thousands of SPARC servers around the world running mission-critical enterprise workloads. SPARC Solaris systems don't break and don't just go away – they keep on running and running. If Mark Hurd is correct, 80% of these workloads will be running on the cloud by 2025. There is no need to re-platform if you can run these same mission-critical apps on the exact same gear in Oracle's public cloud. Whether that is for dev/test, production, or a disaster recovery site, moving those workloads to dedicated SPARC compute in the Oracle Public Cloud just got a whole lot simpler and cheaper. And by the way, that's x86 or SPARC -- you choose, same price either way. Both are less than AWS! Oracle is the only vendor offering that option. And yes, we have customer demand filling up the pipeline now.

Oracle SVP Dan Miller estimated in his Oracle OpenWorld keynote that "today" approximately 5% of enterprise workloads are on the cloud. Going from 5% to 80% is another way for ISVs to spell OPPORTUNITY.

Question #2 – What is Oracle's SPARC Model 300 and why does it matter to an ISV?

Oracle's SPARC cloud is available today as a dedicated compute infrastructure-as-a-service (IaaS) offering called the SPARC Model 300 (video: SPARC in the Oracle Public Cloud). This offering provides GINORMOUS advantages to our ISVs over competitive IaaS offerings. Here are four things to consider:

1) Security – security for the cloud needs to be FAST and ALWAYS ON. Hands down, the Model 300 will deliver public cloud users an unprecedented level of security -- with virtually no performance impact. If this sounds too good to be true, check out these blogs on how the latest SPARC processor, the SPARC S7, compares to the latest x86 processors, which happen to be running the IaaS offerings of our competitors, Microsoft Azure and the AWS cloud:
 - SHA Digest Encryption: SPARC S7 Performance, Beats Intel E5-2699 v4 Per Core Under Load
 - AES Encryption: SPARC S7 Performance, Beats Intel E5-2699 v4 Per Core Under Load
 - Real-Time Enterprise: SPARC S7-2 Advantage Per Core Under Load Compared to 2-Chip x86 E5-2699 v3
 - SPECjEnterprise2010: SPARC S7-2 Secure and Unsecure Results

2) Java – Java is the force behind many of the applications that power cloud computing. This includes mission-critical enterprise apps, where SPARC has been a mainstay forever. Since Oracle acquired the rights to Java with its purchase of Sun Microsystems, it has been investing heavily to ensure that its SPARC Solaris systems are the best platform in the world to deploy Java (JVM) applications in the cloud. So it should be no surprise to hear that SPARC systems are faster per core running Java workloads than the latest x86 compute systems, BY FAR. Questions? See this blog: SPARC S7 & Java – Wake Up and Smell the Coffee!

3) Analytics – the SPARC Model 300 has revolutionized the approach to big data analytic workloads with its hardware-based Data Analytics Accelerator (DAX) co-processors, delivering 3x–12x faster time to insight. See this 5-minute video to better understand how Oracle SPARC DAX changes the game in analytics. Need an example? Apache Spark is an extremely popular open source analytic workload that clearly demonstrates the value of bringing DAX co-processors into the equation (see Apache Spark and DAX), resulting in a 6x performance improvement.

4) Deep innovation – CTO Larry Ellison has been beating the drum for quite a while now that Oracle is serious about leveraging hardware and software engineering innovation to deliver a radically better cloud-based solution. By innovating across the complete software stack (which only Oracle offers) together with the compute infrastructure, it is hard to argue when he says that Oracle is uniquely positioned to deliver a public cloud for all workloads, not just the Oracle software stack. Oracle is aiming to provide a cloud solution with the lowest total operating cost, mission-critical reliability, the fastest performance, built on open standards, with compatibility to move between cloud and on-premises, and with built-in, always-on security. If I could just say that a little differently, in Mr. Ellison's terms: game, set, match. Oracle plans to win this game.

Question #3 – What should an ISV do to position for success in the Oracle SPARC Cloud?

Here are 3 immediate actions for ISVs to get started:

1) Register and publish your Oracle Cloud-compatible applications on the Oracle Cloud Marketplace (example: Infosys). There are 1,400 partners today with 3,000 applications posted there. By doing that, you immediately gain visibility to Oracle's 400,000 customers. It's hard to go wrong with that.

2) Get your developers on board NOW so they can develop better software, faster. First, they can gain immediate access to the latest SPARC technology on Oracle's Software in Silicon Cloud – swisdev.oracle.com. This secure cloud platform provides our partners with free access to ready-to-run virtual environments and the Software in Silicon features of Oracle SPARC M7 and S7 systems. Second, they should come and play at Oracle's Developer Cloud Services to gain hands-on experience in a fully provisioned environment to code, test, deploy, and manage applications in the Oracle cloud.

3) Contact our Cloud Applications Engineering team for a free discussion of your needs for the Oracle SPARC Cloud. Simply send an email to ISVSupport_WW@Oracle.com and we will get back to you within 48 hours to schedule a time.

We have a global team of cloud engineers who would love to hear about your challenges with moving to the cloud. We can discuss licensing, metering, networking, security, quality of service, data access, configurations, migration, and more. We look forward to hearing from you -- don't wait for the sky to fall!


Don’t Miss Oracle OpenWorld 2016!

Oracle OpenWorld 2016, held September 18-22, is not something an Oracle systems partner should miss.

If you haven't already registered, and cost is a concern, you can still purchase a Discover Pass for a mere $75 that gets you access to all 5 days of keynote presentations, all Exhibition Halls and DEMOgrounds, and attendance at our exclusive partner and customer reception Tuesday night (RSVP here). All this while attending one of the biggest and best events on earth in one of the truly great cities in the world. It doesn't get much better than that!

Here are 5 additional reasons you don't want to miss out:

#1 -- Find out how Data Analytics Accelerator (DAX) technology is setting the big data world on fire. There will be a number of DAX-focused activities at OpenWorld 2016 (search the session catalog on "data analytics"), or stop by our booth at Moscone South 637 to see how DAX APIs are accelerating application performance (see this blog for more info).

#2 -- The Oracle Cloud Infrastructure Platform [GEN7610] - Don't miss the chance to hear from John Fowler (EVP, Systems) and his infrastructure thought leaders for a glimpse into the next decade of enterprise computing (Monday @ 1:45pm @ the Oracle Cloud Plaza on Howard Street).

#3 -- Larry Ellison keynote - There's no excuse for attending Oracle OpenWorld and not seeing Larry Ellison on Sunday evening as he alone opens the kimono on the future product strategy for Oracle. Wealth aside, you have to be a bit fascinated by the fact that at 72 years young he continues to be extremely involved in Oracle's strategic direction as the reigning Executive Chairman and CTO of the company.

#4 -- Next Generation Solaris Technology - See a preview of the next release of Oracle Solaris and all the breakthrough capabilities that will come with it. On Tuesday, September 20 at 11am at the Marriott Marquis hotel, Markus Flierl, VP, Software Development, will provide an overview of what's in the new release as well as the investments beyond [more here].

#5 -- BEERS with ENGINEERS - Each afternoon (Mon-Wed) you can wander over to the Infrastructure showcase area [see map - nearly 10,000 sq. ft. of area] to grab a FREE BEER with Oracle engineers and discuss the technical aspects of Oracle's engineered systems, servers, storage, and networking development. I hear there are even some nice beer mugs being passed out.

Looking forward to seeing you there!


SPARC S7 & Java – Wake Up and Smell the Coffee!

Oracle's recently announced SPARC S7 server (Next Gen Oracle SPARC) has gained BIG news for its screaming performance and cloud infrastructure capabilities, while offering x86 commodity cost points (Not KIDDING!). But flying a bit more under the radar with SPARC S7, at least for our ISVs, is what happens when you run Java applications on it. Let's just say it's worth pouring a cup of Java and reading on.

Here are 7 reasons to run your Java applications on SPARC S7:

1. The SPARC core is architected, tested, and tuned to maximize performance on Java

The SPARC S7 processor has a ton of Oracle intellectual property in it to make it the best platform for your Java applications. Oracle's relentless investment in the SPARC processor (6 NEW processors in 6 years, with more to come) is part of Larry Ellison's strategy to ensure his large enterprise customers can benefit by running their software applications on SPARC Solaris systems. For example, the SPARC processor design handles a high level of database transactions, huge memory bandwidth requirements, and very low latency -- optimal for the most demanding Java workloads. In fact, SPARC S7 systems are faster per core running Java workloads than the latest x86 compute systems, BY FAR. Need some proof?
 - Go to blogs.oracle.com/BestPerf and search on "S7"
 - Read this blog (3rd paragraph): New SPARC S7 Servers for Cloud & Scale-Out

2. Oracle Solaris 11 drives enterprise Java to new heights

Oracle Solaris 11 has been designed for enterprise Java applications and cloud deployments. Important Solaris technologies that boost Java application performance include virtualization, dynamic threading, large page support, accelerated cryptography, Software Defined Networking (SDN), and more. For example, on SDN, Oracle Solaris 11 has a socket-based API for Application-Driven SDN. This API has been included in the JDK, and can help applications set service-level properties so they control SLAs from within the application itself. And with additions to the JDK in Java 8 (see JDK 8 install on Solaris), such as Lambda Expressions and the new Streams API, developers can now more easily write high-performance Java applications that are multi-threaded and take advantage of the performance of SPARC multi-core platforms (see the sketch at the end of this post). And that is just skimming the foam off the top of your latte -- there is a lot more behind the synergies of Solaris and Java.

3. Java applications ROCK on S7 -- kicking butt on x86

The S7 offers industry-leading performance and scalability on Java, which saves customers real $$. Better core efficiency than x86 systems lowers the cost of running Java applications and databases by reducing licensing costs. This means you have to buy FEWER CORES than you do with x86! Recent public benchmark results have demonstrated the performance advantages of SPARC S7 and Java versus x86. Check this out -- you will see that SPARC chips keep improving per-core performance, while overall x86 core efficiency has stalled:
 - An S7 core is 1.7x faster than Intel x86 on the SPECjEnterprise2010 benchmark. [details]
 - An S7 core is 1.5x to 1.9x faster than Intel x86 on the SPECjbb2015 benchmark. [details]
 - An S7 core is 1.5x to 2.1x faster than Intel x86 on the SPECjbb2015 "distributed" benchmark. [details]

4. Hands down, Big Data analytics applications will run BEST on SPARC S7

The Data Analytics Accelerator (DAX) capability in SPARC S7 (and M7) provides hardware acceleration to process a variety of big data analytic workloads on the cloud, while leaving the CPU cores free to focus on other tasks. This 5-minute video shows how DAX works, but the net-net is that hardware acceleration of cloud analytic applications on S7 can deliver 3x-12x faster time to insight. Here's one ISV who's drinking the Java:

QuartetFS is pushing the limits of Java to take advantage of SPARC systems for analytics. They provide an in-memory computing analytical platform which delivers interactive calculations for a new generation of credit and market risk applications, and here's what they had to say about S7: "We have been testing the SPARC M7 and S7 Data Analytics Accelerator (DAX) open APIs and initial results show our queries running an amazing 6-8 times faster. These results were achieved while we had up to 8x lower CPU usage, as the DAX engines offload the cores and free them up to do other work." There are similar results with the open source cluster computing framework Apache Spark (here).

5. SPARC systems have unrivaled tools for Java developers

Ask our Java developers what matters and they will quickly set you straight: it is about the tools available for efficient and effective development of the application. Without them, the coffee never gets out of the cup (so to speak) -- or at least not for long after that first sip. Managing the Java development process and maintaining the underlying infrastructure for Java services is exactly what the #1 development platform for Oracle SPARC systems does. Meet Oracle Developer Studio. Specific to Java development, developers can get increased observability of their applications using a combination of Oracle Developer Studio Performance Analyzer and Java Mission Control/Flight Recorder. By using integrations with Oracle Solaris DTrace and enhanced DTrace probes in the Oracle Java VM, performance data can be visually analyzed for hot spots, thread deadlocks, and the Java heap, including garbage collection, right down to the hardware. On top of that, optimizations for the SPARC processor have been made to the JVM itself, including Java security performance improvements, Java just-in-time compiler code generation, and enhancements in Java scalability.

6. SPARC S7 is the MOST SECURE platform option, with SCREAMING encryption performance

We hear about customers delaying the decision to turn on application encryption because they are worried about the performance impact on the application. That is dangerous... The S7 processor has the fastest cryptographic acceleration in the industry, delivering end-to-end data encryption and secure transactions with near-zero performance impact to the Java application. The incredible efficiency of this design is that the cryptographic accelerators can rapidly process data for encryption while the S7 processor cycles remain available for use by the database and other applications. This 4-minute video demonstrates how this works. Here's more data to show you that we really aren't kidding about this:
 - SHA Digest Encryption: SPARC S7 Performance, Beats Intel E5-2699 v4 Per Core Under Load
 - AES Encryption: SPARC S7 Performance, Beats Intel E5-2699 v4 Per Core Under Load
 - Real-Time Enterprise: SPARC S7-2 Advantage Per Core Under Load Compared to 2-Chip x86 E5-2699 v3
 - SPECjEnterprise2010: SPARC S7-2 Secure and Unsecure Results

In addition, the SPARC S7 and M7 processors include the Silicon Secured Memory (SSM) feature, which safeguards against invalid or stale memory references and buffer overruns. This 4-minute video demonstrates its capabilities. Here is what AsiaInfo, a leading telecom network and software solutions provider in China, had to say about SSM: "Our solutions are multi-threaded, memory intensive and response time sensitive and demand high performance. Working with Oracle and using the Silicon Secured Memory feature on SPARC M7 and S7 we have been able to shorten time to find and fix bugs by 3 days. This is a huge benefit as detecting and fixing memory access issues is normally a very difficult and time consuming process."

7. Java (JVM) is the language of the cloud -- and the design target for SPARC

Today, Java and the Java Virtual Machine (JVM) are the invisible force behind many of the applications that power cloud computing (see chart below). This includes mission-critical applications where performance, scale, and ease of management are vital requirements. The design target of SPARC S7 is scale-out processing in the cloud. And since Oracle acquired the rights to Java with its purchase of Sun Microsystems, it should come as no surprise that it is investing heavily to ensure that Oracle's SPARC Solaris systems are optimized to be the best platform in the world to deploy Java (JVM) applications in the cloud.

More coffee??
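P.S. Want to taste that Java 8 point from reason 2 in code? Below is a minimal, self-contained sketch -- the class name, document list, and sample strings are made up for illustration -- of a parallel word count. One parallelStream() call replaces the recursive task classes a Java 7 fork/join version needs, and the runtime fans the per-document work out across all available SPARC cores.

    import java.util.Arrays;
    import java.util.List;

    public class ParallelWordCount {

        // Count occurrences of 'word' across documents. parallelStream() splits
        // the per-document work across cores via the common fork/join pool.
        static long countWord(List<String> documents, String word) {
            return documents.parallelStream()
                    .mapToLong(doc -> Arrays.stream(doc.split("\\W+"))
                            .filter(word::equalsIgnoreCase)
                            .count())
                    .sum();
        }

        public static void main(String[] args) {
            List<String> docs = Arrays.asList(
                    "the quick brown fox", "the lazy dog", "the end");
            System.out.println(countWord(docs, "the")); // prints 3
        }
    }

On a many-core S7, this style of embarrassingly parallel scan is exactly the work that scales with cores -- see the Java 8 Parallel Streams post at the bottom of this page for the full word-search example.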


SPARC M7 [and now S7]: Are You Kidding Me!?

Our ISV partners are beginning to ask us if we ARE kidding, as we continue to produce a plethora of evidence demonstrating the performance, security, throughput, and many other advantages of Oracle's SPARC M7 systems, including the newly announced SPARC S7 systems. Hearing things like 2x more cores, 2x more threads, 4x larger cache per core, 4x more I/O bandwidth... the numbers can tend to lose their meaning after a while. Does combining software and silicon really amount to that big of a difference!? And just what do those "x" factors equate to when it comes to real-world application of these new SPARC M7/S7 systems? We'll let you be the judge -- here is a compilation of the evidence we have to date.

Analyst Reviews:
- Edison: Security & Performance Advantages with Oracle Software in Silicon
- Enterprise Strategy Group: Oracle M7 Enhances CPU-level Security
- ESG: I May Be Losing My Mind – I'm Believing In Oracle Hardware
- Forbes: The Breakthrough Technology That Will Turbocharge Big Data And Cloud Computing
- Forrester: Oracle Delivers "Software on Silicon"
- IDC: What If Oracle Is On To Something?
- Ovum: Oracle's latest Sparc refresh puts software in silicon
- SiliconANGLE: Oracle debuts first systems with 10-billion-transistor SPARC M7 chip
- The Next Platform: Oracle Takes On Xeons With SPARC S7
- Wikibon: Software-in-Silicon Drives Oracle SPARC M7 Processors

World Records:
- Infosys Finacle Sets a New Record on Oracle SuperCluster M7 (Press Release)
- SAP Two-Tier Standard Sales and Distribution SD Benchmark: SPARC M7-8
- Siebel PSPP: SPARC T7-2 Beats IBM
- SPECvirt_2013: SPARC T7-2 for Two- and Four-Chip Systems
- SPECjbb2015: SPARC T7-1 for 1 Chip Result
- 20 World Record Performance Benchmarks

ISV Video Testimonies:
- Siemens Eyes Dramatic Increase in Speed of SPARC M7 Servers
- MSC Software Sees Amazing Throughput with Oracle's SPARC T7
- The Advantage of Running Temenos on Oracle Engineered Systems
- Quartet FS In-Memory Analytics Java Advantage on SPARC

ISV Evidence:
- Unbeatable Scalability of SAP ASE on Oracle M7
- SAS and Oracle SPARC M7 Silicon Secured Memory
- Amazing Results on SPARC T7 by MSC Software
- SPARC T7 - Minutes to Seconds!
- Siemens Teamcenter on SuperCluster M7 with 10,000+ Concurrent Users
- SPARC S7 & Java - Wake Up and Smell the Coffee
- IBM Informix Server 2X Faster Per Core on SPARC S7 Than x86

Customer and Partner Quotes (in Press Release):
- AsiaInfo: "able to shorten time to find and fix bugs by 3 days"
- B&H Photo Video: "run 83x faster on SPARC T7"
- BPC: "excited to see dramatic performance increases"
- Capitek: "the only effective method of protection against dangerous programming vulnerabilities"
- Informationsverarbeitung für Versicherungen GmbH (ivv): "will dramatically increase the speed of services"
- MSC Software: "able to deliver better core-to-core throughput than an Intel Xeon X5 v3 server"
- QuartetFS: "... our queries running an amazing 6-8 times faster"
- SAS: "detected difficult to find run-time errors far more quickly than other products"
- Siemens PLM Software: "the new Oracle SPARC M7 servers showed dramatic performance improvements"
- Software AG: "achieved an amazing 2.8X performance increase"
- University Hospitals Leuven: "SPARC is the only suitable platform that meets our application needs"

Customer References:
- University Hospitals Leuven Provide Electronic Health Records in Real Time
- B&H Photo Experiences up to 83 Times Faster Results with Oracle SPARC T7 Servers (video)
- ivv Runs Applications 1.8x Faster, Cuts Compliance Reporting Overhead 10x

Why not TRY OUT M7 and see for yourself? Oracle now offers partners, customers, and university researchers access to a Software in Silicon Cloud, which provides developers a secure and ready-to-run virtual machine environment to install, test, and improve their code on a SPARC M7/S7 system running Oracle Solaris. Try it -- we think you'll agree. We are NOT kidding around!


SPARC Cryptographic Accelerators — Securing the Cloud

The significant volume of data stored around the globe presents a massive target for today's increasingly sophisticated IT hackers. Securing this data represents one of the largest challenges facing businesses today. Check out map.norsecorp.com for an interesting view of the frequency with which these attacks may be taking place. Not a pretty picture...

As businesses move rapidly to virtualized cloud computing infrastructures, many organizations are hesitant to move their strategic enterprise workloads over due to concerns about the security of that data. Today's standard encryption models fall far short -- and often enterprises are not willing to pay the performance penalty for turning on encryption.

Enter the SPARC M7/S7 Integrated Cryptographic Accelerators. Oracle has taken a very different approach to this problem with the SPARC processor.

The SPARC M7 processor has 32 integrated cryptographic accelerators on board. Think of them as mini data traffic cops who only do encryption. There are as many crypto accelerators as there are CPU cores on the M7/S7 chip. The incredible efficiency of this design is that the cryptographic accelerators can rapidly process data for encryption while the SPARC M7/S7 processor cycles remain available for use by the database and other applications. These traffic cops don't care about what else is going on; they only care about processing encrypted data. FAST.

As a result, each processor core contains the fastest cryptographic acceleration in the industry, allowing IT organizations to deliver end-to-end data encryption and secure transactions with near-zero performance impact. And no additional hardware or software investment is required. Data encryption can be enabled by default, without compromise in performance, using wide-key cryptography accelerated in hardware.

Want to see how this works? Watch this 4-minute video of the SPARC M7/S7 Integrated Cryptographic Accelerators.

Need evidence?
- AES Encryption: SPARC S7 Performance, Beats Intel E5-2699 v4 Per Core Under Load
- SPECjEnterprise2010: SPARC S7-2 Secure and Unsecure Results
- SHA Digest Encryption: SPARC S7 Performance, Beats Intel E5-2699 v4 Per Core Under Load
- SPARC M7 Arrives, Breaks Records

Want to test for yourself? Go to SWiSdev.Oracle.com and leverage Oracle's Software in Silicon Cloud to gain access to this technology today, along with numerous FREE developer resources.
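And here is the best part for developers: you don't change a line of code to benefit. The minimal Java 8 sketch below (the plaintext string is made up) is a stock JCE encrypt/decrypt round trip; on SPARC Solaris the platform's crypto providers route the AES work to the on-chip accelerators transparently, so the very same program simply runs faster.

    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;
    import java.nio.charset.StandardCharsets;

    public class AesRoundTrip {
        public static void main(String[] args) throws Exception {
            // Generate a 128-bit AES key (128 avoids the legacy JCE policy-file
            // restriction on older Java 8 releases).
            KeyGenerator kg = KeyGenerator.getInstance("AES");
            kg.init(128);
            SecretKey key = kg.generateKey();

            // Encrypt. On SPARC Solaris the hardware acceleration is applied
            // underneath this standard API; the application code is identical
            // on every platform.
            Cipher enc = Cipher.getInstance("AES/GCM/NoPadding");
            enc.init(Cipher.ENCRYPT_MODE, key);
            byte[] ct = enc.doFinal("sensitive data".getBytes(StandardCharsets.UTF_8));

            // Decrypt, reusing the IV/parameters generated on the encrypt side.
            Cipher dec = Cipher.getInstance("AES/GCM/NoPadding");
            dec.init(Cipher.DECRYPT_MODE, key, enc.getParameters());
            System.out.println(new String(dec.doFinal(ct), StandardCharsets.UTF_8));
        }
    }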


SPARC M7 DAX – Stream Processing for Big Data

"And now you know the rest of the story." – Paul Harvey

Paul Harvey was an American radio broadcaster who ended each of his ABC radio broadcasts with the tag line, "And now you know the rest of the story." His idiosyncratic stories always seemed to dig up the details that nobody had heard. In a similar fashion, the overwhelming growth of big data means that enterprises must be able to analyze the rest of the information that truly counts, and that involves running sophisticated search and machine learning algorithms over large data sets to find the important details that matter.

Oracle is re-thinking this challenge with an innovative approach to processor design.

In March 2016, Oracle announced a free and open API and developer kit for the Data Analytics Accelerator (DAX) in its SPARC processors through its Software in Silicon Developer Program. The SPARC M7 DAX is a unique innovation that accelerates a broad base of industry-leading analytic applications to help solve big data challenges.

DAX accelerators were specifically designed to accelerate analytical queries, and were initially supported in the Oracle Database 12c In-Memory option. The open API for DAX expands the existing program so application developers can leverage DAX technology to accelerate a broad spectrum of software applications, including big data analytics, machine learning, and more. Apache Spark's in-memory framework is an ideal showcase for this kind of acceleration (more detail: Apache Spark and DAX).

Enter stream processing, which can be a game changer in a big data world. This technical article describes stream processing using the DAX APIs in detail: Introduction to Stream Processing Using the DAX APIs.

The DAX APIs allow you to use stream-processing techniques to analyze and act on real-time streaming data held in memory, by taking advantage of the DAX hardware acceleration on the SPARC microprocessor. Stream-processing techniques allow efficient use of system resources by structuring memory operations as regular patterns that can be accelerated by the DAX co-processors. The DAX co-processors directly execute many of the DAX API operations, manipulating in-memory data streams and freeing the SPARC CPU for other tasks. The DAX APIs will continue to evolve over time as more applications are developed and more analytic functions are added.

Developers interested in accelerating analytics with the open API for DAX can register for access to Oracle's Software in Silicon Cloud at http://swisdev.oracle.com/. This 5-minute video demonstrates how DAX works: Oracle's Data Analytics Accelerators (DAX).

For more information, or to join a discussion on this topic, visit our Software in Silicon Community page on the Oracle Technology Network: Develop the Next Generation of Analytics and Security.

And now you DO know the rest of the story!
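The native DAX APIs come with the developer kit mentioned above; as a language-neutral illustration of the access pattern they accelerate, here is a plain Java 8 sketch (synthetic data and a made-up selection range) of a scan-and-filter over an in-memory column. The point is the shape of the work: one predictable pass over a vector with a simple predicate, which is exactly the kind of operation the DAX co-processors can execute in hardware instead of on the CPU cores.

    import java.util.Arrays;
    import java.util.stream.IntStream;

    public class ScanFilterSketch {
        public static void main(String[] args) {
            // A synthetic in-memory "column" of a million values (made-up data).
            int[] column = IntStream.range(0, 1_000_000).map(i -> i % 1000).toArray();

            // Scan-and-filter: one regular pass over the vector with a simple
            // predicate. DAX offloads this pattern to hardware; here plain
            // Java performs it on the CPU for illustration.
            long selected = Arrays.stream(column)
                    .filter(v -> v >= 100 && v < 200)
                    .count();

            System.out.println("rows selected: " + selected); // 100,000
        }
    }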


SPARC T7 - Minutes to Seconds!

We are not kidding. SPARC M7/T7 results are now coming in from our partners, and they are truly worth noting. Here is one more example to add to the list.

Hangzhou New Century Electronic Technology Co. Ltd. (NCI), located in Hangzhou, China, is an Oracle partner specializing in systems integration, hardware and software distribution, and technical support services for the tobacco industry. They develop and distribute an integrated management and control application called "The Tobacco Business Operation & Maintenance Platform v4.0", a widely recognized solution based on Oracle's Java EE architecture. This platform provides the tobacco industry with a unified platform for operations, management, and safety, and includes cloud compute & big data analysis functionality.

As the story has been going with the SPARC T7, NCI heard about the highly acclaimed analyst reviews, the 20 new world record benchmarks, and the numerous customer quotes (B&H Photo Video: "run 83x faster on SPARC T7"), and decided to try out the powerful T7-2 server on the Tobacco Business Operations & Maintenance Platform. Their objective was to run tests comparing the performance of Oracle Database 12c with the IBM DB2 database, both running on Oracle SPARC T7-2 servers.

Hardware Components: The configuration included two Oracle SPARC T7-2 servers with 512GB of memory to deploy Oracle Database 12c RAC, and two Oracle SPARC T7-2 servers with 512GB of memory for deployment of an Oracle WebLogic 12c cluster. A single Oracle FS1 Flash Storage System was used to deploy Oracle RAC ASM, and for operation and maintenance of data storage.

Software Components: The configuration included Oracle Solaris 11 as the operating system; Oracle Database version 12.1.0.2.0; Oracle WebLogic version 12.1.1 for the middleware; and IBM DB2 database version 10.5.0.3.

Test Results: 8 minutes to 7 seconds! Running the same series of SQL statements on the IBM DB2 database took just over 8 minutes, while the exact same run on the same hardware took just under 7 seconds with Oracle Database 12c. The tests were run with the Oracle Database In-Memory option enabled and utilized the SPARC T7 Data Analytics Accelerator (DAX) co-processors. These results were compared to the exact same hardware configuration running the IBM DB2 database. The test database file size in both cases was 256MB. This was a test to see if the Oracle on Oracle solution could demonstrate a significant advantage.

Are you kidding me!? NCI identified two primary factors responsible for the dramatic advantage of the Oracle on Oracle solution:

1) Data Analytics Accelerators (DAX) are individual co-processors on the SPARC T7 chip which perform query-related operations directly in hardware, thereby improving Oracle Database performance significantly. NCI was able to utilize DAX hardware acceleration for the Oracle Database 12c in-memory database operations in this test, relieving the SPARC T7 cores of the analytics workload.

2) Oracle Database In-Memory is a new option with Oracle Database 12c which works with DAX on the SPARC T7 chip to transparently accelerate analytic queries by orders of magnitude. Oracle Database In-Memory optimizes both analytics and online transaction processing (OLTP) workloads, delivering outstanding performance for transaction processing while simultaneously supporting real-time analytics for business intelligence.

A couple of notes on this test:
1) The SPARC M7 DAX functionality is used by default when the Oracle Database In-Memory Query Accelerator is enabled. No additional administration is required to turn this functionality on.
2) The database file size of 256MB is very small and not representative of most use cases. A larger database would yield even better performance results.
3) Oracle offers a tool called the In-Memory Advisor, which could be used here to assess the workload and recommend how to optimize Oracle Database performance. See the Oracle Database In-Memory Advisor White Paper for details.

More Information: For more information, details, and system sizing help, you can contact the team via isvsupport_ww@oracle.com
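For partners who want to reproduce this style of comparison, here is a minimal JDBC timing sketch. The connection details, table, and query are hypothetical, and the Oracle JDBC driver must be on the classpath; because Database In-Memory and DAX are transparent to the application, the same harness can time either database unchanged, given the appropriate driver and URL.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class QueryTimer {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection details -- substitute your own host,
            // service name, credentials, and query.
            String url = "jdbc:oracle:thin:@//dbhost:1521/pdb1";
            try (Connection con = DriverManager.getConnection(url, "user", "password");
                 Statement st = con.createStatement()) {
                long start = System.currentTimeMillis();
                try (ResultSet rs = st.executeQuery(
                        "SELECT COUNT(*) FROM sales WHERE amount > 100")) {
                    rs.next();
                    System.out.println("result: " + rs.getLong(1));
                }
                // With the In-Memory column store populated, DAX acceleration
                // engages transparently -- neither the SQL nor the harness changes.
                System.out.println("elapsed ms: " + (System.currentTimeMillis() - start));
            }
        }
    }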


Webinar: Why Software In Silicon?

Oracle Academy is offering an exciting opportunity to learn about the design principles behind the revolutionary Software in Silicon technology at the core of Oracle's SPARC M7 microprocessor design. In this LIVE panel discussion with senior executives from the software and hardware sides of Oracle's systems business, you will gain insights into the strategy and thinking behind Oracle's dramatic re-design of computer chip architecture, and be able to ask questions to be addressed live by the panel members.

Course Title: What Happens When Software Moves Into Silicon (Webinar Replay)
Target Audience: University students, Oracle customers & partners
Event Date: November 11, 2015
Event Time: 9:00-10:00am (Pacific Time)

Putting software onto the computer chip (Software in Silicon) is a revolutionary new technology which will fundamentally change the way computer systems are built in the future. For over five years now, Oracle has been investing in improving Oracle software applications by embedding software technology into the microprocessor -- for better performance, lower cost, higher system availability, and most important of all, securing the cloud from intruders. Oracle processor engineers have worked closely with their software engineers, in particular the database experts, in hard-wiring capabilities specific to application processes and performance onto the processor -- hence the "Software in Silicon" moniker. This webinar will explore what happens when software moves into silicon, and will provide practical examples of the innovative features this new technology brings to the market.

Here are a couple of pre-webinar viewings to whet your appetite:
- Forbes: The Breakthrough Technology That Will Turbocharge Big Data And Cloud Computing
- Forbes: 10 Reasons Software On Silicon Redefines Enterprise Computing
- Youtube (start at 0:00): M7 - Breakthrough Processor and System Design with SPARC M7


OpenWorld 2015 Recap - Secure Cloud

Even if you attended all 5 days of Oracle OpenWorld (OOW) 2015 this past week, there was WAY more content and goings-on over the span of those 5 days than one could possibly hope to keep track of. But if I had to sum it all up in two words for our partners, it would be "Secure Cloud".

"We are in the middle of a generational shift in computing that is no less important than our shift to personal computing..." Larry Ellison, Oracle OpenWorld Keynote

Looking like he was nearing his 40th birthday, a vibrant Larry Ellison kicked things off on Sunday night with a theme around Cloud, which pretty much set the tone for the week to come, and followed with more than a dozen announcements around Oracle Cloud services and capabilities. The cloud drumbeat continued throughout the week in pretty much all of the executive keynote presentations. Here are just a couple of quotes I captured in the few I was able to catch:
- "cloud is the biggest change happening to infrastructure in 25 or so years..." Dave Donatelli
- "having a public cloud is a requirement for future success..." Dave Donatelli
- "we now have virtually 100% of our portfolio rewritten, rebuilt, and modernized for the cloud…" Mark Hurd

Mark Hurd's Monday morning keynote (Vision 2025: Digital Transformation in the Cloud) included a series of IT predictions for 2025:
- 80 percent of all production apps will be in the cloud. (Today, Hurd noted, it's about 25 percent.)
- 100 percent of dev tests will be in the cloud.
- Virtually all enterprise data will be in the cloud.
- Enterprise clouds will be the most secure IT environments.

So, let's pause here for a minute and look at just how dramatically this is changing business today. I saw a presentation recently at an industry conference which underscored this perspective. 4 simple examples -- think about this:
- the largest car rental company in the world has no cars (Uber)
- the largest retailer in the world has no stores (Amazon)
- the largest hotel chain in the world has no hotels (Airbnb)
- the world's most popular media owner creates no content (Facebook)

Clearly a significant power shift is taking place, and disruptors like Airbnb and Uber are leveraging IT to achieve incredible market caps and growth multiples previously unheard of! This begs the question: who is going to be around to supply the infrastructure for these clouds, and how will they do it!?

Dave Donatelli even played into this point by highlighting key events in the IT industry that are happening because of this shift to the cloud:
- IBM sells its x86 server business to Lenovo for $2.3B and pays GlobalFoundries $1.5B to take its chip unit
- Dell, Riverbed, Veritas -- all go private
- HP splits in two in a sign of changing times for hardware giants
- Dell buys EMC for $67B (the largest tech deal EVER)

Larry Ellison's 2nd keynote on Tuesday afternoon delivered a compelling argument that Oracle is uniquely positioned to steal this show by focusing on the cloud and shifting the focus of infrastructure to security. "The biggest concern we have as an industry is security," said Ellison, as he unveiled the new SPARC M7 software-in-silicon offering. I would recommend watching the first 30 minutes of this keynote to understand the importance of this theme around secure infrastructure: The Secure Cloud by Larry Ellison.

Wednesday morning then featured 2 keynote presentations which dug into the meat of this infrastructure strategy by Oracle. John Fowler (EVP for Systems) and Juan Loaiza (SVP for Oracle Systems Technology) reviewed this security strategy around Oracle's all-new family of SPARC systems built on the revolutionary 32-core, 256-thread SPARC M7 microprocessor with "Always On" Security in Silicon. Don't miss the replay of this presentation (below), which provides a detailed overview of how the new SPARC M7 processor-based systems, including the Oracle SuperCluster M7 engineered system, SPARC T7 and M7 servers, and Oracle Solaris 11.3, are designed to seamlessly integrate with existing infrastructure and include Always On security for the cloud. Here is a ~1 minute segment of John Fowler's explanation of the Heartbleed and Venom threats and how SPARC M7's Silicon Secured Memory protection can prevent them.
- The New Era of Secure Computing and Convergence with Oracle Systems by John Fowler and Juan Loaiza

Oracle has plenty of customer and partner testimonies on these new systems, 20 new world record benchmarks, and videos from partners and customers which demonstrate why these new servers are such a very big deal for the infrastructure needs of the secure cloud (summary blog). But returning to Ellison's comments on Tuesday afternoon, the powerful message is around the dramatic innovations Oracle has made with software in silicon on the SPARC M7 processor, especially around the topic of security. At Oracle, we think the strategy of pushing software features into silicon is how to get ahead of the bad guys, and new features like Silicon Secured Memory (always-on memory protection in hardware) are something our customers see as a clear differentiator for infrastructure in the cloud. Security has become the #1 concern of our customers.

"We think that going forward, customers are going to be more interested in how a system and application runs encrypted than in how it runs in the clear, because that's how you are going to be running your datacenter." John Fowler, EVP, Systems

Why not TRY OUT M7 and see for yourself? Oracle now offers partners, customers, and university researchers access to a Software in Silicon Cloud, which provides developers a secure and ready-to-run virtual machine environment to install, test, and improve their code on a SPARC M7/T7 system running Oracle Solaris. Try it -- we think you'll like it!


Virtualization Monitoring in Solaris Zones

Much has been written about the built-in virtualization capabilities of Oracle Solaris Zones, which allow users to isolate software applications and services using flexible, software-defined boundaries in Solaris. Unlike hypervisor-based virtualization, Oracle Solaris Zones technology provides a very low-overhead, low-latency environment which gives the appearance of multiple OS instances rather than multiple physical machines. This makes it possible to create hundreds (or even thousands) of zones on a single system, without the performance penalty inherent in hypervisor-based virtualization technologies.

As these Solaris Zones proliferate, it often becomes necessary to restrict and monitor the resource utilization of the virtual machines. A NEW white paper has now been published (Virtualization Monitoring in Solaris Zones) that covers exactly that! This technical white paper describes the options for restricting resource usage of Oracle Solaris Zones, and for monitoring the different zone types from within the global zone. The technical content of the white paper is divided into 5 sections:

1) Resource Controlling of Oracle Solaris Zones
2) Monitoring Default Zones in Oracle Solaris 11
3) Monitoring Oracle Solaris 10 Zones
4) Monitoring Oracle Solaris Kernel Zones (technical article)
5) Using OpenStack for Monitoring Oracle Solaris Zones


HELP!! Solaris Technical Support for Developers

Help!",  the 1965 song by the Beatles (click on the album) that served as the title for both the 1965 film and its soundtrack album was reportedly written by John Lennon, and was a direct reflection of the Beatles' astronomical rise to success in the mid-1960's.  "The whole Beatles thing was just beyond comprehension. I was subconsciously crying out for help", stated John Lennon in a 1980 interview. Help, I need somebodyHelp, not just anybodyHelp, you know I need someone, HELP!! Well, we'd better not go too far down that path to 1965...  but, DID YOU KNOW that our Solaris developers get free technical HELP under the Oracle PartnerNetwork (Gold level members and above) for any questions regarding development of their applications on Oracle Solaris 11!?  Yes, great news.  But most importantly, to coin a line from our theme song, its not just ANY body who is providing that help.  Lets be clear that this is NOT a call center in the middle east.  At Oracle we have a team of senior ISV Engineers who respond directly to these questions from our OPN partners.  These engineers are trained professionals who support our ISV's on a daily basis for technical validation and support of the their application on Oracle Solaris.  In other words, this is their job.  And in doing this, they are involved in all kinds of technical support topics with Solaris developers and are extremely knowledgeable in the intricacies of making sure that the ISV application runs BEST on Oracle Solaris.  Here are just a couple examples of the types of support they are trained to provide: Checking application compatibility with Oracle Solaris Certifying application with Oracle Solaris Optimizing application performance Ensuring application stability & reliability Determining the Oracle Solaris readiness of your application Helping you install Oracle VM Templates for Oracle Solaris Utilizing Dynamic Tracing Facility (DTrace) to understand better how your system is operating Providing migration assistance to Oracle Solaris from Red Hat Linux, IBM AIX, HPUX and others Helping you get access to Oracle Solaris systems in a secure cloud-based environment Want to try it out?  Its as simple as sending one email to this alias:  ISVSupport_WW@oracle.com As long as you are an OPN member (Gold level and above), we guarantee you no longer than a 48 hour turnaround.   Check it out right off our OPN page for Solaris Developers: Solaris Adoption Technical Assistance Lennon wrote the lyrics of the song to express his stress after the Beatles' quick rise to success. "I was fat and depressed and I was crying out for 'Help," said Lennon. No need for our Solaris Developers to get fat and depressed -- get HELP now -- and you'll see that Oracle's ISV Engineering team is definitely not "just ANY body"! About the photograph:  I ran across the "Help" album in the very cool Downtown Local coffee shop in Pescadero, California and photographed it on my trusty iPhone.

Help!",  the 1965 song by the Beatles (click on the album) that served as the title for both the 1965 film and its soundtrack album was reportedly written by John Lennon, and was a direct reflection...

sun

3 Reasons To Test Drive our Cloud

Oracle's Software in Silicon Cloud is available NOW, free of charge, to our customers and partners for a test drive.

This special offering provides our ISV partners and enterprise developers early access to the revolutionary Software in Silicon technology in the forthcoming Oracle SPARC M7 processor running Oracle Solaris 11, with a robust and secure cloud platform and ready-to-run virtual machine environments. And we have 3 partners who have great things to say about using it TODAY!

JomaSoft is utilizing Oracle's Software in Silicon Cloud to evaluate memory leaks and code security in their cloud management platform, the Virtual Data Center Control Framework (VDCF). "Leveraging the Software in Silicon Cloud literally saved JomaSoft weeks in comparison to our normal beta test cycle process in-house." —Marcel Hofstetter, CEO

Capitek is also leveraging this cloud offering to evaluate Oracle's latest Software in Silicon technology in the SPARC M7 processor. "We were able to create our own customized application test and verification environment in just minutes, while eliminating the need to acquire and deploy our own server resources for similar tests." —Jerry Chen, Senior Manager, Telecom Software Product Department

AsiaInfo is testing their AsiaInfo Internet Short Message Gateway (AIISMG) solution on Oracle's Software in Silicon Cloud to improve overall software reliability and security. "The Software in Silicon Cloud is a very economic and efficient method for AsiaInfo to validate the upcoming hardware and software from Oracle on our AIISMG solution." —Fu Tingsheng, Director of Engineering

Learn how you can break the code to take advantage of this revolutionary new technology in software development. Get an edge on your competition — Sign up NOW!


Oracle Solaris Development Initiative - FREE Beer

ISV developers take note: this is even better than FREE beer!

The majority of Oracle's top enterprise customers run their most mission-critical applications on Oracle Solaris today. Solaris is the vehicle by which Oracle customers leverage the power of the SPARC microprocessor -- delivering world-record performance, maximum scalability, and continuous reliable service. There are 113 examples of this on the Oracle customer & partner success portal. Take a look!

But the question ISV developers are asking is how they can gain access to Oracle Solaris for development purposes -- where they need access to multiple development systems with Solaris licenses for testing and validation work in a pre-production environment. They especially want access to the latest release of Oracle Solaris, where they can exploit new features such as OpenStack functionality, build a simple and secure cloud, or deliver unique innovation for Oracle database or middleware applications. And, as we all know about developers, they want it for FREE -- as in FREE beer.

Ok, we get it, and the Oracle Solaris Development Initiative (SDI) provides just that — and more. And yes — it is FREE to ISV developers who are Oracle PartnerNetwork (OPN) members, as long as they are Gold members (or above). Here is just a sampling of what you get:
- Development and demonstration licenses for Oracle Solaris, Oracle Solaris Cluster, and Oracle Solaris Studio
- Access to software patches, updates, and new releases for Oracle Solaris products
- My Oracle Support (MOS) access for performance & feature enhancements, security patches, & bug fixes
- Access to all MOS self-help technical resources such as security alerts, product docs, & support services
- Developer support via email for technical questions around Oracle Solaris products, including Oracle technologies integrating with Solaris
- Access to a Solaris development cloud with preconfigured Oracle Solaris developer zones
- And more...

All for FREE -- are you kidding me?! No. Provided you are an OPN Gold member (or above), Oracle is quite serious about this. And if you are not yet an OPN member — this program alone should be reason to join.

We may not be offering free beer [yet], but the SDI program will move you further down the path to success with the most complete and proven enterprise-class operating system in the world. For FREE.


Stopping Security Breaches with a Revolution in Chip Design

Perhaps the most significant value to customers of Oracle's recently announced SPARC M7 chip design is the enhanced security it provides. You only have to browse the daily news headlines to understand just how important security is to Oracle's enterprise customers today. Names like Target, Home Depot, Wal-Mart, JP Morgan Chase, Apple, and many more immediately come to mind. Top-level executives are losing their jobs over this. To quote a March 6, 2015 Fortune Magazine article on "5 Huge Cyber-security Breaches": "Hackers have been slipping through corporate computer defenses like they're Swiss cheese."

At the Hot Chips conference in August 2014, Oracle unveiled the next-generation SPARC M7 processor, a revolutionary change in its microprocessor design, highlighting an architecture advancement called "Software in Silicon." SPARC engineers collaborated with Oracle's software engineers to hardwire specific software techniques directly onto the SPARC M7 chip. And don't think this happened overnight -- this has been ongoing work between hardware and software teams for a good portion of the 5 years since Oracle purchased Sun Microsystems, along with the rights to the SPARC microprocessor.

In his keynote speech at Oracle OpenWorld 2014, Larry Ellison referred to the M7's security feature as "the most important piece of engineering we've done in security in a very, very long time."

So back to the Swiss cheese -- one very important security innovation inherent in this new SPARC M7 microprocessor design is "application data integrity," or ADI. ADI makes sure that a memory area is accessed only for the purpose for which it was allocated. Memory allocation issues are often the source of cyber-security breaches. ADI can prevent any read or write of data beyond the bounds of an allocation. And what is revolutionary is that it does this in hardware – actually in the silicon of the forthcoming SPARC M7 processor.

But that is just the tip of the iceberg. ADI does a lot more to stop malicious attacks on valuable corporate data. For example, stopping a security bug like Heartbleed, a severe memory-handling vulnerability in the OpenSSL library. Heartbleed can trick the server into sending more memory than a given user is authorized to access, potentially exposing user names, passwords, and security key information that should be protected. When the ADI feature is enabled, it can protect against the Heartbleed bug by detecting an invalid memory access on the server. Exactly how this works is clearly demonstrated in this short demo of the feature in action. Check it out -- it's pretty cool!

If you are a developer and you want to test this stuff out, Oracle has announced a new Software in Silicon Cloud where you can do exactly that! This cloud is a secure environment with ready-to-run virtual machine environments. In addition, it includes Oracle Solaris Studio 12.4, which provides a tool set that detects numerous types of memory corruption and can aid developers in quickly improving code reliability. In fact, an upcoming Studio 12.4 update uses the Software in Silicon ADI feature to let the code analyzer work at near-hardware speeds, allowing developers to quickly find and fix memory errors with minimal overhead. Check out Raj Prakash's blog [Move Over Purify and Valgrind, There is a New Kid in Town] for some staggering numbers on how it compares to other memory access checkers.

Here are some further links to check out -- note that we have a live webinar on March 18th:
- LIVE Webinar on Software in Silicon Cloud (March 18th @ 11:00am PDT)
- Software in Silicon Cloud for Developers (Video) - Security Features
- Youtube videos on Software in Silicon


Oracle Solaris 11 and Pluribus - application-awareness in the cloud

Talk to any executive of any leading enterprise company and they will tell you that they are embracing cloud computing for the future of their business. And with the rapid growth of turnkey cloud computing solutions, the network fabric carrying the application packets is emerging as a key lever for greater performance, reliability, and efficiency of the application. Networks need to be able to prioritize packet flow according to the importance of the application. So if an application is of high importance to the business, the network fabric should know that, and provide the necessary resources to ensure high performance for that application.

Oracle Solaris 11.2 and Pluribus Networks are providing an integrated solution for doing just that. You can read about it in a recently released solution brief: Oracle Solaris 11 and Pluribus Networks - Enabling Application-Driven SDN in the Cloud.

This is an integrated solution for controlling the network in real time, combining Oracle Solaris 11.2's built-in application-driven software-defined networking (SDN) with the Pluribus Networks bare-metal network hypervisor. The solution offers fine-grained QoS services, allowing different tenants, applications, and even flows within applications to be assigned SLAs that leverage the high-end router-class traffic manager and the network processing unit on the Pluribus platform. Applications become network-aware, so they can monitor congestion, errors, and latency across the fabric and dynamically adjust their network resource requests.

In addition, there is a short (6 min) video interview on this topic with the Founder and CTO of Pluribus Networks, Sunay Tripathi: Oracle and Pluribus Ally on Software Defined Networking.

For additional details on Pluribus Networks solutions, visit www.pluribusnetworks.com
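One more note for Java shops: the Solaris socket-based API for application-driven SDN is exposed in Oracle JDK 8 through the jdk.net package (the same API called out in the SPARC S7 & Java post above). Here is a minimal sketch -- the endpoint and SLA values are hypothetical, and the option only takes effect on platforms that support it, such as Oracle Solaris 11.2 -- of an application requesting a bandwidth and priority SLA on its own connection:

    import java.net.Socket;
    import jdk.net.ExtendedSocketOptions;
    import jdk.net.SocketFlow;
    import jdk.net.Sockets;

    public class FlowSlaSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical endpoint; substitute your own host and port.
            try (Socket s = new Socket("app.example.com", 8080)) {
                SocketFlow flow = SocketFlow.create()
                        .bandwidth(100)                      // bandwidth cap (see the jdk.net javadoc for units)
                        .priority(SocketFlow.HIGH_PRIORITY); // prioritize this connection's packets
                Sockets.setOption(s, ExtendedSocketOptions.SO_FLOW_SLA, flow);
                s.getOutputStream().write("important traffic".getBytes("UTF-8"));
                System.out.println("flow status: " + flow.status());
            }
        }
    }

Checking flow.status() after setting the option tells you whether the platform honored the request.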



Webinar: Start Testing Your Application Code Today!

Oracle ISV Engineering is offering an exciting opportunity to learn about the NEW offering for developers who want early access to the revolutionary Software in Silicon technology in the forthcoming Oracle SPARC M7 processor running Oracle Solaris 11. In this LIVE webinar (8:00am PST on Jan 21), developers will learn how to use the NEW Software in Silicon Cloud to dramatically improve reliability and security, and accelerate application performance in today's market.

Course Title: Oracle's Software In Silicon Features Available for Developers Today!
Target Audience: Application Developers with experience on Oracle Solaris
Event Date: January 21, 2015
Event Time: 8-9:00am PST (Pacific Standard Time)

** Replay of this webinar available HERE **

Software in Silicon implements accelerators directly in the processor to deliver a rich feature set that enables quick development of more secure, robust, reliable, and faster databases and applications. With the Oracle Software in Silicon Cloud, developers have a secure environment in which to test and improve their software as well as exploit the unique advantages of Oracle's Software in Silicon technology. Angelo Rajadurai, Software Engineer with ISV Engineering, will explain the technical details of the Software in Silicon features and demonstrate how developers can use the Software in Silicon Cloud to test these features on their applications today.



Java 8 Parallel Streams on SPARC T5

I showed in this previous post how a parallel word search implemented with Java 7 fork/join (based on this article) scales on a T5-4 server. Java 8's new features let the code needed for the same parallelism become much simpler, more compact, and more readable. Parallel streams and lambda expressions spare us many of the helper classes and method definitions we needed to create before. Here is how to add Java 8 parallel streams support to the word search example.

Looking at the code of the above fork/join article (it can be downloaded from here), we can avoid all the recursive tasks and all the fork/join code by replacing (or adding alongside, for comparison) the WordCounter.compute() method with computeWithStreams(). We still get the same parallel processing:

// computeWithStreams() performs word counting with parallel streams
public long computeWithStreams(String searchedWord) {
    List<Document> docList = Document.getDocList();
    return docList.parallelStream()
                  .mapToLong(d -> occurrencesCount(d, searchedWord))
                  .sum();
}

The parallelStream() method inside computeWithStreams() automatically creates the thread parallelism behind the scenes. The current Java 8 implementation of parallel streams uses the fork/join mechanism, so the performance gain remains the same as using fork/join directly.

The rest of the code here is the helper code for preparing the document list and calling computeWithStreams(). First we need to add a static field and a static getter method to the Document class:

// Listing all documents statically in the Document class. Not necessarily
// a good practice, but enough for the sake of showing the main idea here.
static List<Document> docList = new ArrayList<>();
static List<Document> getDocList() { return docList; }

Then we populate this list with each of the generated documents (at the end of the Document.fromFile() method). Instead of:

return new Document(lines);

we add:

Document doc = new Document(lines);
docList.add(doc);
return doc;

Finally, we add the timing code to the main() method (just after the same single-threaded and fork/join calls):

long[] parallelStreamTimes = new long[repeatCount];
for (int i = 0; i < repeatCount; i++) {
    startTime = System.currentTimeMillis();
    counts = wordCounter.computeWithStreams(args[1]);
    stopTime = System.currentTimeMillis();
    parallelStreamTimes[i] = stopTime - startTime;
    System.out.println(counts + " , parallel streams search took " + parallelStreamTimes[i] + "ms");
}

My next post will deal with how to observe the parallelism created behind the scenes, and how to troubleshoot parallelism (scalability) problems. Meanwhile, take a look at how to run Java faster on Solaris.



CloudSigma Public Cloud built on Oracle Solaris

On September 30, 2014 CloudSigma issued a press release announcing a native Oracle Solaris-based Infrastructure-as-a-Service (IaaS) offering for the enterprise. And following on that press release, Robert Jenkins, CEO of CloudSigma, demonstrated how CloudSigma is deploying instances of Oracle Solaris in their public cloud in two separate presentations at Oracle OpenWorld 2014:
- Oracle Solaris Strategy, Engineering Insights, and Roadmap [GEN7808]
- Kernel Zones: Next-Generation Cloud Platform [CON7842]

This is a significant strategic move by CloudSigma, leveraging Oracle Solaris features to meet their business objective of building high-availability cloud servers and cloud hosting in both Europe and the US, and it demonstrates Oracle Solaris and SPARC traction and value in the public cloud space. CloudSigma is a pure-cloud IaaS provider that offers one of the most customizable cloud offerings on the market. They recognize that Oracle Solaris 11 is a complete, integrated cloud platform that is engineered for the type of large-scale enterprise cloud environments their customers are demanding. They see the benefits of implementing Oracle Solaris cloud virtualization with zero performance loss and the highest consolidation ratios for large-scale applications. As an example, in his presentation at OpenWorld, Mr. Jenkins spoke to the excitement of leveraging the Remote Administration Daemon (RAD), a standard system service that offers secure, remote administrative access to an Oracle Solaris system. RAD enabled CloudSigma to incorporate Solaris very quickly, without the burden of writing the code themselves as they had done with Linux.

"We've been excited to see the recent developments in the Oracle Solaris offering and have worked closely with Oracle to demonstrate the unique power of this platform for the enterprise," said Robert Jenkins, CloudSigma CEO. "This new service will allow customers to engage with a SPARC - Oracle Solaris environment in new ways, as well as bring the benefits of the cloud paradigm to existing Oracle Solaris based workloads."

CloudSigma is seeing significant interest in this offering and has customers using it as an ideal testing and development platform ahead of their production deployments. CloudSigma customers are also deploying Solaris for elastic workloads in a way they previously weren't able to do with private, dedicated hardware-based solutions. The release of Oracle Solaris 11.2 has transformed it into a complete cloud platform with OS, virtualization and Software Defined Networking (SDN) capabilities, as well as a full distribution of OpenStack. The demand for Oracle Solaris in public clouds is increasing significantly, and CloudSigma is just the beginning.



Power of SPARC T5 scalability unleashed

I was working with a leading asset management vendor in the financial services sector which uses Python for a considerable amount of its software: a typical three-tier architecture -- database, business logic and user interface -- with Python as the main back-end language. Performance was critical, both for latency (fast individual query response) and for total throughput (being able to service a large number of queries in parallel). This was an opportunity to validate the scalability advantage of SPARC processors, with their large number of cores and threads within a single chip.

In order to test the scalability of the SPARC processor in a Python environment, I decided to use the standard Python benchmark which is available in all the latest Python distributions. By running multiple benchmarks in parallel, I could then plot the scaling factor to see how linearly the total throughput would ramp up as more cores and threads were utilized. The specs of the tested systems are: Intel i7 running at 3GHz, SPARC T4 at 3GHz and SPARC T5-2 at 3.6GHz. We used Python 2.7.5 running the pystones script supplied with the interpreter. The following graph shows the results:

It is clear from the above that all threads are capable of running quite independently, i.e. without any noticeable effect on each other. Quite impressive, by the way, is the number of hardware threads that Oracle is able to squeeze into a single chip -- and this will only get better, looking at the SPARC M7 processor presented at the Hot Chips 2014 Symposium last month. SPARC is also an excellent platform for virtualization, a single system giving the performance of 128 single-processor systems. The cost savings of a single machine versus multiple machines are considerable if we take into account rack space, electricity and maintenance. A premier partner of ours reduced their lab by 50% after consolidating multiple racks and using virtualization instead of dedicated machines per developer.
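For the curious, the parallel measurement itself needs nothing more than a small shell driver. This is a minimal sketch, assuming Python 2.7 (where the pystone benchmark ships in the test package); the instance count is illustrative:

#!/bin/sh
# Launch N pystone instances in parallel and wait for them all;
# summing the reported pystones/second across instances gives the
# total throughput at that level of parallelism.
N=32
i=0
while [ $i -lt $N ]; do
    python -c 'from test import pystone; pystone.main()' &
    i=`expr $i + 1`
done
wait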



Why Should Your Oracle Database Run on Oracle Solaris 11?

Answer: because it runs faster, is more secure, is more scalable and highly available, and because it simplifies management. If you would like to know more, a new white paper (Oracle Solaris 11 is a state-of-the-art platform for deploying Oracle Database) has been published which details how Oracle Solaris 11 brings distinctive benefits to Oracle Database deployments, including deployments of the latest release, Oracle Database 12c. It describes Oracle Solaris 11.2 enhancements that help improve database scalability, availability, security, and manageability, and describes in detail how Oracle Solaris 11 and Oracle Database optimizations bring specific benefits for database deployments on SPARC servers. The Oracle Solaris and Oracle Database engineering teams have worked closely to improve all aspects of database deployments, from simplifying installation to accelerating performance to troubleshooting I/O and much more.

Here are a few highlights from this white paper about these optimizations, which demonstrate why Oracle Solaris 11 is the optimal platform for deploying Oracle Database:

Break-Through Performance and Proven Scalability - When deployed on SPARC servers, the Oracle Solaris platform exhibits outstanding performance and scalability for Oracle Database instances. Servers based on SPARC T5, M5, or M6 processors have achieved numerous world records when running database applications such as data warehousing, online transaction processing (OLTP), and infrastructure applications.

Accelerating Database Deployments - As part of ongoing integration work, Oracle Solaris and Oracle Database engineers collaborated to simplify database installation, with the goal of speeding deployments and improving the availability of database services.

Enhancing Service Availability - Oracle Solaris and Oracle Database 12c have been optimized for fast database startup. Optimizations include in-kernel parallel allocation of shared memory, faster spawning of background processes, and deferred database SGA allocation. Together these optimizations improve start-up time significantly.

Increasing Throughput - Oracle Solaris and SPARC servers feature built-in virtualization technologies that allow system resources to be allocated to specific workloads. In addition to efficient resource management, Oracle Solaris features the ability to identify a "critical thread" to the process scheduler, which helps to optimize transaction throughput and performance.

Enterprise-Level Security and Management - Oracle Solaris offers proven security to safeguard Oracle Database implementations, with advanced features such as system- and network-enforced security, role-based access controls, and sophisticated validation, monitoring, and auditing capabilities.

It's no surprise that Oracle Solaris continues to be the leading enterprise platform and why it is extensively deployed to host Oracle Database instances -- including deployments of Oracle Database 12c.



Mobile Tornado Accelerates Product Deployment with Solaris Zones

Mobile Tornado is a long-standing partner and hardware OEM of Oracle which provides Instant Communication services for mobile devices, with a focus on enterprise workforce management.

"We have seen significant performance improvements running Solaris 11 with SPARC on Oracle software with Mobile Tornado's IPRS™ (Internet Protocol Radio Service) platform." Shlomo Birman, VP Delivery and Services, Mobile Tornado

In our ever-changing business environment, where products are outdated quickly and on-premise integration costs are sky-rocketing, Time-to-Market (TTM) is becoming the key factor for successful technology adoption by end-customers. Mobile Tornado asked Oracle ISV Engineering to help them design and implement a new rapid provisioning environment in-house, in order to improve their TTM by reducing the time it takes to deploy their solution at the customer's premises. The main challenge for on-premise deployment is the time difference between the HQ and the field engineer. There is also the accommodation cost associated with a long business trip, and the need to do customization at the customer site without jeopardizing the environment that was previously built at the ISV site. Finally, Mobile Tornado's software stack is modular and flexible to fit customer needs, so the integration between the different software modules can sometimes be an error-prone process.

The design goals for the deployment environment were:
- Build pre-configured VM images that include all Mobile Tornado software tiers, such as the database and application tiers.
- Automate provisioning in order to build a fully-functional environment in days instead of weeks.
- Allow final customization at the customer site based on the customer's needs.
- Build images with different software stacks: all-in-one for small deployments, and one image per software module for larger deployments.

Like any new project, we started by understanding how to find the match between the Oracle technology and the ISV. After analyzing the requirements, we came to the conclusion that the best fit for this kind of environment is the Oracle Solaris Zones technology. Solaris Zones are a zero-cost, enterprise-grade virtualization technology. The key feature that Mobile Tornado was able to adopt is the Zones V2V (virtual-to-virtual) capability. Using this feature, one can install a fully-functional environment and deploy it rapidly, without the need to build and set up the software environment from scratch.

During the project, we tested various archive methods in order to find the best solution in terms of simplicity and flexibility. We came to the conclusion that creating a TAR file for each Solaris Zone, including Mobile Tornado's software, is the best archive method (a rough sketch of this flow appears at the end of this post). The benefit of this method is that we can deploy the Solaris Zones without the need for a Jumpstart install server in the environment. The zones can be updated on attach if the target operating system is newer than the system where the original image was taken. TAR is a common file format with an easy-to-use interface; in addition, you can perform a trial run before the zone is moved to the new machine, in order to verify that the target machine has the correct configuration to host the zone. The project was a huge success for Mobile Tornado in terms of TTM. Mobile Tornado can now build new environments, including the hardware and software, in days and not weeks.
The field engineer who needs to deploy the solution is now away from the main office and their family for only a few days instead of weeks (not to mention the reduced travel cost). And every cost saved in the pre-production cycle makes the solution more attractive to the end-customer and more competitive in the market.

Looking into the future, Mobile Tornado would like to test the new Unified Archive technology that was introduced in Solaris 11.2. Any archived system from within a Unified Archive can be deployed to any supported same-ISA platform. This support includes crossing virtualization boundaries, so a Unified Archive created in a SPARC T5 LDom can be deployed as a Zone, and a Zone archive can be installed on a bare-metal system. With this ability you can create an image that includes all the software components inside a single image, including the Global Zone and all the VMs (non-global zones). This can be used to move Solaris 11 systems between physical and virtual deployments (P2V) and vice versa, virtual to physical (V2P). In addition, you can use a Unified Archive to create a Cloud in a Box. Another unique feature of this technology is the ability to do a selective restore: for example, you can create an image with many VMs, and during the restore process you can selectively choose which VMs you want to deploy. All those capabilities will enable Mobile Tornado to manage their deployments over time and reduce the deployment time even further, from days to hours.

The author would like to thank Benny Tamar and Ofer Yaron from the Mobile Tornado team for their contribution to the blog.
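As a rough sketch only of the TAR-based V2V flow described earlier -- the zone name and paths are illustrative, and the exact procedure depends on the zone's configuration and storage layout:

# On the source system: export the zone configuration, then archive
# the halted, detached zone's files as a TAR image.
zonecfg -z appzone export -f /var/tmp/appzone.cfg
zoneadm -z appzone halt
zoneadm -z appzone detach
cd /zones/appzone && tar cf /var/tmp/appzone.tar .

# On the target system: recreate the configuration, unpack the files,
# then attach with -u to update the zone to the (newer) target OS level.
zonecfg -z appzone -f /var/tmp/appzone.cfg
mkdir -p /zones/appzone && cd /zones/appzone && tar xf /var/tmp/appzone.tar
zoneadm -z appzone attach -u
zoneadm -z appzone boot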



Configuring automatically multiple NICs at installation time with Solaris 11.2

Solaris 11.2 comes with several new and major features such as Unified Archives, OpenStack integration and kernel zones, to name a few. Alongside those major features, many smaller but nonetheless useful enhancements are also introduced. One particular enhancement that has made my life easier is the possibility to automatically configure multiple NICs (network interface cards) at installation time.

After installing Solaris 11, on bare metal or in a Zone, the system needs to be configured at first boot. By default this process is manual: a system configuration wizard walks the user through different steps in order to gather such system parameters as the host name, IP address, netmask, time zone, root password and so on. Of course there exists a mechanism to make this process hands-off and fully part of the installation process. In order to do so, a system configuration profile XML file needs to be provided at installation time, containing all the information needed to configure the system automatically. However, so far Solaris 11 only allowed the automatic configuration of a single NIC. On systems with several NICs, it was still necessary to configure all the remaining network interfaces, either manually or by setting up a first-boot script mechanism to perform this extra task. In Solaris 11.2 this extra step becomes history: it is now possible to specify multiple NICs in the system configuration profile XML file in order to get all NICs configured automatically at installation time.

So how does it work? The best way to demonstrate this is to go through an example by installing a Solaris 11 Zone with two NICs and configuring them automatically. The first step is to create a simple Solaris 11 Zone with two NICs, following Listing 1:

# zonecfg -z zone1
zonecfg:zone1> create -b
zonecfg:zone1> set zonepath=/zones/zone1
zonecfg:zone1> add anet
zonecfg:zone1:anet> set linkname=net0
zonecfg:zone1:anet> set lower-link=auto
zonecfg:zone1:anet> end
zonecfg:zone1> add anet
zonecfg:zone1:anet> set linkname=net1
zonecfg:zone1:anet> set lower-link=auto
zonecfg:zone1:anet> end
zonecfg:zone1> verify
zonecfg:zone1> commit
zonecfg:zone1> exit

Listing 1

At this point the Zone is configured but not yet installed, as confirmed by the following command:

# zoneadm list -civ
ID NAME             STATUS      PATH                         BRAND      IP
 0 global           running     /                            solaris    shared
 - zone1            configured  /zones/zone1                 solaris    excl

Before actually installing the Zone "zone1", a system configuration XML file should be created in order to configure the Zone automatically. As an example, Listing 2 provides a system configuration file called sc_profile.xml used to configure the Zone's two NICs: net0 and net1 will be assigned the IP addresses 10.0.2.101/24 and 192.168.1.34/24 respectively.
(In the listing below, the part of the file concerning the network configuration is the network/install service.)

<?xml version='1.0' encoding='UTF-8'?>
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<service_bundle type="profile" name="sysconfig">
  <service version="1" type="service" name="system/identity">
    <instance enabled="true" name="node">
      <property_group type="application" name="config">
        <propval type="astring" name="nodename" value="zone1"/>
      </property_group>
    </instance>
  </service>
  <service version="1" type="service" name="network/install">
    <instance enabled="true" name="default">
      <property_group type="ipv4_interface" name="ipv4_interface_0">
        <propval type="net_address_v4" name="static_address" value="10.0.2.101/24"/>
        <propval type="astring" name="name" value="net0/v4"/>
        <propval type="astring" name="address_type" value="static"/>
        <propval type="net_address_v4" name="default_route" value="10.0.2.2"/>
      </property_group>
      <property_group type="ipv4_interface" name="ipv4_interface_1">
        <propval type="net_address_v4" name="static_address" value="192.168.1.34/24"/>
        <propval type="astring" name="name" value="net1/v4"/>
        <propval type="astring" name="address_type" value="static"/>
        <propval type="net_address_v4" name="default_route" value="192.168.1.1"/>
      </property_group>
    </instance>
  </service>
  <service version="1" type="service" name="network/physical">
    <instance enabled="true" name="default">
      <property_group type="application" name="netcfg">
        <propval type="astring" name="active_ncp" value="DefaultFixed"/>
      </property_group>
    </instance>
  </service>
  <service version="1" type="service" name="system/name-service/switch">
    <property_group type="application" name="config">
      <propval type="astring" name="default" value="files"/>
    </property_group>
    <instance enabled="true" name="default"/>
  </service>
  <service version="1" type="service" name="system/name-service/cache">
    <instance enabled="true" name="default"/>
  </service>
  <service version="1" type="service" name="system/environment">
    <instance enabled="true" name="init">
      <property_group type="application" name="environment">
        <propval type="astring" name="LANG" value="en_US.UTF-8"/>
      </property_group>
    </instance>
  </service>
</service_bundle>

Listing 2

For those already familiar with the system configuration files of Solaris 11 prior to Solaris 11.2, the only differences are the property group name, which is now user-defined in order to allow multiple entries, and the property group type, which changes from "application" to "ipv4_interface". The older syntax is still valid as long as a single NIC is to be configured at installation time, so existing system configuration files remain usable.
Now that all the pieces are ready, let's proceed and actually install the Zone "zone1", explicitly providing the system configuration file shown in Listing 2:

# zoneadm -z zone1 install -c /root/sc_profile.xml

Boot the Zone:

# zoneadm -z zone1 boot

Check that the zone is running:

# zoneadm list -civ
ID NAME             STATUS      PATH                         BRAND      IP
 0 global           running     /                            solaris    shared
 3 zone1            running     /zones/zone1                 solaris    excl

Finally, log into the Zone and verify the IP interfaces and addresses as well as the routing table:

# zlogin zone1
[Connected to zone 'zone1' pts/3]
Oracle Corporation    SunOS 5.11    11.2    June 2014
root@zone1:~# ipadm show-addr
ADDROBJ           TYPE     STATE        ADDR
lo0/v4            static   ok           127.0.0.1/8
net0/v4           static   ok           10.0.2.101/24
net1/v4           static   ok           192.168.1.34/24
lo0/v6            static   ok           ::1/128
root@zone1:~# netstat -rn
Routing Table: IPv4
  Destination           Gateway           Flags  Ref     Use     Interface
-------------------- -------------------- ----- ----- ---------- ---------
default              192.168.1.1          UG        1          0
default              10.0.2.2             UG        1          0
10.0.2.0             10.0.2.101           U         2          0 net0
127.0.0.1            127.0.0.1            UH        2          0 lo0
192.168.1.0          192.168.1.34         U         2          0 net1

As expected, both NICs have been correctly configured and the default routes for each network added to the routing table.

EDIT: As pointed out by a reader, you will find more details in the documentation here: http://docs.oracle.com/cd/E36784_01/html/E36800/gklew.html#scrolltoc



SPARC Solaris Momentum

Following up on the Oracle Solaris 11.2 launch on April 29th: if you were able to watch the launch event, you saw Mark Hurd state that Oracle will be No. 1 in high-end computing systems "in a reasonable time frame". "This is not a 3-year vision," he continued. Well, according to IDC's latest 1QCY14 Tracker, Oracle has regained the #1 UNIX shipments market share! Suffice to say that SPARC Solaris is making strong gains on the competition. If you have seen the public roadmap through 2019 of Oracle's commitment to continue to deliver on this technology, you can see that Mark Hurd's comment was not to be taken lightly. We feel the systems tide turning in Oracle's direction and are working hard to show our partner community the value of being a part of the SPARC Solaris momentum.

We are now planning for the Solaris 11.2 GA in late summer (the 11.2 beta is available now), as well as doing early preparations for Oracle OpenWorld 2014 on September 28th. Stay tuned there!

Here is a sampling of the coverage highlights around the Oracle Solaris 11.2 launch:
"Solaris is still one of the most advanced platforms in the enterprise." - ITBusinessEdge
"Oracle is serious about clouds now, just as its customers are, whether they are building them in their own datacenters or planning to use public clouds." - EnterpriseTech
"Solaris is more about a layer of an integrated system than an operating system." - ZDNet



Security Access Control With Solaris Virtualization

Numerous Solaris customers consolidate multiple applications or servers on a single platform. The resulting configuration consists of many environments hosted on a single infrastructure, and security constraints sometimes exist between these environments. Recently, a customer consolidated many virtual machines belonging to both their Intranet and Extranet on a pair of SPARC Solaris servers interconnected through InfiniBand. Virtual machines were mapped to Solaris Zones, and one security constraint was to prevent SSH connections between the Intranet and the Extranet. This case study gives us the opportunity to understand how the Oracle Solaris network virtualization technology -- a.k.a. Project Crossbow -- can be used to control outbound traffic from Solaris Zones.

Solaris Zones from both the Intranet and Extranet use an InfiniBand network to access a ZFS Storage Appliance that exports NFS shares. Solaris global zones on both SPARC servers mount iSCSI LUs exported by the Storage Appliance. Non-global zones are installed on these iSCSI LUs. With no security hardening, if an Extranet zone gets compromised, the attacker could try to use the Storage Appliance as a gateway to the Intranet zones or, even worse, to the global zones, as all the zones are reachable from this node.

One solution consists in using Solaris network virtualization to stop outbound SSH traffic from the Solaris Zones. The virtualized network stack provides per-network-link flows. A flow classifies network traffic on a specific link. As an example, on the network link used by a Solaris Zone to connect to the InfiniBand fabric, a flow can be created for TCP traffic on port 22 -- in effect, a flow for SSH traffic. A bandwidth limit can be specified for that flow and, if set to zero, the traffic is blocked. Last but not least, flows are created from the global zone, which means that even with root privileges in a Solaris Zone an attacker cannot disable or delete a flow. With the flow approach, the outbound traffic of a Solaris Zone is controlled from outside the zone. Schema 1 describes the new network setting once the security has been put in place.

Here are the instructions to create a Crossbow flow as used in Schema 1:

(GZ)# zoneadm -z zonename halt

...halts the Solaris Zone.

(GZ)# flowadm add-flow -l iblink -a transport=TCP,remote_port=22 -p maxbw=0 sshFilter

...creates a flow on the IB partition "iblink" used by the zone to connect to the InfiniBand fabric. This IB partition can be identified by intersecting the output of the commands 'zonecfg -z zonename info net' and 'dladm show-part'. The flow is created on port 22, for TCP traffic, with a zero maximum bandwidth. The name given to the flow is "sshFilter".

(GZ)# zoneadm -z zonename boot

...restarts the Solaris Zone now that the flow is in place.

Solaris Zones and Solaris network virtualization enable SSH access control on InfiniBand (and on Ethernet) without the extra cost of a firewall. With this approach, no change is required on the InfiniBand switch. All the security enforcements are put in place at the Solaris level, minimizing the impact on the overall infrastructure. The Crossbow flows come in addition to many other security controls available with Oracle Solaris, such as IP Filter and Role-Based Access Control, that can be used to tackle security challenges.
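To double-check from the global zone that the filter is in place and doing its job, something like the following should work (a sketch; the link and flow names match the example above):

(GZ)# flowadm show-flow -l iblink
(GZ)# flowadm show-flowprop sshFilter    # maxbw should show as 0
(GZ)# flowstat -l iblink                 # per-flow traffic counters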



Parallel Java with Fork/Join on SPARC CMT

Java 7 fork and join allows an easy way to perform dividable work by executing parallel tasks on a single computing machine. This article introduced a fork/join example of counting occurrences of a word in all files/directories under a root directory. I thought to check how these forked threads scale on a T5-4 server. The Oracle T5-4 server has 4 processors, each with 16 cores. CMT technology allows 8 thread contexts per core (each core includes two out-of-order integer pipelines, one floating-point unit, and level 1 and 2 caches; full specs here).

It took 1131.29 seconds for a single thread to process a root directory with 1024 files, 1.25MB each. Doing the same work with Java fork/join (available from Java 7), increasing the "parallelism level" -- using the Java fork/join pool terminology -- up to 2048, took 7.74 seconds! Clearly it is worth setting the ForkJoinPool parallelism level manually to higher than the number of virtual CPUs. This can easily be done using the non-default constructor. In this example, processing time was reduced by an additional 15% when I set the parallelism level to 4x the default value. The default is the total number of virtual CPUs -- 512 for a T5-4. The table below reports the times for the different parallelism levels with 1, 2 or 4 CPUs enabled (seconds to complete):

Parallelism Level    1 CPU      2 CPUs     4 CPUs
1                    1136.26    1126.1     1131.29
16                   91.58      92.27      96.51
32                   52.99      47.88      44.61
64                   33.54      25.67      22.62
128                  29.24      17.08      13.59
256                  28.76      15.42      9.3
512                  28.72      14.73      9.07
1024                 29.04      15.02      8.5
2048                 28.65      14.68      7.74

Let's now look at scalability. I have plotted in the side graph execution speedup vs. number of physical CPU cores for different parallelism levels (abbreviated as number of threads per core). The red line shows the inherent scalability of Java fork/join: linear scalability until 16 CPU cores, then continued scalability to 60 and beyond -- not bad for an automatic parallelism technology. If we now increase the parallelism level to run with multiple threads per core, to leverage the hardware Chip Multi-Threading (CMT) capability of SPARC T-Series, we get dramatic speed-ups and a super-linearity effect. More interestingly, past the number of threads physically supported in hardware, we see that we are not bringing the overloaded system to its knees; on the contrary, it handles the load very well and is even able to extract a little more performance out of the overall system.

This CMT behavior is something we consistently see, and this Java fork/join benchmark was no exception: a great contribution to throughput performance and robustness in the face of overload. It is because CMT threads very efficiently share the core's execution pipeline, significantly decreasing the number of wasted CPU cycles (stalls) and dramatically increasing CPU utilization -- measured in instructions per second -- as a result. When reading large memory areas, e.g., far beyond what any existing memory caches can satisfy, the application's threads often stall on cache misses, and the CMT technology enables the CPU to immediately switch to a runnable thread with no penalty.

Back to the Java world. Java 8 introduces, among other features, streams, parallel streams and lambda expressions. What happens when these new capabilities meet the scalability power of SPARC T5? This will be my next post.
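For anyone trying to reproduce the CPU-count dimension of the table above, Solaris lets you take processors offline from the shell. This is a minimal sketch -- the processor ID range is illustrative and system-dependent:

# Show the processors and their current state.
psrinfo

# Take a range of virtual processors offline one by one, so that only
# the remaining ones service the benchmark (IDs are system-dependent).
i=128
while [ $i -lt 512 ]; do
    psradm -f $i
    i=`expr $i + 1`
done

# ...run the benchmark, then bring the processors back online:
i=128
while [ $i -lt 512 ]; do
    psradm -n $i
    i=`expr $i + 1`
done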



SAS and Oracle - Partnering Innovation on Big Data and Cloud

Oracle was a Platinum sponsor of the 39th SAS Global Forum, which took place at the Gaylord Convention Center, Washington DC on March 23-26th with a record attendance of over 4,500 attendees. SAS Global Forum is the annual gathering of SAS professionals, and has been happening every year since 1976. Here is the Opening Keynote by CEO Jim Goodnight -- there is a good overview in Dr. Goodnight's 10-minute opening. The theme of this year's conference was celebrating the "potential of one, power of all", around the potential to turn a single data discovery into a better way of doing business.

Oracle's Platinum sponsorship centered around the partnership with SAS to deliver the convergence of in-memory and in-database cloud platforms, for which Oracle's Engineered Systems are providing unprecedented performance and scalability in SAS environments. Oracle had 2 main conference presentations, 2 SAS partner showcase opportunities and booth tech talks around this theme. David Lawler, Oracle SVP Product Management and Strategy, and Paul Kent, SAS VP Big Data, presented on SAS and Oracle: Big Data and Cloud - Partnering Innovation Targets the Third Platform. Paul and David shared their strategic insight into how SAS High-Performance Analytics solutions are tackling today's big data challenges and their requisite union with "data-optimized cloud platforms". The benefits of the collaborative effort between SAS and Oracle enable joint customers to realize tangible value by analyzing all their data quickly, safely and with the necessary agility to reduce time to insight.

As evidence of the SAS and Oracle partnership producing results for customers, a large banking institution in Latin America presented a solution of a global SAS deployment on Oracle SPARC M5 servers. This solution provided the bank impressive scalability for analytics, with 5,000 SAS users consolidating over 50 Oracle Databases onto the same system, with a 70% reduction in processors and a 30% performance improvement.



Oracle DB 12c runs best on Sparc

Oracle's vision of Engineered Systems is transforming into reality with every new product that Oracle launches. Oracle Database 12c, which was launched in 2013, is an example of how Oracle software is optimized for Oracle Solaris and SPARC. Oracle Database 12c is co-engineered with the Solaris engineering team, and Oracle's world-record SPARC T5 servers deliver top performance and maximum ROI. Independent Software Vendors (ISVs) who develop applications using both Oracle DB 12c and Solaris take advantage of one core technology, one operating system and one virtualization tool across the whole range of SPARC servers. An ISV application can boost its performance, flexibility and security just by using SPARC's high core counts and big memory, the combined Oracle 12c/Solaris/SPARC multi-tenancy, Zones and LDoms lightweight virtualization technologies, SPARC and Solaris built-in encryption, and so on. Let's take a look at how all this translates into technical features. What makes Solaris and SPARC servers the best infrastructure for Oracle 12c enterprise databases?

Oracle Solaris DTrace is integrated into Oracle 12c and provides an end-to-end view of I/O operations taking too long. This way, a database administrator can trace the I/O requester, the I/O device and the exact time spent in each layer: database, OS and the storage device. This is an ideal tool when the storage is a raw device. During runtime, data is collected and stored in V$KERNEL_IO_OUTLIER. With simple queries, any DBA can find out whether there is a problem with the storage device or the HBA connector (a sample query is sketched at the end of this section). These are the fields of V$KERNEL_IO_OUTLIER:

Column                Description
TIMESTAMP             Number of seconds elapsed since 00:00 UTC, January 1, 1970
IO_SIZE               Size of the I/O, in KB
IO_OFFSET             Offset into the device of the I/O
DEVICE_NAME           Name of the device to which the I/O was targeted
PROCESS_NAME          Name of the process that issued the I/O
TOTAL_LATENCY         Total time in microseconds the I/O spent in the kernel
SETUP_LATENCY         Time in microseconds spent during initial I/O setup before sending to the SCSI target device driver
QUEUE_TO_HBA_LATENCY  Time in microseconds spent in the SCSI target device driver before being sent to the Host Bus Adapter
TRANSFER_LATENCY      Time in microseconds spent in the Host Bus Adapter and physically transferring the I/O to the storage device
CLEANUP_LATENCY       Time in microseconds spent freeing resources used by the completed I/O
PID                   Process ID that issued the I/O

For an ISV, this tool helps to root-cause a problem when a customer is complaining of poor application performance. Without needing the root password, one can visualize the entire I/O path, from process name to storage partition.

SGA online resizing without reboot is new with Oracle Database 12c, and available only on Solaris 11.1. Combined with Solaris Zones and resource management, this feature allows fast start-up of an Oracle instance and dynamic memory resizing. When running in a complex, virtualized environment like a cloud, this feature helps avoid disruptions of database services and makes SGA adjustment easy when the database is under heavy use. Solaris 11.1 implements a new Optimized Shared Memory (OSM) interface which replaces DISM.

Oracle 12c's new multi-tenancy feature running over Solaris virtualization (Zones and/or LDoms) is a powerful and dynamic infrastructure for the cloud. You can install and run multiple versions of the database and OS on the same server. What an economical architecture to support old and new versions of different applications!
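As an illustration of the kind of query a DBA might run against this view -- a sketch only; the ordering and row limit are arbitrary choices, and the view is only populated when the DTrace-based collection is active:

# Show the ten worst recent I/O outliers; the columns referenced
# are the ones listed in the table above.
sqlplus -s / as sysdba <<'EOF'
SELECT device_name, process_name, io_size, total_latency
FROM   v$kernel_io_outlier
ORDER  BY total_latency DESC
FETCH FIRST 10 ROWS ONLY;
EOF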
And if you have Solaris ZFS as the underlying filesystem, you can take advantage of its snapshot capabilities and easily move zones, in just a few seconds, to a different server. Thanks to Solaris 11 network virtualization, complete isolation is guaranteed between applications running in different zones, even if the zones are using the same NIC of the server. This way you get maximum consolidation! This architecture is ideal for a development environment where multiple customer configurations must be supported. Solaris network virtualization can help mimic complex customer network configurations, all in just one box.

End-to-end encryption comes with no additional cost when running Oracle 12c on Solaris and SPARC. SPARC servers like the T5 have an encryption engine on each core that accelerates all of the most common bulk encryption ciphers, like AES and DES. SPARC T5 also supports asymmetric key exchange with RSA and ECC, and authentication or hash functions like SHA and MD5. ZFS encryption is supported from Solaris 11, and when running on a SPARC T5-2 it achieves the ZFS File System Encryption Benchmark World Record. Oracle 12c uses fast Oracle Database Advanced Security Transparent Data Encryption (TDE), and Oracle Database 12c Release 1 (12.1) introduces a unified key management interface for TDE and other database components. This eases key administration tasks, provides for better compliance and tracking, and improves separation of duties between the database administrator and the security administrator.

Oracle has a strong Solaris SPARC roadmap, investing in silicon leadership. The next generation of SPARC servers will have software-in-silicon capabilities that will dramatically improve database performance. Due to Oracle Solaris' commitment to application backward compatibility, ISV applications will improve their performance just by supporting Solaris on SPARC.

It's very simple to get your hands on all this technology. As an Oracle ISV partner, check that your company is an OPN member at the Gold level. Then request your free access to the Oracle Solaris Remote Lab, our cloud infrastructure to run Solaris SPARC and x86 virtual machines. It provides templates with Oracle Database, WebLogic or Solaris Studio pre-installed! Look at the following whitepaper and/or video to quickly learn what Oracle Solaris Remote Lab is and how to access it. See you there!

Oracle Database 12c Runs Best on Oracle's SPARC Servers with Oracle Solaris -- Co-Engineered for Performance and Efficiency



Solaris Studio 12.4 Beta is live

The Oracle Solaris Studio 12.4 Beta release is out. Oracle Solaris Studio is a suite of compilers and code analysis tools that assist developers in creating highly optimized, robust, and secure applications for the Oracle Solaris and Linux operating systems. These tools help application developers achieve the best performance on Oracle's newest T-series and M-series SPARC servers, Fujitsu's M10 servers, and Intel-based servers.

New features and enhancements in Oracle Solaris Studio 12.4 include:
- A new C++ compiler and dbx debugger that support the C++ 2011 language standard
- A completely redesigned Performance Analyzer UI that simplifies identification of key performance issues, plus remote data analysis, cross-architecture support, comparison of results, and improved kernel profiling
- Code Analyzer for improving your application with static source-code checking, run-time memory access checking (including memory leaks), and identification of un-exercised code; the graphical user interface and command line provide robust interfaces for reviewing results and historical analysis of data
- Compiler and library optimizations for Oracle's SPARC T5, M5, and M6, Fujitsu's M10, and Intel's Ivy Bridge and Haswell servers
- Support for the new OpenMP 4.0 standard, including region cancellation, thread affinity, tasking extensions and sequentially consistent atomics
- An Integrated Development Environment (IDE) that includes C++ 2011 support, improved response time, and a smaller memory footprint to efficiently handle very large source repositories

Visit the Oracle Solaris Studio 12.4 Beta homepage and download the Beta release today!



LDom Direct-IO gives fast and virtualized IO to ECI Telecom

ECI Telecom is a leading telecom networking infrastructure vendor and a long-time Oracle partner. ECI provides innovative communications platforms and solutions to carriers and service providers worldwide that enable customers to rapidly deploy cost-effective, revenue-generating services. ECI Telecom's Network Management solutions are built on the Oracle 11gR2 Database and the Solaris operating system.

"As one of the leading suppliers in the telecom networking infrastructure, ECI has a long term relationship with Oracle. Our main Network Management products are based on Oracle Database, Oracle Solaris and Oracle's Sun servers. Oracle Solaris is proven to be a mission critical OS for its high performance, extreme stability and binary compatibility guarantee." Mark Markman, R&D Infrastructure Manager, ECI Telecom

Not long ago, ECI was asked by a customer to provide a scalable solution with a smaller footprint, with a preference for a VM-like environment that could be rapidly deployed onto the carrier infrastructure and provide faster time-to-market. The main prerequisite, however, was not to compromise the application's performance, as the database disk I/O performance requirements can be especially demanding when the carrier has peak network traffic in its infrastructure. ECI was facing a tough challenge: introducing no I/O performance penalty while deploying inside a virtualized environment. Indeed, once the I/O subsystem is virtualized, the disk I/O traffic passes through a virtualization layer which can add a "virtualization tax". In order to provide both virtualization and native disk I/O performance, ECI turned to Oracle, to Oracle VM Server for SPARC virtualization (a.k.a. Logical Domains or LDom) and to the following features, with the support of the local Oracle ISV Engineering team.

First, ECI used the LDom Direct I/O (DIO) capability. With this technology, ECI was able to enjoy the benefits that a virtualized environment can provide, such as fast provisioning, advanced resource management and better system utilization. DIO allows ECI to assign a PCIe slot from a SPARC T5-2 server directly to the guest domain, without any intervention from the hypervisor (a minimal sketch of the commands involved appears below). The main benefits of this technology are:
- Native I/O performance: the Solaris operating system has access to native I/O drivers instead of virtual ones; the storage device can then be accessed directly, bypassing the virtualization layer and its associated overhead.
- Predictable I/O performance: assigning exclusive access to the storage device ensures that the device will be able to serve the current and future I/O workload; with DIO, there is no need to share the storage device with other guest domains and affect the application's I/O performance.

ECI also deployed the Sun Flash Accelerator F40 PCIe card:
- Low latency: flash technology can complete an I/O operation in microseconds, placing it between hard disk drives (HDD) and DRAM in terms of latency. Because flash technology contains no moving parts, it avoids the long seek times and rotational latencies inherent to traditional HDD technology. As a result, data transfers between the different application tiers are improved dramatically.
- Small footprint: the F40 fits on a low-profile PCIe card plugged in inside the SPARC T5 server while conserving energy consumption. The solid-state DOM also operates at low power in comparison to disk devices.
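For readers who have not used DIO before, here is a minimal sketch of what the assignment looks like from the control domain. The domain name and the PCIe slot pseudonym are illustrative; the actual path depends on the server's PCIe topology:

# List the PCIe endpoint devices available for direct I/O assignment.
ldm list-io

# Stop the guest domain, hand it the PCIe slot directly, and restart it
# (device pseudonym and domain name illustrative).
ldm stop-domain ldg1
ldm add-io /SYS/MB/PCIE5 ldg1
ldm start-domain ldg1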
Finally, Oracle offers two sophisticated software solutions designed to optimize application performance: Oracle Database Smart Flash Cache and Oracle Solaris ZFS. Oracle Database Smart Flash Cache is a unique software feature included with Oracle Database 11gR2; it applies extensive knowledge of dynamic database usage patterns to boost performance as a Database SGA extension. The Oracle Solaris ZFS filesystem optimizes and automates the use of flash storage technology. ZFS is included as an integrated feature of Oracle Solaris; the flash storage can be used as an advanced caching layer for filesystem read and write I/O operations.

ECI conducted a proof-of-concept of Oracle DB performance using their internal tests; the I/O workload characteristic was a combination of 20% sequential and 80% random writes. ECI was able to achieve the following performance metrics during the benchmark:
- Disk latency: 0.5 msec
- Disk I/O throughput: 900 MBytes/sec
- Disk I/O operations per second: 7,000 IOPS

All results were similar to native disk I/O performance metrics. In conclusion, ECI Telecom could provide a cost-effective solution without any performance or scalability issues. We saw here how ECI delivered maximum performance, efficiency and ease of use, with minimum space and capacity, for Oracle DB 11gR2 environments. Together, the Oracle software and hardware boost system speed, simplify operations and lower costs -- all without management overhead. This reduces business cycles, promotes new efficiency and enhances the customer experience. This is an example of how Oracle technologies that are "engineered to work together" can provide better value for Oracle customers.



Bounce rate is not everyone's enemy

I am cross-posting here an interesting piece of info and advice that my colleague Abhijith Ramalingaiah shared on an Oracle alias and then on his slideshare page. It has been a while since I posted something about web analytics, a topic you quickly get interested in after starting your own blog, so thanks to Abhijith for reviving it :-)

Abhijith is taking a closer look at Bounce Rate (BR). BR represents the percentage of visitors who enter the site and leave ("bounce") rather than continue viewing other pages on the same site. A high BR is thus typically viewed as a sign of poor performance: visitors are leaving. Abhijith tells us, however, that it is not necessarily such a negative sign, and that the content on the page has to be considered, as well as the visitor experience. Abhijith then goes through the following scenarios where BR is not a good measurement for the page.

Imagine you're posting a blog entry that describes all the benefits of your product. The visitor might read the whole post, remember your product really well, and even go straight to a search engine to look for a few more similar products. However, since the visitor only looked at one page (exactly the one where the blog post is), they will be recorded as a bounced visitor. Another example: if you have a description of the product right on the landing page, and your phone number on the same page, the visitor might study the description and call straight away -- again, they will be recorded as a bounced visitor, as only one page was viewed. This is especially true if they come for a specific article / product spec / event registration / contact details.

So BR is really useful for a page where visitors first come into the website. The reason BR is not a good measurement for other pages is that not many people enter the site on those pages -- instead they sit in the middle of most people's paths. People should ALWAYS include Entries along with the Bounce Rate in any reports. Segmentation plays a crucial role in better understanding; I believe you're already enjoying it if you have started using segments to do data analysis! You could segment visitors who come from search engines / social sites / campaigns, or direct traffic / paid traffic / first-time visits.



Misys Kondor+ runs best on SPARC T5

Misys is a leading financial software vendor, providing the broadest portfolio of banking, treasury, trading and risk solutions available on the market. At ISV Engineering, we have a long-standing collaboration with the Kondor+ product line, which came from the Turaz acquisition (formerly part of Thomson Reuters) and is now part of the Misys Treasury & Capital Markets (TCM) portfolio. Jean-Marc Jacquot, Senior Technical Consultant at Misys TCM, was recently interviewed by ITplace.TV (in French) about a recent IT redesign of the Kondor+ installed base at a major French financial institution.

The customer was running various releases of Kondor+ over a wide range of SPARC machines, from the Sun V240 to the M4000. The IT project aimed at rationalizing these deployments for better ROI, SLA and raw performance -- to meet the new system requirements of the latest Kondor+ release. In the short list, SPARC and Solaris beat x86 and Linux on performance, virtualization and price. The customer ordered his first SPARC T5-2 systems this year.

Regarding performance: in real-life benchmarking performed at the customer site, SPARC T4 was faster than the SPARC M4000 and the x86 machine in test. Regarding virtualization: the use of Oracle VM Server for SPARC (a.k.a. Logical Domains or LDom) allows the consolidation of machines with different system times; the seasonal right-sizing of the "end-of-year" machine; and the mixing of Solaris 11 and Solaris 10 -- to meet the different OS requirements of different Kondor+ releases. Regarding price: Jean-Marc pointed out that Solaris embeds for free virtualization technologies (LDom, Zones) that come for a hefty fee on x86 platforms.

This proof point is particularly interesting because it shows the superiority of SPARC in a real-life deployment. SPARC is a balanced design -- not in the race, e.g., for absolute single-thread performance, price or commoditization -- that is built to perform extremely well for enterprise workloads and service levels. Notably, Jean-Marc had an anecdote on the stability of Solaris on SPARC: he had just performed the first reboot of a system that had been up for the past 2606 days. That's 7 years!



Moving Oracle Solaris 11 Zones between physical servers

As part of my job in the ISV Engineering team, I am often asked by partners the following question: is it possible to easily move a Solaris 11 Zone from one physical server to another? The short answer is: YES! The longer one comes with the following restrictions:
- Both physical servers should be of the same architecture, x64 or SPARC (T-series and M-series systems are compatible).
- Both physical servers should run Oracle Solaris 11.
- The destination server should run at least the same or a higher release of Solaris 11. This includes the SRU (Support Repository Update) level.

Given a physical server called "Source" hosting a Solaris 11 Zone called "myZone" on a ZFS filesystem, here are the steps to move the zone to another physical server called "Target":

1. Export the Zone's configuration. The zone needs to be configured on the destination server before it can be installed. The first step is to export the configuration of the Zone to a file:

[Source]# zonecfg -z myZone export -f myZone.cfg

2. Archive the Zone. My favourite solution is to use the ZFS "send" functionality to archive the ZFS file system hosting the Zone in a single movable file, although this can also be achieved in other ways (cpio, pax).

Halt the Zone:

[Source]# zoneadm -z myZone halt

Take a recursive ZFS snapshot of the rpool of the zone:

[Source]# zfs snapshot -r rpool/zones/myZone@archive

Archive the Zone using ZFS send (ZFS and cpio archives can be zipped using gzip or bzip2):

[Source]# zfs send -rc rpool/zones/myZone@archive | bzip2 > /var/tmp/myZone.zfs.bz2

3. Move the configuration and archive files to the destination server: FTP, scp, NFS, removable hard drive, ...

4. Configure the zone on the destination server. Depending on the configuration of the Target server, you might need to tweak the zone configuration file before using it.

[Target]# zonecfg -z myZone -f myZone.cfg

5. Install the Zone. If the zone is being installed in the same network, the zone configuration (IP address, DNS server, gateway, etc.) can be preserved using the "-p" option:

[Target]# zoneadm -z myZone install -a myZone.zfs.bz2 -p

If the Zone is being installed in a new network environment, using the "-u" option instead of "-p" will unconfigure the system. The Zone would then need to be manually configured on the first boot. The configuration can be automated during installation if a system configuration profile XML file is provided:

[Target]# zoneadm -z myZone install -a myZone.zfs -u -c sc_profile.xml

Quick tip: to create a system configuration file, you can use the sysconfig program with the option "create-profile":

# sysconfig create-profile -o sc_profile.xml

The configuration text wizard will walk you through the system configuration steps (same process as the first-boot configuration wizard) but will not re-configure your system. It will simply create an output XML file with the defined system configuration. This file can then be manually tweaked if needed and act as a template for future use.

6. Boot the Zone:

[Target]# zoneadm -z myZone boot

You should now be able to log in to the Zone, which is an exact copy of the original Zone on the source server. Obviously there are many more options and possibilities that go beyond the scope of this post. My intent was just to give a glimpse of what can be done, so don't hesitate to consult the documentation for more options. Also, these simple steps can be scripted to be made even more flexible and usable. Below are two scripts I have written for my own needs. They are only provided as an example and must not be considered production-ready scripts.
archive_zone.sh

#!/bin/sh
#----------------------------------------------------------------
# archive_zone.sh
#
# This script creates a movable archive of a Solaris 11 Zone.
# It takes a single input parameter: the Zone name.
#----------------------------------------------------------------

SCRIPT_NAME=$0
BASE_DIR="$(pwd -P)"
ZONE_NAME=$1

ZONES_ROOT=rpool/zones
ARCHIVE_FOLDER=/var/tmp/${ZONE_NAME}
ARCHIVE_FILE=${ZONE_NAME}.zfs.bz2
CONFIG_FILE=${ZONE_NAME}.cfg
SNAPSHOT=${ZONES_ROOT}/${ZONE_NAME}@`date '+%d_%m_%Y-%H:%M:%S'`

if [ ! -d ${ARCHIVE_FOLDER} ] ; then
    mkdir -p ${ARCHIVE_FOLDER}
fi

# Halt the zone and export its configuration
zoneadm -z ${ZONE_NAME} halt
zonecfg -z ${ZONE_NAME} export -f ${ARCHIVE_FOLDER}/${CONFIG_FILE}

# Take a recursive ZFS snapshot of the rpool of the zone
zfs snapshot -r ${SNAPSHOT}

# Archive the zone using ZFS send
zfs send -rc ${SNAPSHOT} | bzip2 > ${ARCHIVE_FOLDER}/${ARCHIVE_FILE}

# Delete the snapshot used to create the archive
zfs destroy -r ${SNAPSHOT}

deploy_zone.sh

#!/bin/sh
#----------------------------------------------------------------
# deploy_zone.sh
#
# This script deploys an archived Solaris 11 Zone.
# It takes a single input parameter: the Zone name.
#----------------------------------------------------------------

SCRIPT_NAME=$0
BASE_DIR="$(pwd -P)"
ZONE_NAME=$1

ARCHIVE_FOLDER=/var/tmp/${ZONE_NAME}
ARCHIVE_FILE=${ZONE_NAME}.zfs.bz2
CONFIG_FILE=${ZONE_NAME}.cfg

# Configure the zone from the exported configuration file
zonecfg -z ${ZONE_NAME} -f ${ARCHIVE_FOLDER}/${CONFIG_FILE}

# Install the zone; -u unconfigures the system for reconfiguration on first boot
zoneadm -z ${ZONE_NAME} install -a ${ARCHIVE_FOLDER}/${ARCHIVE_FILE} -u

# Boot the zone
zoneadm -z ${ZONE_NAME} boot
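For reference, a typical run of the two scripts might look like this (host names and paths are illustrative; remember that archive_zone.sh halts the zone):

[Source]# ./archive_zone.sh myZone
[Source]# scp -r /var/tmp/myZone Target:/var/tmp/
[Target]# ./deploy_zone.sh myZone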


sun

IPgallery banks on Solaris SPARC

IPgallery is a global supplier of converged legacy and Next Generation Networks (NGN) products and solutions, including core network components and cloud-based Value Added Services (VAS) for voice, video and data sessions. IPgallery enables network operators and service providers to offer advanced converged voice, chat, video/content services and rich unified social communications in combined legacy (fixed/mobile), Over-the-Top (OTT) and Social Community (SC) environments for home and business customers. Technically speaking, this offer is a scalable and robust telco solution enabling operators to offer new services while controlling operating expenses (OPEX). In its solutions, IPgallery leverages the following Oracle components: Oracle Solaris, Netra T4 and SPARC T4, in order to provide a competitive and scalable solution without the price tag often associated with high-end systems.

Oracle Solaris Binary Application Guarantee. A unique feature of Oracle Solaris is the guaranteed binary compatibility between releases of the Solaris OS. That means that if a binary application runs on Solaris 2.6 or later, it will run on the latest release of Oracle Solaris. IPgallery developed their application on Solaris 9 and Solaris 10, and now runs it on Solaris 11 without any code modification or rebuild. The Solaris Binary Application Guarantee helps IPgallery protect their long-term investment in the development, training and maintenance of their applications.

Oracle Solaris Image Packaging System (IPS). IPS is a repository-based package management system that comes with Oracle Solaris 11. It provides a framework for complete software life-cycle management such as installation, upgrade and removal of software packages. IPgallery leverages this new packaging system to speed up and simplify software installation for the R&D and production environments. Notably, they use IPS to deliver Solaris Studio 12.3 packages as part of the rapid installation process of R&D environments, and during the production software deployment phase they ensure software package integrity using the built-in verification feature (a sketch of the corresponding commands appears at the end of this post). Solaris IPS thus improves IPgallery's time-to-market with faster, more reliable software installation and deployment in production environments.

Extreme Network Performance. IPgallery saw a huge improvement in application performance, both in CPU and I/O, when running on the SPARC T4 architecture compared to UltraSPARC T2 servers. The same application (with the same activation environment) consumes 40%-50% of the CPU on T2, while it consumes only 10% of the CPU on T4. The testing environment comprised a Softswitch (call management), TappS (Telecom Application Server) and Billing Server running on the same machine and initiating various services at a capacity of 1,000 CAPS (Call Attempts Per Second). In addition, tests showed a huge improvement in the performance of the TCP/IP stack, which reduces network-layer processing and, in the end, call-attempt latency. Finally, there is a huge improvement in file system and disk I/O operations; they ran all tests with maximum logging capability and it didn't influence any benchmark values.

"Due to the huge improvements in performance and capacity using the T4-1 architecture, IPgallery has engineered the solution with less hardware. This means that instead of deploying the solution on six T2-based machines, we will deploy on 2 redundant machines while utilizing Oracle Solaris Zones and Oracle VM for higher availability and virtualization." -- Shimon Lichter, VP R&D, IPgallery

In conclusion, using the unique combination of Oracle Solaris and SPARC technologies, IPgallery is able to offer solutions with much lower TCO, while providing a higher level of service capacity, scalability and resiliency. This low-OPEX solution enables the operator (IPgallery's end-customer) to deliver a high-quality service while maintaining high profitability.
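As a rough illustration of the IPS workflow described above -- the package name below is an assumption for illustration, not confirmed by IPgallery:

# Install Oracle Solaris Studio from a configured IPS repository (package name assumed)
# pkg install developer/solarisstudio-123
# Verify the integrity of the installed package before production deployment
# pkg verify developer/solarisstudio-123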


sun

VistaMart 4.3 shines on Oracle storage, SPARC T4 and Solaris 11

As part of our ongoing technological partnership with InfoVista, a leading provider of service performance assurance software solutions for IP-based network and application services, we ran a performance benchmark of the central component of the latest VistaFoundation platform: VistaMart 4.3. VistaFoundation collects raw performance data from network devices. VistaMart is in charge of computation, aggregation and storage of the collected data, and embeds the Oracle 11gR2 database. The tests were performed on Oracle Sun servers and storage running Solaris 11 11/11.

Purpose of the benchmark. The key objective was to measure the scalability and performance gains obtained by running the new version of VistaMart 4.3 on the latest SPARC T4 processors, focusing on large network topologies --1 million interfaces or above-- and high insertion rates into the Oracle Database. Also, Solaris 11 Zones --an operating system-level virtualization technology for x86 and SPARC systems-- offered an opportunity to deploy and run a higher number of VistaMart instances on a given physical server, either for vertical scalability or for high availability.

Benchmark configuration. The Sun SPARC T4-4 server used for this benchmark provides 256 computing threads at 3 GHz with 4 SPARC T4 processors (i.e. 64 threads per processor), making it a natural candidate for virtualization. In order to reproduce a real customer environment, a Sun SPARC T3-2 server was also used to deploy the complete VistaInsight for Networks application and a load simulator. The I/O load of our transactional database produces a lot of synchronous writes. This possible bottleneck was addressed by combining the Solaris ZFS file system and SSD storage --provided by a Sun Storage F5100 Flash Array directly attached to the T4-4-- to improve performance without compromising cost. ZFS is flexible enough to dedicate SSDs to the read and write caches, respectively the L2ARC and the ZIL in ZFS terminology (a command sketch appears at the end of this post). The rest of the data, which represents the bulk of the storage consumption but doesn't generate a high amount of critical I/O operations, is located on a mid-range Sun ZFS Storage 7310 Appliance that acts as a SAN.

First scenario: stress the full T4-4. In this scenario the full T4-4 server was used, and both VistaMart 4.3 and Oracle 11gR2 (version 11.2.0.3) were installed in the global zone. A network topology with approximately 500K interfaces was chosen, with the load simulator pushing data for this topology into VistaMart, which aggregated it into the Oracle database. The default is to collect 13 KPIs for each interface in a network, every 5 minutes. An insertion rate of roughly 22,000 data/sec is therefore required by InfoVista in such a scenario. After several optimizations of the VistaMart configuration and the I/O layer, we achieved a staggering insertion rate of 103,341 data/sec. That is more than 4.5x better than what is currently required by InfoVista. This means that customers can dramatically increase the number of interfaces monitored, and/or the number of KPIs per interface, using a T4-4 server.

Second scenario: simulate a T4-1 through a Zone. In this scenario, a Solaris 11 Zone using only ¼ of the T4-4's total resources was created (CPU and memory capped). In other words, we tried to loosely simulate a T4-1 server (although the T4-1 runs at only 2.8 GHz instead of the T4-4's 3 GHz). The topology was gradually increased from 54K to 1M interfaces and beyond.

We also ran in both nominal and recovery mode, in which VistaMart collects additional data that arrives after a failure. This mode introduces additional overhead and additional resource usage. Once again, strong performance was achieved with a quarter of a T4-4: the average insertion rate is consistently above the 22,000 data/sec watermark, even in recovery mode. Recovery reduces insertion performance, but to an acceptable level. Zones allow perfect application isolation, and we can safely extrapolate a 4x gain if we design 4 similar containers on the T4-4. A full T4-4 can thus handle more than 2 million interfaces at a 5-minute granularity.

Conclusion. The T4-4 server, in conjunction with a 7310 ZFS array and a set of single-cell SSDs, achieved an insertion rate gain of above 4.5 times on an application that relies on the Oracle 11gR2 database. T4-4 servers, with their 256 hardware threads, allowed us to dramatically improve scalability. Solaris Zones technology is another form of application scalability. Using this technology, we were able to support monitoring of above 2,000,000 interfaces at a display rate of 5 minutes on a T4-4 server with only 512 GB of RAM. VistaMart 4.3 also shows extreme scalability that is now fully unleashed on a many-core, many-thread system such as the T4-4. It's also worth noting that Solaris 11 was used here without prior certification of VistaMart, confirming once again the binary compatibility of Solaris 10 applications on Solaris 11. Finally, as always at Oracle ISV Engineering, we are happy to help our ISV partners test their own applications on our platforms, so don't hesitate to contact us. Wouldn't you like to see what the SPARC T4-based systems can do for your application?
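As a minimal sketch of the ZFS SSD setup described above, assuming a pool named dbpool and hypothetical SSD device names:

# Dedicate one SSD to the read cache (L2ARC) and one to the write log (ZIL)
# zpool add dbpool cache c5t0d0
# zpool add dbpool log c5t1d0
# zpool status dbpool    (the devices appear under "cache" and "logs")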


sun

Gemalto Mobile Payment Platform on Oracle T4

Gemalto is the world leader in digital security, at the heart of our rapidly evolving digital society. Billions of people worldwide increasingly want the freedom to communicate, travel, shop, bank, entertain and work -- anytime, everywhere -- in ways that are convenient, enjoyable and secure. Gemalto delivers on their expanding needs for personal mobile services, payment security, identity protection, authenticated online services, cloud computing access, eHealthcare and eGovernment services, modern transportation solutions, and M2M communication. Gemalto's solutions for Mobile Financial Services are deployed at over 70 customers worldwide, transforming the way people shop, pay and manage personal finance. In developing markets, Gemalto Mobile Money solutions are helping to remove the barriers to financial access for the unbanked and under-served, by turning any mobile device into a payment and banking instrument.

In recent benchmarks by our Oracle ISVe Labs, the Gemalto Mobile Payment Platform demonstrated outstanding performance and scalability using the new T4-based Oracle Sun machines running Solaris 11. Using a clustered environment on a mid-range 2x2.85GHz T4-2 Server (16 cores total, 128GB memory) for the application tier, and an additional dedicated Intel-based (2x3.2GHz Intel Xeon) Oracle database server, the platform processed more than 1,000 transactions per second, limited only by database capacity --higher performance was easily achievable with a stronger database server. Near-linear scalability was observed by increasing the number of application software components in the cluster. These results show an increase of nearly 300% in processing power and capacity on the new T4-based servers relative to the previous generation of Oracle Sun CMT servers, and for a comparable price.

In the fast-evolving Mobile Payment market, it is crucial that the underlying technology seamlessly supports Service Providers as the customer base ramps up, use cases evolve and new services are launched. These benchmark results demonstrate that the Gemalto Mobile Payment Platform is designed to meet the needs of any deployment scale, whether targeting 5 or 100 million subscribers. Oracle Solaris 11 DTrace technology helped to pinpoint performance issues and tune the system accordingly, to achieve optimal utilization of compute resources.
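The post doesn't show the DTrace scripts used in the Gemalto benchmark; as a generic, hedged illustration of the kind of first-pass profiling DTrace enables, this standard one-liner counts on-CPU samples per executable:

# Sample on-CPU activity 997 times per second and count samples by executable name
# dtrace -n 'profile-997 { @[execname] = count(); }'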


sun

Talend Enterprise Data Integration overperforms on Oracle SPARC T4

The SPARC T microprocessor, released in 2005 by Sun Microsystems and now continued at Oracle, has a good track record in parallel execution and multi-threaded performance. However, it was less suited for pure single-threaded workloads. The new SPARC T4 processor fills that gap by offering 5x better single-thread performance over previous generations. Following our long-term relationship with Talend, a fast-growing ISV positioned by Gartner in the "Visionaries" quadrant of the "Magic Quadrant for Data Integration Tools", we decided to test some of their integration components with the T4 chip, more precisely on a T4-1 system, in order to verify first hand whether this new processor lives up to its promises.

Several tests were performed, mainly focused on:
- Single-thread performance of the new SPARC T4 processor compared to an older SPARC T2+ processor
- Overall throughput of the SPARC T4-1 server using multiple threads

The tests consisted of reading large amounts of data --tens of gigabytes--, processing it and writing it back to a file or an Oracle 11gR2 database table. They are CPU, memory and I/O bound tests. Given the main focus of this project --CPU performance--, bottlenecks were removed as much as possible on the memory and I/O sub-systems. When possible, the data to process was put into the ZFS file system cache, for instance. Also, two external storage devices were directly attached to the servers under test, each one divided into two ZFS pools for read and write operations.

Multi-thread: testing throughput on the Oracle T4-1. The tests were performed with different numbers of simultaneous threads (1, 2, 4, 8, 12, 16, 32, 48 and 64) and using different storage devices: Flash, Fibre Channel storage, two striped internal disks and one single internal disk. All storage devices used ZFS for file system and volume management. Each thread read a dedicated 1 GB file containing 12.5M lines with the following structure:

customerID;FirstName;LastName;StreetAddress;City;State;Zip;Cust_Status;Since_DT;Status_DT
1;Ronald;Reagan;South Highway;Santa Fe;Montana;98756;A;04-06-2006;09-08-2008
2;Theodore;Roosevelt;Timberlane Drive;Columbus;Louisiana;75677;A;10-05-2009;27-05-2008
3;Andrew;Madison;S Rustle St;Santa Fe;Arkansas;75677;A;29-04-2005;09-02-2008
4;Dwight;Adams;South Roosevelt Drive;Baton Rouge;Vermont;75677;A;15-02-2004;26-01-2007
[...]

The results were as follows. Unsurprisingly, up to 16 threads all files fit in the in-memory ZFS cache (the ARC): once the cache is hot, there is no performance difference depending on the underlying storage. From 16 threads upwards, however, it is clear that I/O becomes a bottleneck; having a good I/O subsystem is thus key. Single-disk performance collapses, whereas the Sun F5100 and ST6180 arrays allow the T4-1 to scale quite seamlessly. From 32 to 64 threads, the performance is almost constant, with just a slow decline. For the database load tests, only the best I/O configuration --using external storage devices-- was used, hosting the Oracle tablespaces and redo log files. Using the Sun Storage F5100 array allows the T4-1 server to scale up to 48 parallel JVM processes before saturating the CPU. The final result is a staggering 646K lines per second inserted into an Oracle table using 48 parallel threads.

Single-thread: testing the single-thread performance. Seven different tests were performed on both servers. Given that only one thread, thus one file, was read, no I/O bottleneck was involved, all data being served from the ZFS cache.

- Read File → Filter → Write File: read a file, filter the data, write the filtered data to a new file. The filter is set on the "Status" column: only lines with status set to "A" are selected. This limits each output file to about 500 MB. (A conceptual one-liner equivalent of this test appears at the end of this post.)
- Read File → Load Database Table: read a file, insert into a single Oracle table.
- Average: read a file, compute the average of a numeric column, write the result to a new file.
- Division & Square Root: read a file, perform a division and square root on a numeric column, write the resulting data to a new file.
- Oracle DB Dump: dump the content of an Oracle table (12.5M rows) into a CSV file.
- Transform: read a file, transform it, write the resulting data to a new file. The transformations applied are: set the address column to upper case and add an extra column at the end, which is the concatenation of two columns.
- Sort: read a file, sort a numeric and an alphanumeric column, write the resulting data to a new file.

The following table presents the final results of the tests. The throughput unit is thousand lines per second processed (K lines/second). Improvement is the % of improvement between the T5140 and the T4-1.

Test                     T4-1 (Time s)  T5140 (Time s)  Improvement  T4-1 (Throughput)  T5140 (Throughput)
Read/Filter/Write        125            806             645%         100                16
Read/Load Database       195            1111            570%         64                 11
Average                  96             557             580%         130                22
Division & Square Root   161            1054            655%         78                 12
Oracle DB Dump           164            945             576%         76                 13
Transform                159            1124            707%         79                 11
Sort                     251            1336            532%         50                 9

The improvement in single-thread performance is quite dramatic: depending on the tests, the T4 is between 5.4 and 7 times faster than the T2+. It seems clear that the SPARC T4 processor has gone a long way towards filling the gap in single-thread performance, without sacrificing multi-threaded capability, as it still shows very impressive scaling on heavy-duty multi-threaded jobs. Finally, as always at Oracle ISV Engineering, we are happy to help our ISV partners test their own applications on our platforms, so don't hesitate to contact us and let's see what the SPARC T4-based systems can do for your application!

"As described in this benchmark, Talend Enterprise Data Integration has overperformed on T4. I was generally happy to see that the T4 gave scaling opportunities for many scenarios like complex aggregations. Row-by-row insertion in an Oracle DB is fast, with more than 650,000 rows per second, without using any bulk Oracle capabilities!" -- Cedric Carbone, Talend CTO
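To make the first single-thread test concrete: outside of Talend, the Read → Filter → Write step is conceptually equivalent to a one-liner like the following (file names are illustrative; Cust_Status is the 8th semicolon-separated field of the structure shown above):

# Keep only the lines whose status column is "A"
awk -F';' '$8 == "A"' customers.csv > customers_active.csv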


sun

Mobile Tornado adopts Solaris features for better RAS and TCO

Mobile Tornado provides instant communication services for mobile devices, with a focus on enterprise workforce management. Its solutions include Push-To-Talk and Instant Locate, Alert & Message applications. As a software developer, Mobile Tornado's main challenges are up-time --the applications are largely sold today into the homeland security and defense markets-- and scalability --during network peak usage. With these challenges in mind, and as part of the on-going engineering collaboration between Mobile Tornado and Oracle's ISV Engineering, we investigated which Oracle Solaris technologies would improve the application's availability and scalability while reducing the solution's TCO. We looked at the following Oracle Solaris technologies: Solaris Cluster, ZFS and Zones.

Mobile Tornado was able to benefit from Oracle Solaris Cluster as follows. Solaris Cluster's automatic failure detection at every level --application, server, network-- reduces unplanned downtime and increases application uptime. It also enables application scalability during peak usage by offering a single IP address to manage the increased capacity of the application service; there is no disruption to the application when adding more systems to the cluster. Finally, Solaris Cluster can fail over the application to another server in case of maintenance, thus removing many sources of planned downtime.

Second, Mobile Tornado used the ZFS file system for storing the Oracle 11gR2 database files in order to secure the customer data. Indeed, ZFS provides end-to-end data integrity, versus metadata-only integrity for the traditional file system Mobile Tornado was previously using. ZFS verifies each data block against an independent checksum after the data has arrived in memory, all without performance degradation. Introduced in Solaris 10, ZFS is now the default file system in Oracle Solaris 11. Configuring Oracle Solaris ZFS for an Oracle Database is detailed in this whitepaper on OTN.

Finally, Mobile Tornado improved the TCO of its application by implementing the partitioning license model for the Oracle database on Solaris Zones, a.k.a. Containers. This license model helps to reduce the Oracle Database license cost. Mobile Tornado implemented the capped container feature --the ability to cap CPU usage in a Zone-- and paid only for the needed CPUs (a configuration sketch follows below). Under the previous architecture, Mobile Tornado was paying for all CPUs in the physical system although the Oracle DB might not make use of them all. Best practices for running Oracle Databases in Oracle Solaris Zones are given in this whitepaper. TCO was further reduced by running the Solaris Zones on powerful heavily-threaded SPARC T4 systems, allowing Mobile Tornado to consolidate its IPRS PTT server and the Oracle Database on the same machine.

"The usage of Solaris Zones in combination with advanced SPARC T4 CPUs allows Mobile Tornado to consolidate its IPRS PTT server and Oracle database in the same machines, reducing TCO with a higher level of performance." -- Shlomo Birman, R&D Manager, Mobile Tornado

In conclusion, using unique Solaris technologies, Mobile Tornado was able to improve the high availability of its application, provide better workload distribution for scaling the application, and reduce its application's TCO on the Oracle hardware and software stack. If you are interested in what Solaris features can do for your application, we're listening.
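A minimal sketch of the CPU capping described above, assuming a zone named dbzone and a cap of 8 CPUs (both hypothetical values):

# zonecfg -z dbzone
zonecfg:dbzone> add capped-cpu
zonecfg:dbzone:capped-cpu> set ncpus=8
zonecfg:dbzone:capped-cpu> end
zonecfg:dbzone> commit
zonecfg:dbzone> exit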


sun

Solaris SPARC live migration increases Oracle DB availability

One of the most significant business challenges for IT is to create and preserve value in a highly competitive environment, while keeping business applications available, improving hardware utilization and reducing costs. In particular, it is important to maximize application availability during planned or unplanned outages, without any compromise on resource allocation flexibility based on business needs, à la cloud computing. Orgad Kimchi and Roman Ivanov at Oracle ISV Engineering describe in a new whitepaper, "Increasing Application Availability by Using the Oracle VM Server for SPARC Live Migration Feature: An Oracle Database Example", how the Oracle VM Server for SPARC software (f.k.a. Sun Logical Domains or LDoms) can increase application availability, using the example of the Oracle database software. The benefits of Live Migration are:

- Shorter maintenance: one can minimize application downtime by using the domain Live Migration feature of Oracle VM Server for SPARC. If some equipment must shut down for hardware maintenance, this feature can keep applications running by moving them to another server.
- Optimized hardware utilization: one can improve application performance by using the domain Live Migration feature to move an active domain to a machine with more physical memory, more CPU capacity, or a better I/O subsystem.
- Higher application availability: one can maximize application availability because there is no need to shut down the application while the migration is in process.

The whitepaper shows the complete configuration process, including the creation and configuration of guest domains using the ldm command, the storage configuration and layout, and all software requirements used to demonstrate Live Migration between two SPARC T4 systems (a one-command example follows at the end of this post). Orgad and Roman tested the Oracle 11gR2 DB while migrating the guest domain from one SPARC T4 server to another without shutting down the Oracle database. The Swingbench OrderEntry workload was used to generate the load; OrderEntry is based on the OE schema that ships with Oracle 11g. This workload introduces heavy contention on a small number of tables and is designed to stress the following scenario: a 30 GB DB disk size with an 18 GB SGA, 50 concurrent users and 100 ms between actions taken by each user. Throughout the testing, the Oracle VM Server for SPARC domain Live Migration proved to scale linearly across the 64 CPUs available on the SPARC T4 processor, shortening overall migration time and delivering extremely short suspension times, as shown in the table below.

CPUs on the Control Domain   Overall Migration Time   Suspension Time   Guest Domain CPU Usage
8 CPUs                       8 min 12 sec             26 sec            70%
16 CPUs                      4 min 2 sec              13 sec            80%
24 CPUs                      2 min 3 sec              7 sec             85%

SPARC T4-1 Live Migration Results
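For context, a live migration with Oracle VM Server for SPARC boils down to a single command issued from the control domain; the domain and host names below are hypothetical:

# Migrate the running guest domain "oradb" to the target server "t4-b"
# ldm migrate oradb root@t4-b
# ldm list    (on the target, to confirm the domain is active there)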


sun

ZFS secures your application

One of our ISV partners, a leading vendor in the financial services space, commonly recommends as a deployment platform a commodity 2-socket Lintel server to minimize cost, but equipped with an internal RAID storage controller to increase application uptime on such entry-level servers by mirroring the root disk. We recently worked with its professional services team to explore whether we could improve on the solution, i.e. continue to bring the cost down while increasing uptime. The proposed solution was to use the Oracle Solaris 10 operating system and its ZFS file system in lieu of the hardware RAID to mirror the root hard drive. ZFS is a new kind of file system that provides simple administration, transactional semantics and immense scalability. ZFS natively supports all common RAID functionality and also embeds advanced features for compression, encryption and snapshots, typically expected from proprietary high-end storage systems. The benefits of the ZFS-based solution are:

- Higher uptime: the end-customer can replace damaged hard drives without shutting down the system, since ZFS management is at the OS level while the RAID management software is at the BIOS level.
- Simpler management: ZFS is a built-in feature of Solaris, so neither our ISV partner nor the end-customer needs to learn a cumbersome software management interface like RAID software.
- Faster deployment: ZFS mirroring can be set up during the Solaris OS installation process; no additional software setup is needed, so the solution can be deployed more rapidly.
- Better data integrity: the end-customer benefits from the advanced data and metadata integrity features of ZFS --other file systems typically provide only metadata integrity. They can also use the ZFS snapshot and clone features at no extra cost.
- Lower hardware cost: the ZFS-based software solution avoids spending extra money on a hardware RAID controller.

In conclusion, by using ZFS versus traditional file systems like ext3 or UFS, our partner was able to deploy its application on a Sun Fire X2270-class system instead of an X4170-class one --a 20% difference in their base prices-- and to reduce unplanned downtime and maintenance time --from hours to minutes. This solution is in production today at several EMEA banks.
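A minimal sketch of the root mirroring itself, assuming hypothetical device names on an x86 server; the installgrub step puts the boot loader on the new mirror half so the system can boot from either disk:

# Attach a second disk to the root pool to create a mirror
# zpool attach rpool c0t0d0s0 c0t1d0s0
# Install GRUB on the newly attached disk
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0
# zpool status rpool    (wait for resilvering to complete)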


sun

Infovista VistaInsight for Networks shows 3.7x performance on Oracle

System management vendor InfoVista markets the VistaInsight for Networks® application to enable telco operators, service providers and large enterprises to effectively meet performance and service level agreements of converged and next-generation communication networks. As part of our on-going technology partnership, InfoVista and Oracle ISV Engineering together ran a performance test campaign of VistaInsight for Networks® on Oracle Solaris and Sun CMT hardware.

The two companies shared many common objectives when starting this project. The most obvious was to improve the scalability and performance of VistaInsight for Networks® on Oracle's SPARC T-Series systems and thereby provide customers with a better price/performance ratio and a better ROI. From the outset, virtualization was considered a promising technology to improve scalability, so testing VistaInsight for Networks® in the context of Oracle Solaris Zones was also a major milestone. Second, InfoVista was interested in setting new limits in terms of the workload that its application can sustain, in response to the evolving needs of its customers. Lastly, as the first improvements on computing scalability were delivered, it became obvious that storage was the next critical component for the performance of the entire solution. A decision was then made to test the Oracle Solaris ZFS file system, the Sun ZFS Storage Appliance, and the SSD technology from Oracle to move to the next level of performance.

The result of this performance test campaign is a new Reference Architecture whitepaper that provides detailed information about the configuration tested, the tests executed and the results obtained. It clearly shows that VistaInsight for Networks® takes full advantage of the server, storage and virtualization technology provided by Oracle. By leveraging Oracle Solaris Zones, Oracle Solaris ZFS, SSDs and Sun ZFS Appliance storage, InfoVista increased throughput performance by more than 370%, meeting the highest expectations in terms of workload and performance while keeping the cost in a very attractive range. Learn all the details in this new Reference Architecture published on OTN.


sun

Latency Matters

A lot of interest in low latencies has been expressed within the financial services segment, most especially in stock trading applications where every millisecond directly influences the profitability of the trader. These days, much of the trading is executed by software applications that are trained to respond to each other almost instantaneously. In fact, you could say that we are in an arms race where traders are using any and all options to cut down on the delay in executing transactions, even by moving physically closer to the trading venue. The Solaris OS network stack has traditionally been engineered for high throughput, at the expense of higher latencies. Knowledge of the tuning parameters to redress the imbalance is critical for applications that are latency sensitive. In this blog we present how to further configure a default Oracle Solaris 10 installation to reduce network latency.

There are in fact many parameters that can be altered, but the most effective ones are intr_blank_time and intr_blank_packets. These parameters affect on-board network throughput and latency on Solaris systems. If interrupt blanking is disabled, packets are processed by the driver as soon as they arrive, resulting in higher network throughput and lower latency, but with higher CPU utilization. With interrupt blanking disabled, processor utilization can be as high as 80-90% in some high-load web server environments. If interrupt blanking is enabled, packets are processed when the interrupt is issued. Enabling interrupt blanking can result in reduced processor utilization and network throughput, but higher network latency. Both parameters should be set at the same time. You can set these parameters by using the ndd command as follows:

# ndd -set /dev/eri intr_blank_time 0
# ndd -set /dev/eri intr_blank_packets 0

You can also add them to the /etc/system file as follows:

set eri:intr_blank_time 0
set eri:intr_blank_packets 0

The value of the interrupt blanking parameter is a trade-off between network throughput and processor utilization. If higher processor utilization is acceptable for achieving higher network throughput, then disable interrupt blanking. If lower processor utilization is preferred and higher network latency is the penalty, then enable interrupt blanking. Our experience at ISV Engineering is that under controlled experiments the above settings result in a reduction of network latency by at least 50%; on a two-socket 3 GHz Sun Fire X4170 M2 running Solaris 10 Update 9, the above settings improved ping-pong latency from 60µs to 25-30µs with the on-board NIC.
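Note that /dev/eri above refers to one specific NIC driver; other drivers expose different device names and, in some cases, different parameter names, so check your driver's documentation. For the eri driver, you can read the current values back before and after tuning:

# ndd -get /dev/eri intr_blank_time
# ndd -get /dev/eri intr_blank_packets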


sun

Talend's new data processing engine on Sun Blade X6270

This is an article posted back in April 2009 on my previous blog, which no longer exists. Upon request, here is a re-post of that article. Enjoy!

Having the chance to test the brand new Sun Blade X6270 server based on the Intel Xeon 5500 series processors, I asked one of our ISV partners, Talend, an open source ETL (Extract, Transform & Load) solution provider, if they were willing to do some benchmarking with me. The timing was perfect since Talend had just rewritten parts of their ETL engine, to be included in the upcoming version, in order to make better use of modern CPU multi-threading capabilities. During development they had benchmarked their application on a two-socket Xeon 5320, and were very interested in seeing how the new Intel Xeon 5500 would perform.

Test descriptions. We used DBGEN v2.8.0, a database population program that generates files to be loaded into database tables. In our case we generated moderately to very large files and processed them directly as simple flat files (no database system involved). We only used the file called "lineitem.tbl", which represents a list of order item lines. For each benchmark run we performed three tests, each applying a different type of processing on the file:

- Sort: sort the entire file by date, on the 11th column (L_SHIPDATE).
- Count: count the number of order lines by shipment mode (L_SHIPMODE) and by year of the shipment date (L_SHIPDATE).
- Average: average discount (L_DISCOUNT) for each item (L_PARTKEY).

DBGEN uses a scaling factor representing the total size of all the tables generated. For this test we only use the file named "lineitem.tbl". The table below gives the size and number of lines of the "lineitem.tbl" file for each scaling factor. As you can see, we start quite small, by processing a file with 6 million lines (only!), and go all the way to processing 3.3 billion lines in a single file.

Scale   Number of entries   Size
1       6 million           740 MB
10      60 million          7.4 GB
100     600 million         74 GB
300     1.8 billion         225 GB
550     3.3 billion         415 GB

Hardware configurations. The following describes the hardware configurations used for the tests (referred to as X6270), and the vanilla Xeon-based box used by Talend (referred to as Bi-Xeon).

X6270:
- CPU: 2 x Intel Xeon 5520 quad-core with HyperThreading and Turbo mode on (2.26 GHz)
- RAM: 24 GB DDR3
- Internal storage: 1 x 136 GB disk, 15K rpm
- External storage: Sun StorageTek 2540 connected by Fibre Channel: 12 x 136 GB SAS disks, 15K rpm, organized as 3 RAID 0 (striping) volumes of 4 disks, 544 GB each, with a ZFS pool on each volume (a command sketch appears at the end of this article)
- Operating system: Solaris 10 update 6 (a.k.a. 10/08)

Bi-Xeon:
- CPU: 2 x Intel Xeon 5320 quad-core (1.86 GHz)
- RAM: 4 GB DDR2
- Internal storage: 3 x 250 GB and 2 x 320 GB Seagate disks, 7200 rpm (all on ext3): 1 x 250 GB for system and temporary files, 1 x 320 GB for input files, 1 x 320 GB for output files
- External storage: none
- Operating system: Debian GNU/Linux Etch with Linux 2.6.18 (i686)

With respect to the CPU, the X6270 configuration is obviously much more powerful, especially given the amount of RAM and the external storage. However, the tests proved to be more CPU and I/O bound than memory bound. Even if the amount of memory obviously does make a difference, the tests give us some indications about the extra performance brought by the Xeon 5500. In order to get closer to the Bi-Xeon configuration, we also ran two sets of tests on the X6270: with the external storage (referred to as X6270-Ext) and without it (referred to as X6270-Int).

In the second case, we were even in a less favorable position than the Bi-Xeon, which uses 3 disks versus a single disk for the X6270.

Results. A few things are worth noting about the results on the three configurations:

- When processing a file, at least three times the disk space is needed to proceed. For this reason, we could only process a 7.4 GB file on the X6270-Int (a single internal 136 GB disk in the server).
- Given the much higher processing time needed on the Bi-Xeon, we didn't even try going further than 74 GB.
- We pushed the X6270-Ext up to processing a 415 GB file, and could reasonably have gone all the way to 1 TB had we not been limited by disk space.

Conclusions. On the CPU-bound tests (the Average test) we can clearly see a 32% to 60% performance boost on the new Intel Xeon 5500 compared to the older generation, depending on the size of the file. Of course the processor matters, and we saw that it has a great impact on the more CPU-bound processing. But what we can also see, and that's not new, is that data-hungry processors need to be fed with data, good and fast. In that respect, the speed of the I/O subsystem is very important. Obviously, working with files over 400 GB puts a lot of pressure on the I/O, and plugging in a professional external storage device makes a huge difference (in our case anyway). As you can see in the SORT test (scale 10), we get a 290% boost with the Intel Xeon 5500. Once we use the external storage, that performance skyrockets to 1075% (more than 10x the performance)! We could of course go on a long time analyzing all the figures, with different file sizes, but without pushing the analysis very far, it's plain to see the performance gain we get with this new processor alone, not to mention if we also take care of the I/O subsystem. The Intel Xeon 5500-based Sun servers, such as the Sun Blade X6270 we just tested, enhanced with an external storage device such as the Sun StorageTek 2540, seem to be a killer combination for large data processing.
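For reference, creating the ZFS pools on the 2540 RAID 0 volumes is a one-line operation per volume; the device and pool names below are hypothetical:

# One pool per array volume, e.g. for input and output data
# zpool create etl_in c2t0d0
# zpool create etl_out c3t0d0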


sun

Investigating Memory Leaks with Dtrace

Authors: Pascal Danek, Reuters Financial Software, France - Claude Teissedre, Oracle France

Introduction. This article shows a real case of using the DTrace framework to detect undeleted objects in a C++ application running on Solaris 10. In this context, undeleted objects refer to temporary business objects that are explicitly created with the new() operator but never destroyed. This behavior, comparable in its effects to a so-called memory leak [1], may lead to a significant unwanted increase in memory usage and cause paging activity on the system, or even cause new object-creation failures in applications that create objects iteratively. Since the non-deletion of these business objects is not the result of bad pointers but rather of incorrect cache management in the application, specialized memory-leak tracking tools that look for inconsistencies between allocated memory chunks and pointers do not detect this type of undeleted object. For instance, the Oracle Solaris Discovery tool [2] and the Oracle Solaris libumem audit facility [3], as well as Rational Purify or gdb, are ineffective in this situation [4]. A new tool based on DTrace and perl scripts was developed to address this specific need; it is usable with all programs that have iterative object creation and deletion patterns similar to our case described below. The tool requires no binary change and is easy to use. It has demonstrated its efficiency at a customer site on a pre-production system, finding the leak in a couple of minutes where traditional methods had failed after days of investigation.

The principle. Rather than building a fully automated tool that would be highly dependent on the application and complex to write, we chose a simpler implementation that allowed us to get results quickly: generic scripts (independent of the application), with part of the program logic passed as arguments, and a final manual analysis of the selected user stacks. The principle of our method is to record the object creations and deletions of the program into a file [5] and then post-process this file to detect the undeleted objects, based on application-specific data.

The program itself implements the following iterative process:
- The program is launched.
- An action (A1) (an import or whatever process) is started. This initial action allocates temporary memory for itself and permanent memory for objects that will exist all along the process life (the cache initialization, for example). As those latter objects could appear as false positives, A1 is discarded from the scope of the analysis.
- A second action (A2) is started. It is identical to A1, except that it allocates memory for itself only and frees the temporary objects created in A1.
- A third action (A3), identical to A2, is started and, similarly to A2, it allocates memory for itself and frees the temporary objects of the previous step (A2).

Since the A2 and A3 actions are identical and use the same iterative object creation-deletion mechanism, objects created in A2 but not freed after A3 are the potential memory leaks we are looking for. The letters 'b' and 'e' (begin and end) denote the boundaries of the id range in which to search for the memory leaks.

- The recording step is based on a dtrace script (watch-memory-usage.d) and contains no business logic; it merely traces the new() and delete() operators, and records the user stack, a timestamp and tags (iterator ids) at the time the probes fire, as described in the next section.
- The detection step is a postmortem process based on a perl script (findleaks.pl), also independent of the application, which analyzes the output file of the dtrace script and looks for objects (allocated in A2 and not freed after A3) in a given search area.

The full sequence of commands is:

% a.out &                                    // start the program
% sudo watch-memory-usage.d pid > leaks.txt
   Launch action 1, then 2, then 3. Wait for a while between actions and note the launch time of each.
   Stop the dtrace with CTRL-C after action 3.
% cat leaks.txt | c++filt > leaks-dem.txt    // demangle the output file
   Locate in the file the time range between actions 1 and 2 and retrieve the appropriate sequence ids (tags).
% findleaks.pl -f leaks-dem.txt -b begin_id -e end_id

As noted before, the business logic is introduced manually into the perl script through the arguments -b and -e, which delimit the search area. The actual implementation differs slightly from this sequence, whose main interest is to detail the necessary steps of the process.

The dtrace script. The DTrace framework [6] provides a set of kernel modules called providers, each of which dynamically performs a particular kind of instrumentation of the kernel or the application. The pid provider, which allows tracing function entry and return in user programs, is the most appropriate provider to trace the new() and delete() operators [7]. Since DTrace instruments the executable program, in which the C++ function names are mangled, those mangled names must be used in the probe specifications, that is: __1c2n6FI_pv_ for new() and __1c2k6Fpv_v_ for delete(). The mangled names can be obtained from the executable program by:

# dem `nm a.out | awk -F\| '{ print $NF; }'` | egrep "new|delete"
__1c2k6Fpv_v_ == void operator delete(void*)
__1c2n6FI_pv_ == void* operator new(unsigned)

The arguments to entry probes are the values of the arguments to the traced function. The arguments to return probes are the offset in the function of the return instruction (arg0) and the return value (arg1). The full script watch-memory-usage.d is:

#!/usr/sbin/dtrace -s
#pragma D option quiet

/*
   __1c2n6FI_pv_ == void* operator new(unsigned)
   __1c2k6Fpv_v_ == void operator delete(void*)
*/

pid$1::__1c2n6FI_pv_:entry
{
   self->size = arg0;
}

pid$1::__1c2n6FI_pv_:return
/self->size/
{
   addresses[arg1] = 1;
   /* print details of the allocation */
   /* seq_id;datetime;tid;event;address;size */
   printf("<__%i;%Y;%d;new;0x%x;%d;\n",
      i++, walltimestamp, tid, arg1, self->size);
   ustack(50);
   printf("__>\n\n");
   @mem[arg1] = sum(1);
   self->size = 0;
}

pid$1::__1c2k6Fpv_v_:entry
/addresses[arg0]/
{
   @mem[arg0] = sum(-1);
   /* print details of the deallocation */
   /* seq_id;datetime;tid;event;address */
   printf("<__%i;%Y;%d;delete;0x%x__>\n",
      i++, walltimestamp, tid, arg0);
}

END
{
   printf("== REPORT ==\n\n");
   printa("0x%x => %@u\n", @mem);
}

Whenever an object is created, the script records its size (arg0 on entry) and address (arg1 on return), the user stack (ustack()), a timestamp and an iterator id. When the object is deleted, the script records its address (arg0 on entry) and the other parameters. Finally, the aggregating array [8] @mem[object's address], incremented by 1 when an object is created and decremented by 1 when it is deleted, is printed when the script ends; an entry left at 1 flags an undeleted object. This array will be used to find the undeleted objects in the postmortem analysis. Finally, the output file must be demangled for the post-processing phase, as shown above.
The perl script. The leaks-dem.txt file records demangled raw data from the dtrace script. It contains all the information necessary to sort out the memory leaks, but no program logic information, such as the timestamps corresponding to the beginning of the 3 actions executed. Parsing the file (this is the main function of the perl script [9]) and using the hand-noted times when the actions were started allows retrieving the appropriate sequence ids of these actions (the id is the first field of each new record in leaks-dem.txt). In our case, ids 2968 and 3511 correspond to the boundaries of action A2. The searched objects satisfy the following conditions:

   @mem[object_address] = 1
   2968 ≤ object_id ≤ 3511

The perl script command line is:

% findleaks.pl -f leaks-dem.txt -b 2968 -e 3511

and it outputs a list of aggregated stacks sorted by memory consumption, with the number of occurrences, corresponding to the potential leaks. That is:

Suspicious addresses are:           // @mem[object_address] = 1
- 0xf0ed28
- 0xf0edc8
- 0xf0efa8
- 0xf0f048
- 0x20dc968
- 0x20ea9c8

Addresses in specified range are:   // 2968 ≤ object_id ≤ 3511
- 0xf0ed28
- 0xf0f048
- 0xf0efa8
- 0xf0edc8

Found 2 stacks

Saw 2 times:
It consumed 256 bytes
Addresses with this stack: 0xf0ed28, 0xf0f048
Stack:
libCrun.so.1`void*operator new(unsigned)+0x68
libinfracpptools27c.so`void std::deque<ITH_Notifier::Client*>::__allocate_at_end()+0x84
libinfracpptools27c.so`bool ITH_Notifier::ClientMgr::dispatch(ITH_NotifyQueue*,ITH_Notifier&,unsigned long)+0x300
libinfracpptools27c.so`bool ITH_NTFPipe::dispatch()+0x34
libKNETAdapter.so`void*listeningThread(void*)+0x148
libinfracpptools27c.so`ITH_TreadFuncWrapper+0x8
libc.so.1`_lwp_start

Saw 2 times:
It consumed 256 bytes
Addresses with this stack: 0xf0efa8, 0xf0edc8
Stack:
libCrun.so.1`void*operator new(unsigned)+0x68
libinfracpptools27c.so`void std::deque<ITH_Notifier::Client*>::__allocate_at_end()+0x24
libinfracpptools27c.so`bool ITH_Notifier::ClientMgr::dispatch(ITH_NotifyQueue*,ITH_Notifier&,unsigned long)+0x300
libinfracpptools27c.so`bool ITH_NTFPipe::dispatch()+0x34
libKNETAdapter.so`void*listeningThread(void*)+0x148
libinfracpptools27c.so`ITH_TreadFuncWrapper+0x8
libc.so.1`_lwp_start

Footnotes

[1] A memory leak occurs when a computer program consumes memory but is unable to release it back to the operating system or to the application. However, many people refer to any unwanted increase in memory usage, caused for instance by incorrect cache management, as a memory leak, though this is not strictly accurate.
[2] The Oracle Discovery tool is a new tool for memory checking, available in Solaris Studio 12 update 2.
[3] Libumem is a library, first introduced in Solaris 9 Update 3, used to detect memory management bugs in applications. See http://blogs.sun.com/dlutz/entry/memory_leak_detection_with_libumem
[4] Actually, Discovery reports memory blocks allocated on the heap but not released at program exit.
[5] Recording into a file allows us to overcome the memory size limit of the script.
[6] DTrace is a comprehensive dynamic tracing facility built into Solaris which enables administrators and developers to examine the behavior of user programs as well as the behavior of the operating system. Documentation: Solaris Dynamic Tracing Guide, Part No: 817-6223-12, September 2008, http://download.oracle.com/docs/cd/E19253-01/817-6223/817-6223.pdf.
Community: http://hub.opensolaris.org/bin/view/Community+Group+dtrace/WebHome
[7] The method was initially developed in 2005 in the paper "Using DTrace to Profile and Debug A C++ Program" by Jay Danielsen, published at http://developers.sun.com/solaris/articles/dtrace_cc.html
[8] Implementation notes: a) Although it is indexed by an integer value, addresses[] is an associative array. b) Using an aggregation is the only way to be able to print the full array at once.
[9] The perl script's main function is to parse the output of the dtrace script. It is not included in this paper since it has no educational content related to our topic. It will be provided on demand.


sun

Leveraging a disaster recovery site for development - Part 2

In our previous post, we introduced the idea of using Disaster Recovery (DR) sites as a private cloud for hosting virtual development and testing environments. This post describes the solution we developed for an ISV partner of ours in the Healthcare sector.

The solution is based on the Zones and ZFS features of Oracle Solaris --available from Solaris 10 and up. Solaris Zones (a.k.a. Containers) are an operating system-level virtualization technology that provides a complete, light-weight and isolated run-time environment for applications. ZFS is a new kind of file system that provides simple administration, transactional semantics and immense scalability; it also embeds advanced features for compression, encryption and snapshots, typically expected from proprietary high-end storage systems. A ZFS snapshot is a consistent point-in-time image of a file system. A clone is a writable copy of a snapshot. Solaris creates ZFS clones quickly, using no additional disk space to start with, because data is not duplicated on disk unless/until the cloned image changes that data. Combined with Solaris Zones, ZFS clones allow provisioning a virtual image (VM) of Solaris almost instantly. This is what is at work here (a command sketch appears at the end of this post).

So, how does the overall solution work exactly?

- Data is replicated between the production system and the DR site. The replicated data is stored on the high-performance fibre-channel drives, but is also duplicated, once, on low-cost high-capacity SATA drives managed by the same array head.
- A Solaris Zone --we call it the "golden image"-- is created with the software stack needed by the developers fully pre-installed and configured: web server, application server, database instance, compilers, version control system, you name it. This one-time installation and configuration work will be used as a template for all development environments to come, thanks to the cloning mechanism.
- For each new development VM, the golden image is simply cloned with a copy of the production data. When logging into the VM, the developer views it as a fully-independent Solaris server, with its unique hostname and IP address, and its own database server running and fed with fresh production data that can be read/updated/deleted with no restriction --remember, it is only a clone.
- A major point is that, once a template is set --many different templates can be created--, cloning and booting a new virtual server is a matter of seconds. As for the storage footprint, it starts out almost null, since a clone will only store the changes that occur during the lifetime of the server.
- Finally, when an environment is no longer needed, deleting it completely without leaving any trace is again a matter of seconds.

For a developer, this is a huge benefit: you can try something completely crazy without worrying about ruining your environment and data. Create a server, test it, break it, throw it away and create a fresh new server, all day, all you want!

To summarize, the combination of the Solaris Zones and ZFS technologies fully addresses the requirements of our specs, and more:

- Solaris Zones assure total isolation between the development environments and the replicated production data, and between each development environment. To play it safer, the addition of low-cost SATA drives to store the test data assures hardware isolation from the replicated production data.
- The creation/deletion of a development environment is extremely fast, can be scripted, and can be based on different templates with different software stacks and/or configurations.
- Once a Solaris zone is created and configured, it can be used as a template and cloned to create new zones. Different templates can coexist.
- Fresh data is constantly available to each development environment, with minimum storage footprint. With ZFS clones, we work around the (impossible) need to duplicate large production databases, which is long and storage-space consuming.
- A simple script can shut down all zones (other than the global zone) in a handful of seconds, giving back all available resources (CPU, memory, network) to the DR system. This gives a quick and easy way to stop all development environments if a disaster occurs on the primary production site, so the DR site can take over.
- Solaris Zones can be detached, exported and attached to another hardware server as long as you stay on the same hardware architecture (SPARC or x64) and OS level. This gives the ability to save/export a development environment to another server.

Two last points are worth mentioning:

- In this specific case, all environments were set to use the same Oracle 11gR2 database binaries, installed in the global zone and shared by all local zones. However, the isolation offered by a zone allows the installation of various versions of the same software in different zones. We can therefore imagine many other use cases; e.g. an ISV can keep a copy of each of its customers' environments in a zone for support reasons, or perform Q&A on new versions of third-party software.
- Functional tests may pass successfully without giving indications of how a new or updated functionality will perform once in production. One last benefit of using the DR site, which often has the same horsepower as the production site, is to run performance tests of a new or updated algorithm or SQL query. However, if the given test is likely to change the data, a snapshot should be taken beforehand, to be able to roll back to the initial state; and obviously the replication should be stopped before and restarted after the test.

Now, what could Solaris Zones and ZFS do for you? If you're a Solaris ISV, let's talk!
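A minimal sketch of the cloning mechanism described above, assuming a halted template zone named "golden" and a new zone named "dev42" (both names hypothetical); when the zonepath sits on ZFS, zoneadm clone uses a ZFS snapshot and clone under the hood:

# Export the golden configuration and adapt it (zonepath, IP address, hostname)
[DR]# zonecfg -z golden export -f /tmp/dev42.cfg
[DR]# vi /tmp/dev42.cfg
# Create the new zone from the adapted configuration, clone the golden image, boot
[DR]# zonecfg -z dev42 -f /tmp/dev42.cfg
[DR]# zoneadm -z dev42 clone golden
[DR]# zoneadm -z dev42 boot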


Leveraging a disaster recovery site for development - Part 1

As part of the software development lifecycle, the application testing environments are often overlooked and mostly left to the appreciation of each developer, who routinely ends up using his own PC or laptop for that. The main advantage here is that it gives developers full control over the testing environment, without interfering with other developers or, worse, with the production system. There are however some serious drawbacks to this method:
- Installing and configuring the various layers of software is time-consuming and unproductive. Not to mention that developers (rightfully) test a lot by messing around with their environment or data, so the burden of installing and configuring is a repeating one.
- Developers are unable to leverage and test the scalability of their code on the hundreds of threads that modern production servers offer.
- Copying hundreds of gigabytes from the production database to every laptop is not an option, leaving developers to test their code on small, often outdated, data sets.

With the advent of virtualization, using ready-to-boot virtual images of a fully pre-installed and configured application testing environment on an internal cloud has become a very attractive option, removing the drawbacks listed above while still offering full control over the environment to each developer. Few companies can however afford to run a dedicated grid of servers to host these virtual environments. On the other hand, many companies do have a lot of dormant processing and storage power: the Disaster Recovery (DR) site, whose purpose is to ensure operational continuity in case of a disaster on the primary production site, and which is often equipped with servers as capable as those of the primary site. Could one think of leveraging the DR site as a testing environment? There would be 3 benefits to it:
- It offers a testing cloud at no cost: no extra hardware to purchase or deploy.
- It gives access to the latest production data for testing, thus capturing the latest trends in using the application.
- It makes the DR site itself more secure.

On that last point, it is indeed not uncommon to find out too late (i.e. when you fail over) that a DR site --same for backup tapes, stand-by replacement systems, etc., by the way-- is not as operational as one needs it to be; think of faulty memory, e.g. Contrary to the common idea that recovery and backup systems must be treated as sanctuaries and left idle, the best way to be sure that these systems are fully operational is to have them working, and worked on, on a regular basis. Of course, whatever you do on the DR site should not interfere with its primary objective: it must continuously run the backup for the production site and, at any point in time, it must be ready to take over operations. This vital requirement calls for a strong separation between the stand-by production and development/test environments at the DR site. In a following post, we will describe in detail how we used the Oracle Solaris Zones and ZFS technologies to set up such an architecture, at no extra cost --these features come bundled with Solaris 10. An architecture that we designed for one of our partners, a major ISV in the Healthcare sector, eager to set up a quick and flexible framework to create pre-configured development/test environments for developers.


Traffix scales on Solaris Sparc

Traffix Systems is leading the control plane market, with a range of next-generation network Diameter products and solutions --Diameter is an authentication, authorization and accounting protocol for telco networks, and a successor to RADIUS. The amount of Diameter signaling in LTE and 4G networks is unlike anything telecom operators have seen or been confronted with in the past. It is estimated that there will be up to 25x more signaling per subscriber compared to legacy and IN networks. The main reasons for this growth in network signaling are:
- The network is increasingly fragmented, due to the distributed nature of the network architecture and the incorporation of a growing number of new functionalities.
- The explosion in mobile data adoption and usage requires an ever greater number of network nodes, all of which are Diameter nodes.
- The newly-introduced advanced multimedia services, advanced charging schemes and policy controls are signaling-hungry services, and create large amounts of signaling flows.

As a result, network operators moving to LTE are finding it progressively more difficult to manage their core network architecture and Diameter signaling, as it becomes increasingly complex to maintain, manage and scale. With these challenges in mind, and as part of the ongoing engineering collaboration between Traffix Systems and Oracle's ISV Engineering, we investigated which Oracle technologies could help decrease and manage this complexity. The first thing we looked at was the SPARC Enterprise T-Series systems. The T-class (a.k.a. SPARC CMT) processor is a low-powered, many-core, many-thread processor --the SPARC T3 processor is e.g. the industry's first 16-core processor, with 128 threads total. These processors are ideally suited for highly concurrent applications and network-centric workloads, where they deliver order-of-magnitude gains in SWaP --see this example at Comverse. Additionally, the T3 features dual multi-threaded on-chip 10GbE ports, 16 cryptographic accelerator units, and built-in virtualization technology to run up to 128 logical domains on one processor. We benchmarked the Traffix Diameter Router and Load Balancer on a SPARC T5220 under a session-based online-charging scenario. In the testing scenario, each client sent the following DCCA application commands: Initiate; 10 Updates; Terminate. The typical request size was 1,000 bytes and the response size was 200 bytes. Throughout the testing, the Traffix software scaled linearly across the 64 threads available on the 8-core UltraSPARC T2+ processor, processing Diameter transactions in parallel and delivering extremely high overall throughput, as shown in the graph below. In addition to the T-Series systems, we also investigated the use of the Oracle Solaris Service Management Facility (SMF), Oracle Solaris 10 Zones with an exclusive IP stack, and Oracle Solaris Cluster. The full details of this joint work and a recommended deployment architecture can be found in the whitepaper "How Traffix Systems Optimized Its LTE Diameter Load Balancing and Routing Solutions Using Oracle Hardware and Software".


Talend Integration Suite optimized on Solaris

Continuing in the spirit of the Tunathon program --an innovative engineering program to study and tune application performance on Solaris, run at Sun Microsystems in the early 2000's--, we at ISV Engineering are still running "Tunathon" projects with our partners today, i.e. tuning their applications on Solaris --we have about 5 in flight right now. Tunathon efforts are in fact more and more relevant as computers become more complex, scalable and heterogeneous --think e.g. of a 4-socket quad-core dual-thread system with extra GPU engines. Developers have the impossible job of releasing new business logic in their code, faster and faster, while staying fully optimized and scalable on systems that a developer never gets his hands on to test scalability with in the first place. And the programming frameworks, good for developer productivity and code quality, come as additional layers that can make debugging and optimization a real nightmare. Recently, Talend, a fast-growing ISV positioned by Gartner in the "Visionaries" quadrant of the "Magic Quadrant for Data Integration Tools", contacted us to report a serious performance issue at one of their customers, a large bank, using the Talend Integration Suite application on a large 32-way quad-core SPARC M-Series server. Although fully multi-threaded, the software simply did not scale on such a large system. We got on it right away, set up a 128-thread Sun T5140 system in our Lab to reproduce the problem, and took a closer look at the Java code. 99% of the time was spent in a hot routine that used hardly any CPU time, memory or IO bandwidth. It turned out that all threads were fighting for the same resource, a Java synchronized map, generating locks on the entire map and thus creating huge contention. This contention was removed thru the use of a thread-safe hash map offered since Java 5.0 SE, called ConcurrentHashMap. This map allows multiple threads to do updates concurrently, as long as they do not touch the same segment of the map; a given thread will only lock the segment it needs to change, and not the entire map. The degree of parallelism, 16 by default, can be set in the constructor, e.g. 256 in the example below:

Before:

    private static Map syncMap = Collections.synchronizedMap(new HashMap());

After:

    private static ConcurrentHashMap<Integer, Integer> syncMap =
        new ConcurrentHashMap<Integer, Integer>(16, (float) 0.75, 256);

The root cause of this concurrency issue was not easily detectable by the developers, because it happened thru the use of Hibernate, a popular framework that facilitates the storage and retrieval of Java objects via Object-Relational mapping mechanisms. While the developers believed they were using the lazy loading mode --a design pattern commonly used to defer the initialization of an object until the point at which it is needed--, we were surprised to see that each time a user object was used, all the user data and the data of its dependent objects were loaded from the database, put into synchronized Java maps and accessed by numerous threads. To make matters worse, this data loading process was triggered every minute for each connected user, just to count the number of active users logged in the system. We were able to radically change the way active users were counted, by means of a simple select count(*) JDBC call to the database. The figure below shows the improvements in time to log in and to ping the server, brought by the Tunathon changes --note the logarithmic scale; we were really hitting a scalability wall in the original code!
These changes completely removed the performance bottleneck and are included in the latest release, 4.0.2, of Talend Integration Suite. As an ISV supporting Solaris, you do not need to wait for your customers to hit the scalability limits of your application on their own large enterprise servers. Oracle has an affordable line of SPARC T-Series systems, including the world's first 16-core processor, that you can use to stress your application in-house --a T5140 system, resp. a T3-1 system, packs 128 threads in only 1U, resp. 2U, of a standard rack. At ISV Engineering, we welcome opportunities to work with you on such Tunathon projects.


Why Solaris Zones?

Our recent validation exercise of SAP NetWeaver Master Data Management on Oracle Solaris Containers reminded us how great a virtualization technology Solaris Zones are. Why am I saying this? Read on. Starting with Solaris 10, Solaris Zones (a.k.a. Containers) are an operating system level virtualization technology that provides a complete, light-weight and secure run-time environment for applications. Compared to other virtualization solutions, Zones do not use a hypervisor --which is in fact another layer of operating system that translates, and gets in the way of, your system calls to the hardware--; rather, Zones are an integral part of a running Solaris instance. Still, Zones meet the same business objective of server consolidation thru Virtual Machines (VM), isolated from each other and designed to provide fine-grained control over the hardware resources. The following benefits are often put forward about Solaris Zones:
- Performance: Zones are very light-weight, as they do not each run their own Solaris kernel nor involve a hypervisor. Every CPU cycle is spent in useful work, towards running the applications.
- Platform Choice: Zones are not limited to Intel-based servers. They are supported on all hardware platforms that are supported by Solaris 10 (and up), e.g. SPARC M-Series, SPARC T-Series, Oracle Engineered Systems (unless specified otherwise). They even run on top of / inside Solaris Domains.
- Price: as an integral part of Solaris, no additional software needs to be purchased or installed.

But this is not why Zones are great. What is it then? Ease of Use and Manageability. Here are a couple of scenarios from our daily lives working with ISVs. At development time: one can create (and keep over time) many VMs based on the exact same Solaris installation and patch level; this is in fact the default behavior of Zones. When working collaboratively, making sure that everyone is testing --reproducing a bug, e.g.-- on identical environments can be a major source of headaches. If everyone accesses his own Zone on the same Solaris box, all developers can be sure that they are working on the exact same environment and that a patch is applied for everyone at the same time, when it is needed, while remaining isolated from each other and keeping the ability to reboot their own VM. Many of the ISVs we work with have adopted the Zones technology to give all developers a Unix environment (out of a single Solaris box) to compile and test the code they increasingly develop on Wintel laptops. At deployment time: again in its default behavior, dynamic resource allocation is applicable to all Zones. One does not need to specify a dedicated amount of CPU or memory in order to define or run a Zone-based VM. This way, peaks of resource requirements due to changes in the application workload can be satisfied on the fly by Oracle Solaris itself, without a sysadmin intervention; Solaris simply allocates all the CPU and memory needed and available, automatically --just as in a traditional timeshare environment. Of course, it is possible to configure Zones to cap hardware resources, to protect other Zones and maintain whatever SLA. Because of dynamic resource allocation, any developer, like SAP here, can give straightforward deployment recommendations for its users to consolidate multiple parts --or multiple instances, in a horizontal scalability scheme-- of the application, without entering into the sysadmin dimension of things. Still not using Zones yourself today? Give it a try.
The whitepaper mentioned in the introduction recaps, on pages 13-14, the few command lines needed to create a Zone. For the full documentation, check out the Solaris Containers Administration Guide.
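As a teaser, here is a minimal sketch of those command lines. The zone name, path and CPU cap are made up for this example; the Administration Guide remains the authoritative reference:

    # Configure a new zone (name and zonepath are hypothetical).
    zonecfg -z appzone 'create; set zonepath=/zones/appzone'
    # Optionally cap its CPU usage, to protect the other zones' SLAs.
    zonecfg -z appzone 'add capped-cpu; set ncpus=2; end'
    # Install and boot it, then attach to its console for first-boot setup.
    zoneadm -z appzone install
    zoneadm -z appzone boot
    zlogin -C appzone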


Avaloq runs on Oracle Solaris x86

Financial Services is probably the vertical industry with the most successful early adopters of Solaris 10 x86 --like Murex e.g., a leading Risk Management vendor, with several customers in production on Solaris x86 today-- and remains a strong area for Solaris x86 adoption. It is now the turn of Avaloq, the Swiss market leader in integrated banking software, to announce that it has released its Avaloq Banking System on Oracle Solaris 10 x86-64.

"Oracle Solaris 10 x86-64 as an enterprise class Operating System is a decisive advantage for banks." --Klaus Rausch, CTO, Avaloq Evolution Ltd

If you are a Solaris SPARC ISV today, Solaris x86 is your safest and quickest path to leverage commodity x86 hardware and its price-performance advantages, where it makes sense --the traditional RISC architecture, the novel CMT architecture and the standard x86 architecture indeed all have different design points, and each has the best price-performance when applied to the appropriate workload. Safest, because you retain all of the Solaris enterprise-class features that your customers love, notably the Solaris binary compatibility --the Oracle Solaris 10 Binary Application Guarantee Program still accepts submissions until May 2011, by the way. Quickest, because you maintain a single source code, i.e. no porting is needed --check out the Oracle Solaris 10 Source Application Guarantee Program, also valid until May 2011. If you are a Windows or Linux ISV today, Solaris x86 is probably your best bet to differentiate yourself. Whether it is virtualization with Containers, data integrity with the ZFS filesystem, or availability with SMF, Oracle Solaris 10 has an award-winning technology that can help you better meet the needs of enterprise customers. The Avaloq press release points particularly to the Solaris "optimisations for the new Intel processors [that make Solaris 10] a secure, energy-efficient and highly scalable base for mission-critical IT systems which reduces a bank's TCO significantly." Avaloq's Architect Martin Büchi is in fact talking at Oracle OpenWorld 2010 today at 1PM PST, to discuss database application development. Still, if you happen to be there, ask him about Solaris x86.


Turn planned system outage into no application downtime

High-Availability in IT is a strategy to satisfy business availability needs. It is impacted by planned outages and by unplanned outages --the result of a system or software fault, which we will not address here. Planned outages are typically the result of a preventive or corrective maintenance task that imposes an interruption on day-to-day system operation. The traditional approach to a planned outage is careful planning of the intervention, to minimize the downtime and the risk of the system not coming back up properly. With virtualization, novel approaches can be taken, where system downtime and application downtime are decoupled. Oracle VM Server for SPARC, f.k.a. Solaris Logical Domains, is a virtualization and partitioning solution supported on Sun CMT servers powered by the UltraSPARC T-class processors. Oracle VM Server for SPARC allows the creation of multiple virtual systems on a single physical system. Each virtual system is called a logical domain (LDom) and runs its own copy of the Solaris operating system. Among their many features, LDoms have the ability to do warm migration between two machines, i.e. to checkpoint and migrate an active LDom from one server to another. In ISV Engineering, we have demonstrated this Domain Mobility feature for a running installation of the Oracle 10gR2 database. During the migration, the database server is not shut down. The migration from one physical host to another is also transparent to the client applications connecting to the database --as long as no timeouts are encountered; conversely, timeout values can be appropriately set by the application's admins and/or developers for the warm migration to happen transparently-- such that there is no downtime for the application. This work has been documented in the following whitepaper on the Sun Developer Network: Increasing Application Availability Using Oracle VM Server for SPARC. Good reading!
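For a flavor of the feature, the warm migration itself boils down to a single command. Here is a minimal sketch with hypothetical domain and host names --the whitepaper covers the actual prerequisites, such as shared storage for the domain's virtual disks:

    # On the source server: checkpoint the active domain 'db1' and resume
    # it on the target machine (ldm prompts for the target's credentials).
    ldm migrate-domain db1 root@target-host

    # On the target server: verify that db1 is now running there.
    ldm list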


You're invited: Oracle Solaris Day, June 28th, Herzliya

A lot has been written about Solaris over the past few months: about its continued investment at Oracle, about OpenSolaris, about its licensing policy, about third-party support, etc. It is time you hear directly from the source the true state of Solaris at Oracle, and its future. We would like to invite you to attend a half-day seminar on just Oracle Solaris. We will also cover your migration path, as a Sun Partner Advantage member, to the Oracle Partner Network, so you can continue to receive support from Oracle for your own investment and specialization in Solaris! Take advantage of this unique opportunity to meet Sun-Oracle Solaris engineers and RSVP now! To register, please send an email to isve-toi-israel@sun.com with your full name; it's free!

YOU'RE INVITED
Oracle Solaris Day

Date: Monday, June 28th, 2010
Time: 09:30
Location: Sun Microsystems Office, Ackerstein Tower A, 8th floor, 9 Hamenofim, Herzliya Pituach, Israel

Agenda:
09:30  Welcome & Introduction
09:45  Migrating from Sun Partner Advantage to Oracle Partner Network (Frédéric Parienté, Oracle ISV Engineering)
10:00  State of Solaris and OpenSolaris at Oracle (Frédéric Parienté, Oracle ISV Engineering)
10:30  Solaris Product Directions (Amit Hurvitz, Oracle ISV Engineering)
11:15  Solaris Virtualization Technologies (Orgad Kimchi, Oracle ISV Engineering)

Don't hesitate to contact us with any question at isve-toi-israel@sun.com.


Helia secures MySQL with ZFS

ParisLabs is a French Web 2.0 startup that came out of the TELECOM ParisTech Entrepreneurs incubator. ParisLabs develops and markets a SaaS platform for building networking sites, built off a Java/MySQL stack. It is used, among others, by the French-speaking professional networking site Helia, kinda Facebook meets Monster. Helia stood out of the pack last year and is now reaching 300K monthly unique visitors as of March 2010 --we wish them continued success! ParisLabs currently hosts the Helia service on a dedicated server at Dedibox, a leading French hoster. With a rapidly growing database, Helia needs to make frequent backups, to be able to recover from a disaster. So far, Helia had been using mysqldump to export the data into flat files; the exported files are then zipped and sent over to a remote server. Apart from being a completely manual process, this approach has a major downside: exporting a large database with mysqldump can take a long time, during which the database is stopped, putting the entire site off-line. Also, recovering a database from dumped SQL files after a disaster is a long process, which would create even more downtime for the service. As a member of the Sun Startup Essentials program, ParisLabs connected with the Sun-Oracle ISV Engineering team to come up with a solution that would create an acceptable security level for their data backup strategy, with minimum service downtime and minimum hardware/software additions. We ruled out, to start with, the option of setting up a slave MySQL server to act as a hot backup, which would have meant a more complicated architecture, more administration and a bigger infrastructure cost that the start-up could not afford. Instead, we explored the possibility of leveraging the novel ZFS filesystem included in Solaris 10. Why?
- All filesystem operations are copy-on-write transactions, so the on-disk state is always valid. Every block is checksummed to prevent silent data corruption. Securing data on disk is the very first level of data integrity.
- ZFS natively supports all common RAID functionalities. By mirroring the data files, we get another level of data security, protecting them from hardware failure.
- Instant snapshots of the current state of a filesystem are performed in only a couple of seconds, and no initial disk space is required.
- The ZFS send/receive commands allow the backup/restore of snapshots to/from another filesystem (local or remote).

From the above, we concluded it was possible to design a poor man's data backup infrastructure using ZFS as the backbone. With ParisLabs, we explored the following scenarios, all based on the ZFS snapshot capability:
1. Simple snapshot of the MySQL data and binary log files
2. Complete backup of a MySQL data files snapshot in a single archive file
3. Incremental backup of a MySQL data files snapshot in a single archive file
4. Incremental backup of a MySQL data files snapshot to a "clone" filesystem on the local server
5. Incremental backup of a MySQL data files snapshot to a "clone" filesystem on a remote server

They range from the most simple and least costly solution (#1) to the most secure and flexible solution (#5), which combines several advantages: the backup is on a distant server, so we are insured against a disk or system failure; the files are backed up to a clone filesystem, allowing us to browse and restore individual files; and the backups are done in an incremental manner, reducing the network traffic and the backup time.
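To give a flavor of scenario #5, here is a minimal sketch. The pool, dataset and host names are hypothetical, a previous snapshot (@prev) is assumed to exist on both sides, and quiescing MySQL around the snapshot is omitted for brevity:

    # Take today's snapshot of the dataset holding the MySQL data files;
    # this takes a couple of seconds and no initial disk space.
    zfs snapshot tank/mysql@today

    # Ship only the blocks that changed since the previous snapshot to a
    # filesystem on the remote backup server (-F rolls the target back to
    # the matching snapshot before applying the increment).
    zfs send -i tank/mysql@prev tank/mysql@today | \
        ssh backup-host zfs receive -F backup/mysql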
These scenarios and their implementations have been documented in a whitepaper that we will soon be publishing on the Oracle Technology Network. In the meantime, feel free to leave a note here if you are also interested in using ZFS to implement a backup strategy for your own application. Note that these strategies can be applied to any kind of files, not only MySQL data files. As long as your problem comes down to securing and backing up files, ZFS can help you. And again, ZFS comes out-of-the-box with Solaris 10, OpenSolaris and the Sun OpenStorage appliances.


Welcome Freeciv 2.2 to Solaris

Freeciv 2.2.0, a new major release of the open-source civilization-building strategy game, came out last month. I have built it for (Open)Solaris x86 --with the SDL client-- on a Solaris 10 Update 8 system --that's the latest Solaris 10 update to date, released 10/2009-- using the same steps that worked for Freeciv 2.1.9. I did have to comment out, though, the typedef on line 96 in common/featured_text.h, after I ran into the following error:

    libtool: compile: gcc -DHAVE_CONFIG_H -I. -I.. -I../utility -I./aicore -DLOCALEDIR=\"/usr/local/share/locale\" "-DDEFAULT_DATA_PATH=\".:data:~/.freeciv:/usr/local/share/freeciv\"" -Wall -Wpointer-arith -Wcast-align -Wmissing-prototypes -Wmissing-declarations -g -O2 -MT featured_text.lo -MD -MP -MF .deps/featured_text.Tpo -c featured_text.c -o featured_text.o
    In file included from featured_text.c:34:
    featured_text.h:96: error: conflicting types for 'offset_t'
    /usr/sfw/lib/gcc/i386-pc-solaris2.10/3.4.3/include/sys/types.h:233: error: previous declaration of 'offset_t' was here

I need to let the Freeciv developers know of the above but, in the meantime, this workaround seems to work. Under Solaris, you will need to install libSDL, libSDL_mixer and libSDL_image under /usr/local on your system for Freeciv to work. For your convenience, they are all available pre-built at opensolaris.free.fr, along with the new Freeciv 2.2.0 package. Note that Freeciv 2.2 is now run by typing freeciv-sdl and no longer civclient. This is how it looks on my Solaris 10 desktop. As a side note, I am not providing a GTK-based client/package, because it does not compile on Solaris 10 to date. Freeciv 2.2.0 uses the GTK_STOCK_EDIT Gnome 2.6 feature, which is apparently not available in the version of Gnome that comes with Solaris 10 and Java Desktop System 3. As a result, I am getting the following error at compilation time:

    libtool: compile: gcc -DHAVE_CONFIG_H -I. -I../.. -I. -I./.. -I./../include -I../../utility -I../../common -I../../common/aicore -I./../agents -I/usr/include/gtk-2.0 -I/usr/lib/gtk-2.0/include -I/usr/include/atk-1.0 -I/usr/include/pango-1.0 -I/usr/openwin/include -I/usr/sfw/include -I/usr/sfw/include/freetype2 -I/usr/include/glib-2.0 -I/usr/lib/glib-2.0/include -DLOCALEDIR=\"/usr/local/share/locale\" "-DDEFAULT_DATA_PATH=\".:data:~/.freeciv:/usr/local/share/freeciv\"" -Wall -Wpointer-arith -Wcast-align -Wmissing-prototypes -Wmissing-declarations -g -O2 -MT chatline.lo -MD -MP -MF .deps/chatline.Tpo -c chatline.c -o chatline.o
    chatline.c: In function `chatline_init':
    chatline.c:1263: error: `GTK_STOCK_EDIT' undeclared (first use in this function)
    chatline.c:1263: error: (Each undeclared identifier is reported only once
    chatline.c:1263: error: for each function it appears in.)

It should compile on OpenSolaris, though, and I will try compiling it on the OpenSolaris 2010.03 release due later this month. While I was at it, I also built the latest Freeciv 2.1.11 update from the 2.1 release branch, so that current 2.1 players can be up-to-date with respect to bug fixes. You can download both GTK and SDL clients from opensolaris.free.fr as well.


Kinamik Data Integrity secures Solaris audit trails

Kinamik Data Integrity is a software company focused on data integrity, whose mission is to provide an easy answer to a tough question: how do I know the digital records I am looking at are correct? Kinamik develops the Secure Audit Vault software solution that centralizes and preserves sensitive data; by applying a digital fingerprint to the secured records, it makes them tamper-evident and provides proof that the sealed data has not been manipulated from the moment of its creation. In a context of increasingly stringent compliance requirements, the Kinamik Secure Audit Vault helps organizations in regaining the trustworthiness of their data. Kinamik's innovative R&D has been recognized by several awards, including the 2007 Red Herring Top 100 Europe. Already a partner of Sun and Oracle --Kinamik is a member of the Sun Partner Advantage, Sun Startup Essentials and Oracle Partner Network programs--, Kinamik joined the OpenSolaris community in 2009 and contributed to Sun's development efforts on the audit_remote plugin, by collaborating in the testing processes and providing bug reports to Sun's team. This plugin enables the secure transmission of audit trails to remote storage, which prevents an intruder who compromised a system from being able to delete the audit trail of that system. Kinamik developed the receiver part that allows the audit trails to be secured and stored in real time within their product, the Kinamik Secure Audit Vault. This combined Sun-Kinamik solution provides end-to-end trust in the audit information; as far as we know, this is the only product that has done that. There are two key points to note about the Solaris Audit Remote / Kinamik Secure Audit Vault combined solution:
- Audit events are made cryptographically tamper-evident in real time, as they are generated. This significantly reduces, and virtually eliminates, the window in which any manipulation goes undetected, regardless of access control privileges: even administrators of applications, OS or databases cannot make a single change without being detected. In contrast, all other solutions are batch processes, which means that they add integrity protection at the end of the day or after specific periods of time; as a result, they may be protecting data that has already been tampered with.
- End-to-end security trust is achieved with the use of the combined audit_remote plugin in OpenSolaris and the Kinamik Secure Audit Vault. By providing assurances that data is cryptographically tamper-evident from data generation through to long-term retention, storage and archiving, organizations are able to reduce the risk of unwanted manipulation.

We are looking today for organizations and users that can assess this solution, providing us with feedback on desired improvements, capabilities, features and functions. If you are interested in supporting this initiative, please contact Nadeem Bukhari at Kinamik or directly download a virtual appliance with the product at http://www.kinamik.com/opensolaris --the current version of the Kinamik Secure Audit Vault product is also capable of supporting syslog, log4j, JDBC applications, text files and Weblogic audit trails. Following the example of Kinamik, we finally encourage software vendors and software developers at large to join the OpenSolaris community in the spirit of --and for the business merits of-- open innovation.
"Extending the Kinamik Secure Audit Vault's capabilities for working with OpenSolaris effectively allows Kinamik to embrace open innovation. By working with the OpenSolaris community we have expedited the development of a solution to address an emerging but significant business problem. These efforts would have taken much longer to come to fruition without the imaginative power of this creative network of developers and users all around the world. We look forward to collaborating with them to find new and exciting uses for the common solution provided by the Kinamik Secure Audit Vault and OpenSolaris."Roberto Blanc, Marketing Director, Kinamik Data Integrity
