Monday Jun 20, 2016

SPARC Solaris Virtualization ROCKS for SAS Analytics

A global life sciences company recently undertook a complete infrastructure refresh of its SAS Analytics environment to meet critical business requirements such as industry compliance, scale for growth, and resiliency for non-stop operations. This case study documents the strategic choices and the procedural details involved in modernizing their SAS Analytics environment and demonstrates how SPARC Solaris virtualization ROCKS! (see white paper below).

The Challenge:

This company was looking to replace an aging 16-core HP-UX Itanium system, already suffering performance issues, that was running SAS for 150 users.  Their goal was to move to a SAS Grid Computing framework that could support 400 users in a high availability (HA) and disaster recovery architecture for non-stop operation.  In addition, they required high-performance shared file system storage and needed to consolidate three separate SAS releases (9.2, 9.3, and 9.4), all while mapping SAS services to 15+ separate OS instances.


The SPARC Solaris Virtualization Solution:

Combining the Oracle Solaris 11 OS, SPARC servers, and ZFS Storage with Oracle VM Server for SPARC (LDoms) and Oracle Solaris Zones created a very flexible and powerful virtualization solution for this complex challenge. The SPARC Solaris virtualization strategy enabled strict compliance with SAS licensing policy while allowing prioritized resource allocations for memory, I/O, and network bandwidth, all without adding licensing or virtualization costs for the customer.

Leveraging the flexibility of Oracle Solaris virtualization technologies to achieve both business and IT infrastructure needs enabled this pharmaceutical company to transform and optimize their SAS Analytics environment.

The Devil is in The Details:



Read this white paper for best practices, lessons learned, and a detailed deployment anatomy covering the specifics of this case study.  It also includes the actual scripts for the virtualization services that were created:

White Paper: Modernization of a SAS® Analytics Environment -
Solving Complicated Refresh Challenges with Oracle Solaris and SPARC Virtualization Technologies


Should you have any questions on this case study, you can contact us at isvsupport_ww@oracle.com.

Wednesday Jun 15, 2016

Amazing 4000+TPS with FSS iPAY on Oracle SPARC and Solaris

FSS is a payments and fintech leader, offering business value through a diversified portfolio of software products, hosted payment services, and software services built over 25 years of comprehensive experience across the payments spectrum.  Headquartered in Chennai, India, FSS serves 100+ customers across the globe, including leading public and private sector banks in India and some of the large banks, financial institutions, processors, and prepaid card issuers worldwide. The company has an established presence in America, UK/Europe, ME/Africa, and APAC.

A joint performance and scalability testing exercise was conducted by FSS and Oracle Engineering teams to study the performance and scalability of the FSS payment gateway iPAY on Oracle's SPARC servers running Solaris 11.3. The activity was aimed at scaling up the application load in terms of Transactions Per Second (TPS) with a workload that consisted of a mix of OLTP scenarios.

FSS iPAY is a PCI 3.0 PA-DSS certified payment gateway that provides a highly secure payment transaction zone with its built-in fraud prevention and risk mitigation engine. FSS iPAY is compatible with multiple payment options such as debit/credit/prepaid cards, IMPS, internet banking, and batch banking, and with interchanges like VISA and others.

The following is the functional architecture of the application:

Two Oracle SPARC T5-2 servers and an Oracle ZFS Storage ZS3-2 appliance were used to run the test with FSS iPAY 3.1.0, Oracle Database 12c RAC, and an Oracle WebLogic 12c cluster, along with shared QFS on SAN storage for the application tier.

Security Compliance

The Oracle Solaris 11 compliance framework was used to generate the PCI-DSS compliance report for the systems hosting the iPAY application. The findings from the report were used to verify that these systems were configured according to the requirements of the PCI-DSS standard.

Scaling to 4000+ TPS with excellent response times on 32 cores

The performance testing exercises were conducted with 32 cores for the application tier and 32 cores for the database. Response times and average CPU utilization were measured at different workload levels (generated TPS).

The results showed an amazing 4,000+ TPS, with close to 17 million transactions processed in 70 minutes.


For more details, contact us at isvsupport_ww@oracle.com.

Wednesday Jun 08, 2016

How to configure IP over Infiniband (IPoIB) on Oracle Solaris and SPARC servers

Recently we worked with an ISV who wanted to certify their application with Oracle SuperCluster in order to support their customers who are using this SPARC- and Solaris-based engineered system. Their application has a kernel module which needed to be tested with the InfiniBand driver.

We connected two SPARC T5-2 servers using two IB cards and an IB switch for this project. This simple configuration can simulate a two-node connection of an Oracle SuperCluster.

The following steps are needed to make such an IPoIB configuration work. It is very important to first make sure that the switch is configured properly.

For the IB switch:

  • Define the switch in DNS.
  • ssh root@switchIP
  • The password is "changeme" (usually this is the default password from the manufacturer).
  • enablesm (to enable the subnet manager master)
  • getmaster (to check which switch is the master)

If the switch master is not working, the IB interface status will show as “down” on the server even if all the cables and cards are connected to the switch correctly.

On the Solaris server (T5 server in this case):

Check the physical network interfaces available on the server: dladm show-phys

LINK MEDIA STATE SPEED DUPLEX DEVICE
net1 Ethernet unknown 0 unknown ixgbe1
net2 Ethernet unknown 0 unknown ixgbe2
net0 Ethernet up 1000 full ixgbe0
net3 Ethernet unknown 0 unknown ixgbe3
net6 Ethernet up 1000 full vsw0
net9 Infiniband down 0 unknown ibp1
net5 Infiniband down 0 unknown ibp0

net4 Ethernet up 10 full usbecm2
net10 Ethernet up 40000 unknown vsw2
net11 Ethernet up 40000 unknown vsw1

Check only the IB cards: dladm show-ib

LINK HCAGUID PORTGUID PORT STATE GWNAME GWPORT PKEYS
net9 10E000015A7460 10E000015A7462 2 down -- -- FFFF
net5 10E000015A7460 10E000015A7461 1 down -- -- FFFF

The following two commands are not mandatory, but they make it easier to identify the IB links:

dladm rename-link net9 ibp1

dladm rename-link net5 ibp0

Check renaming: dladm show-phys

LINK MEDIA STATE SPEED DUPLEX DEVICE
net1 Ethernet unknown 0 unknown ixgbe1
net2 Ethernet unknown 0 unknown ixgbe2
net0 Ethernet up 1000 full ixgbe0
net3 Ethernet unknown 0 unknown ixgbe3
net6 Ethernet up 1000 full vsw0
ibp1 Infiniband down 0 unknown ibp1
ibp0 Infiniband down 0 unknown ibp0
net4 Ethernet up 10 full usbecm2
net10 Ethernet up 40000 unknown vsw2
net11 Ethernet up 40000 unknown vsw1

dladm show-ib

LINK HCAGUID PORTGUID PORT STATE GWNAME GWPORT PKEYS
ibp0 10E000015A7380 10E000015A7381 1 up -- -- FFFF
ibp1 10E000015A7380 10E000015A7382 2 down -- -- FFFF

Create IB default partition:

(the partition key, ffff, appears in the PKEYS column of the previous command's output)

dladm create-part -l ibp0 -P ffff ffff.ibp0

dladm show-part

LINK PKEY OVER STATE FLAGS
ffff.ibp0 FFFF ibp0 unknown ----

Create and assign the IP address :

ipadm create-ip ffff.ibp0
ipadm create-addr -T static -a 10.1.10.11/24 ffff.ibp0/v4
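
To verify the configuration from each node, a quick sanity check might look like the following; the peer address 10.1.10.12 is a hypothetical example for the second server:

ipadm show-addr ffff.ibp0/v4
ping 10.1.10.12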


Important commands to check the status of the connection:
dladm show-ib
ibhosts
ibstat
ibswitches
iblinkinfo.pl -R

grep pciex15b3 /etc/path_to_inst


Should you need further help with your IPoIB configurations on SPARC servers, you can contact us at isvsupport_ww@oracle.com.


Tuesday Jun 07, 2016

Success Texas Style with SAS and Oracle SuperCluster DBaaS

The first US state to deliver a complete portfolio of citizen services in the Cloud, Texas is the leader in Open Government. With a "my government, my way" promise, Texas relies on Oracle to help 30+ agencies deliver services to nearly 30M citizens. The shared cloud model creates economies and efficiencies, letting departments deploy large frames they couldn't otherwise afford. Departments are seeing cost reductions of 20-35% from the move to the Cloud. These agencies depend on Oracle Exadata and SuperCluster for Database as a Service in their Cloud.

Alejandro Farias is a Financial Analyst in one of these agencies: the Texas Parks and Wildlife Department (TPWD).  He enthusiastically shared their great experience using SAS with Cloud DBaaS running on Oracle SuperCluster at SAS Global Forum 2016. Some of the main points he covered were:

  • Their transition to the cloud
  • How they used SAS and Oracle to bring better services and accountability to the citizens of Texas
  • Lessons learned on performance tips and efficiencies
  • Insight into the necessity of having multi-disciplinary skills in today’s organizations

Their solution is based on:

  • SAS Business Intelligence Server 9.4
  • Oracle SuperCluster T5-8  
  • Oracle Solaris 11
  • Oracle Database 11g
  • Oracle E-BIZ Financials 11.5.10-2
  • Oracle Solaris Zones, which provide fine-grained virtualization

Greater Performance, Greater Services, Greater Accountability

Their efforts have really paid off and they are seeing fantastic results that enable them to provide better service to Texas residents.

  • DB user response times improved
  • Performance leveled as additional agencies came on-line
  • Resource management prioritization
  • Advanced reconciliation reporting:
      - Automated
      - Days to minutes
      - Journal and accounting smart matching
      - Incorporated validation and integrity checkpoints

Watch Alejandro’s video and presentation slides for more details.

More on the State of Texas cloud enabled services here.

Wednesday May 25, 2016

Teamcenter on Oracle Engineered System 10,000+ Concurrent Users Solution Brief

In order to address PLM requirements for large Automotive, Aerospace and other manufacturing companies, Oracle and Siemens PLM engineering teams recently completed a benchmarking and sizing effort using a huge 50-car-program database (nearly 1 TB) with 10,000 concurrent rich-client usage-profile users on Oracle SuperCluster M7. The following results were achieved:

  • 5x improvement in database / volume import times vs. SAN storage deployments
  • Best Teamcenter transaction response times on the Solaris platform to date at 10,000 concurrent users
  • Streamlined ability to configure redundancy / failover of Teamcenter tiers with Solaris embedded virtualization (LDOMS)
  • Virtually flat server response times  for most transactions
  • No configuration changes required for OS, DB, or Teamcenter
  • Enormous spare CPU capacity available for growth at all tiers

“Combining Teamcenter with the Oracle SuperCluster delivers to our customers a secure, stable, high-performance platform. Using the embedded virtualization features that come with Oracle Solaris 11 on Oracle SuperCluster systems, we were able to quickly install our entire Teamcenter software stack and transfer a very large Teamcenter database. This high-performance system easily supported 10,000 heavy-profile concurrent Teamcenter users with significant computational room for growth.” - Chris Brosz, Vice president of Technical Operations, Siemens PLM Software.

With its powerful combination of out-of-the-box readiness, extreme performance, and advanced security, Oracle SuperCluster M7 simplifies and shortens deployment intervals, consolidates infrastructure, accelerates performance, and provides a high-availability, mission-critical platform. The Teamcenter on Oracle SuperCluster M7 solution provides the stability, scalability, and performance organizations need to support the increasing role that PLM plays in optimizing the value chain, maintaining a competitive edge, and growing margins, today and tomorrow.

Read the Solution Brief for details.

Friday May 06, 2016

Best Practices Using ZFS For SAP ASE Database

SAP Adaptive Server Enterprise (ASE) database versions 16.0 and 15.7 are certified to run on Oracle Solaris 11 systems using ZFS for data storage and logs. Recent testing with OLTP workloads on Oracle SPARC systems with Solaris 11 shows better or on-par performance with ZFS compared to UFS or raw devices.

Oracle Solaris ZFS is a powerful file system that combines file system capabilities with storage features that are traditionally offered by a volume manager. ZFS is the default file system in Oracle Solaris 11 and includes integrated data services, such as compression, encryption, and snapshot and cloning. Oracle Solaris and ZFS are also the foundation of Oracle ZFS Storage Appliance.  

ZFS Best Practices For SAP ASE Database:

  • Create separate zpools for data and the transaction log.
  • Create zpool for data using multiple LUNs for better I/O bandwidth.
  • The ZFS record size for data is one of the key parameters for optimal performance. The default value of 128K is too large and causes page overhead. For Solaris SPARC systems with a DB page size of 8K or less, a ZFS record size of 32K provides the best performance for both file system blocks and raw zvols.
  • Place ZIL (ZFS Intent Log) on SSDs for improved latency of both data and transaction log.
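
A minimal sketch of how these recommendations might be applied follows; the pool names, LUN device names, and SSD log devices below are hypothetical and would need to be adapted to your environment:

# Data pool striped across multiple LUNs, with the ZFS intent log on an SSD
zpool create asedata c0t1d0 c0t2d0 c0t3d0 c0t4d0 log c0t9d0

# Separate pool for the ASE transaction log, also with an SSD log device
zpool create aselog c0t5d0 log c0t10d0

# Match the ZFS record size to the 32K recommendation for DB page sizes of 8K or less
zfs create -o recordsize=32k asedata/sapdata
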
For more details, see SAP Note 2300958: ZFS Certification of SAP ASE 16.0 and Sybase ASE 15.7 on Oracle Solaris 11 (requires SAP login).

Monday May 02, 2016

FIS Payment Card Products on Oracle Solaris and SPARC

FIS™ is one of the world's largest global providers dedicated to banking and payments technologies. FIS empowers the financial world with payment processing and banking solutions, including software, services and technology outsourcing. Headquartered in Jacksonville, Florida, FIS serves more than 20,000 clients in over 130 countries. It is a Fortune 500 company and is a member of Standard & Poor’s 500® Index. 

Advanced Security, Extreme Performance and Unmatched Value with Oracle SPARC.

FIS supports its payment card products IST/Switch, IST/Clearing, IST/MAS, Fraud Navigator and Data Navigator on the latest SPARC platforms running Oracle Solaris 11. FIS Fraud Navigator and Data Navigator have also achieved Oracle Solaris Ready and Oracle SuperCluster Ready status. 

Oracle SPARC servers offer the best computing platform for running FIS applications. Customers can benefit from a large number of high-performance cores along with terabytes of memory to run their mission-critical environments with greater scalability, performance, and security. If you have any questions about running FIS applications on Oracle Solaris and SPARC, you can contact us at isvsupport_ww@oracle.com.

Monday Apr 25, 2016

Unbeatable Scalability of SAP ASE on Oracle M7

The Oracle SPARC M7 platform has incredible performance, and some examples can be found in this blog. One very interesting customer example involves SAP ASE performance and scalability. The SAP Adaptive Server Enterprise (Sybase) database is used by University Hospitals Leuven (UZ Leuven). UZ Leuven is one of the largest healthcare providers in Europe and provides cloud services to 16 other Belgian hospitals, sharing patient records supported by the same IT infrastructure and systems.

UZ Leuven was looking to increase the scale capacity of its SAP ASE platform to accommodate an anticipated 50% business growth in user transactions, along with added application functionality as more hospitals join the network. Their current load was about 80 million transactions per day against a 5 TB database.

420 vs 48 on Intel

The SPARC M7 platform proved to be the only platform that could linearly scale up to 420 SAP ASE clients, while their Intel E7-8857v2 platform scaled to only 48 clients.

The SPARC M7 platform also delivered better performance and response times while ensuring data availability and information security, critical needs for UZ Leuven’s patients. 

“Today, SPARC is the only suitable platform that meets our application needs. We selected SPARC servers over IBM and x86-based solutions because scalability and performance are essential for our mission-critical SAP Adaptive Server Enterprise database infrastructure. With the SPARC M7 servers, we can expand our business and grow at the speed of our customers,” said Jan Demey, Team Leader for IT Infrastructure, University Hospitals Leuven.

You can read the full story here.

In the next blog, we will discuss best practices for using Oracle Solaris ZFS file system for your SAP ASE database.

Wednesday Apr 20, 2016

Oracle OpenWorld and JavaOne 2016 Call for Proposals

The 2016 Oracle OpenWorld and JavaOne calls for proposals are open, and the deadline for submissions is Friday, May 9. We encourage you to submit proposals to present at this year's conference, which will be held September 18-22, 2016, at the Moscone Center in San Francisco.

See here who from ISV Engineering and partners attended last year and the joint projects they presented. 

Submit your abstracts for Oracle OpenWorld and JavaOne now and take advantage of the opportunity to present at the most important Oracle technology and business conference of the year.


Tuesday Apr 19, 2016

Oracle Solaris 11.3 Preflight Checker - Come Fly With Me!

Announcing the latest update to: Oracle Solaris Preflight Applications Checker 11.3

Consider a pilot deciding to fly a new airplane without knowing that it had been 100% tested to fly.

If you are a Solaris developer who is looking to leverage the security, speed, and simplicity of Oracle Solaris 11.3, you need to make sure your application will perform well BEFORE lifting off the ground on that migration.
At Oracle we call that preserving application compatibility between releases. We believe that's pretty important to the success of your flight, and to getting you back onto the ground safely.

Solaris was the first operating system to literally guarantee application compatibility between releases and architectures.  Of course, any good developer knows there are always ways to accidentally break compatibility when you're developing an app, and maybe even get away with it for a while...

That's where the Oracle Solaris Preflight Applications Checker 11.3 (PFC 11.3) tool comes in. 
Think of it as a flight simulator, designed to give the pilot (aka - developer) confidence in the plane they are about to fly.

With PFC 11.3, it is now quite simple to check an existing Solaris 8, 9, or 10 application for its readiness to run on Oracle Solaris 11.3, whether on SPARC or x86 systems. A successful check with this tool is a strong indicator that an application will run unmodified on Oracle Solaris 11.3.
In other words, start up the engines, let's fly!

A little bit about how PFC 11.3 can do this.
PFC 11.3 includes two modules:

1. The Application Checker - which scans applications for usage of specific Solaris features, interfaces, and libraries and recommends improved methods of implementation in Oracle Solaris 11.3.  It can also alert you to the usage of undocumented or private data structures and interfaces, as well as planned discontinuance of Solaris features.  

2. The Kernel Checker - checks the kernel modules and device drivers and their source code and reports potential compatibility issues with Oracle Solaris 11.3.   It can analyze the source code or binaries of the device driver and report any potential "compliance" issues found against the published Solaris Device Driver Interface (DDI) and the Driver-Kernel Interface (DKI).

These two modules scan and analyze your application in three areas to serve up the pre-flight information for running it on Solaris 11.3:

 1) Analysis of the application binaries for usage of libraries as well as Solaris data structures and functions.
 2) Static analysis of the C/C++ sources and shell scripts for usage of function or system calls that are deprecated, removed, or unsupported on Oracle Solaris 11, as well as usage of commands and libraries that have been relocated, deprecated, or removed.
 3) Dynamic analysis of the running application for its usage of dynamic libraries that have been removed, relocated, or upgraded (for example, OpenSSL).

PFC 11.3 not only helps you migrate to the latest release of Solaris, but also makes recommendations on getting the most out of your Oracle systems hardware. PFC 11.3 even generates an HTML report which provides pointers to various migration services offered by Oracle.

Oracle Solaris is designed and tested to protect customer investments in software. PFC 11.3 and the Oracle Solaris Binary Application Guarantee are a powerful combination that reflects Oracle's confidence in the compatibility of applications from one release of Oracle Solaris to the next.

Any technical questions with PFC 11.3 should be directed to the ISV Engineering team:  isvsupport_ww@oracle.com

 Now, sit back, relax, and enjoy your flight!

Monday Apr 18, 2016

IBM Software Products and SPARC Hardware Encryption: Update

Last December, we told you about IBM's GSKit and how it now allows several popular IBM products seamless access to Oracle SPARC hardware encryption capabilities. We thought we'd create a quick Springtime update of that information for our partners and customers.

Obtaining The Proper Version of GSKit

GSKit is bundled with each product that makes use of it; over time, new product releases will incorporate GSKit v8 by default. Until then, the latest GSKit v8 for SPARC/Solaris is available on IBM Fix Central, for download and upgrade into existing products. Installation instructions can be found here.

The support described above is available in GSKit v8.0.50.52 and later. As of April, 2016, the latest GSKit v8.0.50.59 is available for download from Fix Central.

IBM Products that currently make use of GSKit v8 on Solaris (and therefore could take advantage of SPARC on-chip data encryption automatically) include (but are not limited to):

Product (versions with bundled GSKit v8.0.50.52 or later, or requiring a manual GSKit update, as noted):

  • DB2: v9.7 FP11, v10.1 FP5, v10.5 FP7
  • HTTP Server: iFix available for v8.0 and v8.5
  • Security Directory Server (fka Tivoli Directory Server): v6.3 and later, certified with GSKit 8.0.50.59
  • Informix IDS: v11.70 and v12.10, fix available which updates to GSKit 8.0.50.57
  • Cognos BI Server: v10.2.2 IF008 and later
  • Spectrum Protect (fka Tivoli Storage Manager): v7.1.5 and later
  • WebSphere MQ: v8 Fix Pack 8.0.0.4 and later

Determining Current GSKit Version

  • $ /opt/ibm/gsk8/bin/gsk8ver # 32-bit version
  • $ /opt/ibm/gsk8_64/bin/gsk8ver_64 # 64-bit version

Wednesday Mar 30, 2016

The UNIX® Standard Makes ISV Engineering’s Job Easier

Here at Oracle® ISV Engineering, we deal with hundreds of applications on a daily basis. Most of them need to support multiple operating systems (OS) environments including Oracle Solaris. These applications are from all types of diverse industries – banking, communications, healthcare, gaming, and more. Each application varies in size from dozens to hundreds of millions of lines of code. A sample list of applications supporting Oracle Solaris 11 can be found here.

As we help Independent Software Vendors (ISVs) support Oracle Solaris, we understand the real value of standards. Oracle Solaris is UNIX certified and conforms to the UNIX standard, providing assurance of stable interfaces and APIs. (Note: the UNIX standard also encompasses the POSIX interface/API standard.) ISVs and application developers leverage these stable interfaces/APIs to make it easier to port, maintain, and support their applications. The stable interfaces and APIs also reduce overhead costs for ISVs as well as for Oracle's support of the ISVs, a win-win for all involved. ISVs can be confident that the UNIX operating system, the robust foundation below their application, won't change from release to release.

Oracle Solaris is unique in that it goes the extra mile by providing a binary application guarantee, which it has done since its 2.6 release. The Oracle Solaris Binary Application Guarantee reflects confidence in the compatibility of applications from one release of Oracle Solaris to the next and is designed to make re-qualification a thing of the past. If a binary application runs on Oracle Solaris 2.6 or later, including the initial release and all updates, it will run on later releases of Oracle Solaris, including their initial releases and all updates, even if the application has not been recompiled for those latest releases. Binary compatibility between releases of Oracle Solaris helps protect your long-term investment in the development, training, and maintenance of your applications.

It is important to note that the UNIX standard does not restrict the underlying implementation. This is key because it allows Oracle Solaris engineers to innovate "under the hood". By keeping the semantics and behavior of system calls intact, Oracle Solaris software engineers deliver the benefits of improved features, security, performance, scalability, stability, and more, without negatively impacting application developers using Oracle Solaris.

Learn more about Oracle Solaris, a UNIX OS, through the links below:

  • Oracle Solaris 11
  • The UNIX Evolution: An Innovative History
  • Oracle, UNIX, and Innovation
  • The Open Group UNIX Landing Page

Oracle Copyright 2016. UNIX® is a registered trademark owned and managed by The Open Group. POSIX® is a registered Trademark of The IEEE. All rights reserved.



Sunday Mar 13, 2016

Increasing Security for SAP Installations with Immutable Zones

In recent blogs we have talked about various aspects of end-to-end application security with Oracle Solaris 11, SPARC M7 and the ISV Ecosystem. We also talked about a white paper that provides best practices for using the Oracle Solaris compliance tool for SAP installations. Another way to increase the security of an SAP installation is to use Oracle Solaris Immutable Zones. 

A Solaris zone is a virtualized operating system environment created within a single instance of the Solaris OS. Within a zone, the operating system is represented to the applications as virtual operating system environments that are isolated and secure. Immutable Zones are Solaris zones with read-only roots. Both global and non-global zones can be Immutable Zones.

Using Immutable Zones is one technique that can protect applications and the system from malicious attacks by applying read-only protection to the host global zone, kernel zones and non-global zones. Oracle Solaris Zones technology is the recommended approach for deploying application workloads in an isolated environment—no process in one zone can monitor or affect processes running in another zone. Immutable Zones extend this level of isolation and protection by enabling a read-only file system, preventing any modification to the system or system configuration.
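
As a minimal sketch (the zone name sapzone is hypothetical), an existing non-global zone can be made immutable by setting its file-mac-profile property with zonecfg and then rebooting the zone:

# zonecfg -z sapzone
zonecfg:sapzone> set file-mac-profile=fixed-configuration
zonecfg:sapzone> commit
zonecfg:sapzone> exit
# zoneadm -z sapzone reboot

Other profiles, such as flexible-configuration or strict, can be chosen depending on how much of the zone must remain writable.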

As an SAP system requires write access to some directories, it is not possible to install SAP inside an Immutable Zone without further configuration. A new paper provides instructions and best practices on how to create and manage an SAP installation on an Oracle Solaris Immutable Zone. Read the white paper for details or see SAP Note 2260420 (requires SAP login).

Thursday Feb 18, 2016

How To Tell If SPARC HW Crypto Is Being Used? (2016 Edition)

We’ve been blogging here recently about the advantages of SPARC M7’s on-chip hardware encryption, as well as some Oracle partners whose software already works with it. Some readers have been asking “how can I tell if XXXX software is automatically making use of it?” A very good question, which we’d like to answer via an update on Dan Anderson’s seminal 2012 blog post, How to tell if SPARC T4 crypto is being used?

Back then, SPARC T4 hardware encryption was accessed mostly via userland calls, which could be observed via DTrace. Since then, the Solaris Cryptographic Framework in Solaris 11 makes more direct use of native SPARC hardware encryption instructions. This impacts numerous third-party applications (including recent versions of the bundled OpenSSL). While a cleaner approach, it makes DTrace less effective as a way to observe encryption in action.

Enter cpustat and cputrack.

These Solaris commands allow access to SPARC CPU performance counters, and it just so happens that one of these counters tracks on-chip hardware encryption. For SPARC T4 and later, on Solaris 11:

# # Run on a single-socket SPARC T4 server
#
# # Show instruction calls: all processes, all vCPUs, once for 1 sec 
# cpustat -c pic0=Instr_FGU_crypto 1 1
  time cpu event      pic0 
1.021    0 tick         5 
1.021    1 tick         5 
1.021    2 tick         5 
1.021    3 tick        11 
1.010    4 tick         5 
1.014    5 tick         5 
1.016    6 tick        11 
1.010    7 tick         5
1.016    8 tick       106 
1.019    9 tick       358 
1.004   10 tick        22 
1.003   11 tick        54 
1.021   12 tick        25 
1.014   13 tick       203 
1.006   14 tick        10 
1.019   15 tick       385 
1.008   16 tick      2652 
1.006   17 tick        15 
1.009   18 tick        20 
1.006   19 tick       195 
1.011   20 tick        15 
1.019   21 tick        83 
1.015   22 tick        49 
1.021   23 tick       206 
1.020   24 tick       485 
1.019   25 tick        10 
1.021   26 tick        10 
1.021   27 tick       471 
1.014   28 tick      1396 
1.021   29 tick        10 
1.018   30 tick        26 
1.012   31 tick        10 
1.021   32 total     6868 

# # Show number of instruction calls for all processes, per CPU socket
# cpustat -c pic0=Instr_FGU_crypto -A soc 1 1
time soc event      pic0 
1.014    0 tick      7218
1.014  256 total     7218

# # Show number of instruction calls for existing process 10221
# cputrack -c pic0=Instr_FGU_crypto -p 10221 -o outputfile

Note 1: Oracle VM Server for SPARC (aka LDoms) before v3.2 did not allow these commands inside a guest LDom; starting with v3.2, one can set an LDom's perf-counters property to strand or htstrand.

Note 2: By default, Solaris 11 does not allow these commands in non-global zones; to allow them, set limitpriv="default,cpc_cpu" and reboot the zone.
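
As a minimal sketch of both settings (the zone name myzone and domain name mydomain are hypothetical, and the property names should be confirmed against your LDoms release):

# # Allow CPU performance-counter access inside a non-global zone
# zonecfg -z myzone "set limitpriv=default,cpc_cpu"
# zoneadm -z myzone reboot

# # Allow per-strand performance counters inside a guest domain (Oracle VM Server for SPARC 3.2+)
# ldm set-domain perf-counters=htstrand mydomain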

Now you can see these numbers go up and down as hardware encryption is used (or not). For something just a bit more intuitive, I whipped up a little bash script which shows relative usage over time. Feel free to adapt to fit your needs. Here’s the script and a run done just before a command was issued in another window which makes serious use of hardware crypto (this on a SPARC M7 server):

# cat crypto_histo.bash
#! /bin/bash

while (true); do 
    echo `cpustat  -c pic0=Instr_FGU_crypto -A soc 1 1 | \
        awk '/total/ {
            num=4*int(log($NF)/log(10)); 
            hist=""; 
            for (i=0; i<num; i++) hist=hist"="; 
            print hist
        }'`
done
#
# # Run this, then run ‘openssl speed -evp AES-192-CBC’ in another window
# ./crypto_histo.bash 
============
============
============
============================
================================
====================================
====================================
====================================
====================================
====================================
====================================
============
================
============
============
============

SPARC hardware encryption: Always On, Blazingly Fast, and now Eminently Observable.


Thursday Feb 11, 2016

SAS and Oracle SPARC M7 Silicon Secured Memory

In an earlier blog we talked about Solaris/SPARC features that enable you to increase the end-to-end security of your applications. One of the key security risk areas in applications is memory corruption.

Applications are vulnerable to memory corruption due to both common software programming errors as well as malicious attacks that exploit software errors. 317 million new malicious programs and 24 zero-day vulnerabilities were reported in 2014 alone*.

Memory corruption causes unpredictable application behavior and system crashes. A victim thread encounters incorrect data some time after the run-time error occurred, making these bugs extremely hard to locate and fix. Buffer overflows are a major source of security exploits. In-memory databases increase this exposure, as terabytes of critical data reside in memory. Databases and operating systems have tens of millions of lines of code, developed by distributed teams of thousands of developers, so errors introduced by one subsystem can adversely affect one or more other subsystems.

Oracle Silicon Secured Memory (SSM) is a feature of Oracle SPARC T7/M7 systems that detects invalid data accesses based on memory tagging. A version number is stored by software in spare bits of memory: dedicated non-privileged load/store instructions provide the ability to assign a 4-bit version to each 64-byte cache line. The metadata stored in memory is maintained throughout the cache hierarchy and all interconnects. On load/store operations, the processor compares the version set in the pointer with the version assigned to the target memory and generates an exception if there is a mismatch.

3 hours vs 1 minute

SAS recently completed a proof of concept using SSM with SAS 9.4 and the Oracle Studio Discover tool.

SAS 9.4 is a large, memory-intensive enterprise application predominantly written in C.  Using a standard debug track that uses malloc(3) for memory allocation, SAS test programs could be run while optionally interposing the Oracle Studio discover ADI shared library to intercept malloc() calls.  This transparently enables discover ADI to use SPARC M7 Silicon Secured Memory to check for memory corruption at the silicon layer and produce full stack walkbacks when a memory corruption is found.

They were able to realize the following immediate results:

  • Tag cross-platform bugs in just 2-3 days of testing
  • Find, triage, fix, and put back bugs in less than 2 hours
  • Identify bugs 180x faster:
      - Other memory validation tool: 3 hours
      - Silicon Secured Memory and Discover tool: 1 minute

Memory Validation Testing In QA Cycles

SSM, along with the Oracle Studio discover ADI tool, allows ISVs to perform full QA runs at near real-time speed, whereas traditional memory validation tools cannot be used this way due to their high performance overhead; they are typically only used to debug memory corruption after bug reports come in.
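
If you want to experiment with ADI-based memory checking outside of the Oracle Studio discover workflow described above, Solaris 11.3 on ADI-capable SPARC systems also ships an ADI-aware allocator that can be interposed at run time. This is only a sketch of that alternative approach (my_test_program is a placeholder), not the exact SAS setup:

# Run an existing binary with its heap allocations handled by the ADI-aware allocator,
# so out-of-bounds and use-after-free accesses trap in hardware
LD_PRELOAD=libadimalloc.so.1 ./my_test_program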

If you develop or deploy large-scale, memory-intensive applications, can you afford not to know how SSM can help with your product's quality and security?

For more on the SAS as well as the Oracle Database experience with SSM, see the OOW 2015 presentation CON8216: “Inoculating Software, Boosting Quality: Oracle DB & SAS Experience with Silicon Secured Memory” (PDF).

To learn more about SSM and how your applications can take advantage of it, read the article Detecting Memory Errors with Silicon Secured Memory.


* Based on the April 2015 Internet Security Threat Report from Symantec.


Wednesday Feb 03, 2016

IBM Java Applications Taking Advantage of SPARC Hardware Encryption

We've been talking recently about IBM's GSKit, through which many IBM applications can automatically take advantage of SPARC hardware encryption (including on the latest SPARC M7-based systems). We've since been asked whether Java-based IBM applications (such as WebSphere Application Server), or other applications written against IBM's SDK, Java Technology Edition, can take similar advantage. This post is written to help answer those questions.

What is the IBM SDK?
IBM has traditionally licensed Oracle’s Java Runtime Environment and Java Developer Kit, modified it slightly, and released it as the IBM SDK. This combination of Java Runtime and Developer Kit is designed to support many IBM products, and can also be used for new development (although the recommended Java platform on Solaris is Oracle’s own Java Runtime Environment and Java Developer Kit). Oracle Solaris ships with both Java 7 and Java 8, but most IBM apps include the Java 7 version of their SDK.

What is the Advantage of Using Hardware Cryptography on SPARC?
Sometimes quite a bit, depending on the size of the chunks of data being encrypted and decrypted. Take this simple Java program, which does an adequate (if somewhat artificial) job of demonstrating the use of hardware crypto from Java:


This code simply creates an array of random data, of a size specified at runtime, and then encrypts it using the common AES128 cipher. This algorithm happens to be one of the many accelerated by recent SPARC CPUs. When run on out-of-the-box Oracle and IBM implementations of Java 7 on SPARC, we can see the advantage of code that takes advantage of SPARC hardware crypto:



Figure 1: AES128 Encryption on SPARC M7 (no workaround)

Again, this is a very artificial test case used to make a point. The benefit from hardware acceleration will vary by workload and use case, but the key point to keep in mind is that this hardware assist is always available on SPARC M7 (the differences are proportional on SPARC T4 and T5). In those cases where it makes a difference, one should make an effort to take advantage of it.

Whither WebSphere Application Server?

IBM WebSphere Application Server v8, like other J2EE application servers, is written in Java, and could therefore in theory take advantage of the workaround described in the next section. But you don't have to go with an unsupported solution for WAS, because the best practice is usually to stand up IBM's included HTTP Server in front of WAS, and HTTP Server is built with GSKit 8. Check that the version of HTTP Server you use with WAS v8 supports SPARC hardware encryption; if so, you're good to go!

How To Make Use of SPARC Hardware Crypto from IBM Java
Central to the Java Cryptography Architecture is the notion of JCA Providers, which allow developers to create and ship security mechanisms which can ‘plug-in’ to a Java Runtime via well-defined APIs. All Java runtimes ship with a default set of providers, usually found in the instance’s java.security file. Since Java 7, the OracleUcrypto provider has been provided in Solaris releases of Java, specifically to interface with the underlying Solaris Ucrypto library (part of the Solaris Cryptographic Framework). On platforms based on SPARC T4, T5, M5, M6 and M7 CPUs, the Ucrypto library automatically takes advantage of any available underlying SPARC hardware cryptography features.

Those developing Java applications on Solaris with Oracle’s implementation of Java will find that this functionality is available by default on SPARC; in fact, the OracleUcrypto provider has the highest priority in the instance’s java.security file. Here’s an excerpt from the default java.security file in Oracle JDK 1.7:



As mentioned above, Oracle's Java implementations are recommended on Solaris, but for those developers who must make use of the IBM SDK, you'll notice that the IBM version of the java.security file is not quite the same as the one above. In fact, it is missing the OracleUcrypto provider:



What, then, can a developer do to reproduce the desired functionality?

1) The Officially-Supported Solution

Build and deploy against Solaris 11’s built-in Oracle JDK and JRE.

2) The Currently-Unsupported Solution

As you might have already surmised, Java's security provider mechanism allows for quick and easy addition or substitution of additional crypto providers (as in the case of third-party cryptographic hardware modules). By adding the UcryptoProvider to IBM's java.security file, Java executables will get that provider and the advantage it brings. Note: these instructions are correct for Java 7/SDK 7, but have not been tested on other major releases of Java:

Step 1: Add ucrypto-solaris.cfg to lib/security
Copy the ucrypto-solaris.cfg file from the Oracle Java 7 instance (in jre/lib/security) to the lib/security directory in the IBM SDK instance.

Step 2: Add UcryptoProvider as the first entry in the IBM lib/security/java.security file
Assuming you add it to the top of the list and keep the existing providers, the file above would end up looking as follows:
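
As a rough sketch only (the $ORACLE_JDK_HOME and $IBM_SDK_HOME paths are placeholders, and the exact provider list and numbering in your IBM SDK may differ from what is shown here):

# Step 1: copy the Ucrypto configuration file from an Oracle JDK 7 instance
cp $ORACLE_JDK_HOME/jre/lib/security/ucrypto-solaris.cfg $IBM_SDK_HOME/lib/security/

# Step 2: in $IBM_SDK_HOME/lib/security/java.security, add the Ucrypto provider as
# entry 1 and renumber the existing IBM providers after it, along these lines:
#   security.provider.1=com.oracle.security.ucrypto.UcryptoProvider ${java.home}/lib/security/ucrypto-solaris.cfg
#   security.provider.2=com.ibm.crypto.provider.IBMJCE
#   ... remaining IBM providers, renumbered ...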


3) The (Hopefully) Future-Supported Solution

The above workaround does indeed work, but it’s not yet supported by IBM. That’s not to say we’ve not asked for it – we’ve submitted a feature request with IBM, and the good news is that any IBM customer who would also like to see this (perhaps you?) can upvote it now!


[Link to Java code snippet above] 

Thursday Jan 28, 2016

More Free/Open-Source Software Now Available for Solaris 11.3

Building on the program established last year to provide evaluation copies of popular FOSS components to Solaris users, the Solaris team has announced the immediate availability of additional and newer software, ahead of official Solaris releases:

Today Oracle released a set of Selected FOSS Component packages that can be used with/on Solaris 11.3. These packages provide customers with evaluation copies of new and updated versions of FOSS ahead of officially supported Oracle Solaris product releases.

These packages are available at the Oracle Solaris product release repository for customers running Oracle Solaris 11.3 GA. The source code used to build the components is available at the Solaris Userland Project on Java.net. The packages are not supported through any Oracle support channels. Customers can use the software at their own risk.

A detailed how-to guide outlines how to access the selected FOSS evaluation packages, configure IPS publishers, determine which FOSS components are new or updated, and identify, download, and install available packages. The guide also contains recommendations for customers with support contracts.
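
As a minimal sketch of the kind of IPS steps the guide covers (the package in the install example is just an illustration):

# Point the solaris publisher at the Oracle Solaris release repository
pkg set-publisher -g https://pkg.oracle.com/solaris/release/ solaris

# Show installed packages with newer versions available, then install an evaluation component
pkg list -u
pkg install asciidoc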

The component lists below contain the available selected FOSS components that are new or updated since the release of Oracle Solaris 11.3 GA:

New Components

asciidoc 8.6.8; aspell 0.60.6.1; cppunit 1.13.2; daq 2.0.2;
dejagnu 1.5.3; libotr 4.1.0; pidgin-otr 4.0.1; isl 0.12.2;
jjv 1.0.2; qunit 1.18.0; libdnet 1.12; libssh2 1.4.2;
nettle 3.1.1; cx_Oracle 5.2; R 3.2.0; re2c 0.14.2;
nanliu-staging 1.0.3; puppetlabs-apache 1.4.0; puppetlabs-concat 1.2.1; puppetlabs-inifile 1.4.1;
puppetlabs-mysql 3.6.1; puppetlabs-ntp 3.3.0; puppetlabs-rabbitmq 3.1.0; puppetlabs-rsync 0.4.0;
puppetlabs-stdlib 4.7.0; saz-memcached 2.7.1; scons 2.3.4; wdiff 1.2.2;
yasm 1.3.0

Updated Components

ant 1.9.4 (was 1.9.3); mod_jk 1.2.41 (was 1.2.40); mod_perl 2.0.9 (was 2.0.4);
apache2 2.4.16 (was 2.2.29, 2.4.12); autoconf 2.69 (was 2.68); autogen 5.16.2 (was 5.9);
automake 1.15, 1.11.2, 1.10 (was 1.11.2, 1.10, 1.9.6); bash 4.2 (was 4.1); binutils 2.25.1 (was 2.23.1);
cmake 3.3.2 (was 2.8.6); conflict 20140723 (was 20100627); coreutils 8.24 (was 8.16);
libcurl 7.45.0 (was 7.40.0); diffstat 1.59 (was 1.51); diffutils 3.3 (was 2.8.7);
doxygen 1.8.9 (was 1.7.6.1); emacs 24.5 (was 24.3); findutils 4.5.14 (was 4.2.31);
getopt 1.1.6 (was 1.1.5); gettext 0.19.3 (was 0.16.1); grep 2.20 (was 2.14);
git 2.6.1 (was 1.7.9.2); gnutls 3.4.6 (was 2.8.6); gocr 0.50 (was 0.48);
tar 1.28 (was 1.27.1); hexedit 1.2.13 (was 1.2.12); hplip 3.15.7 (was 3.14.6);
httping 2.4 (was 1.4.4); iperf 2.0.5 (was 2.0.4); less 481 (was 458);
lftp 4.6.4 (was 4.3.1); libarchive 3.1.2 (was 3.0.4); libedit 20150325-3.1 (was 20110802-3.0);
libpcap 1.7.4 (was 1.5.1); libxml2 2.9.3 (was 2.9.2); lua 5.2.1 (was 5.1.4);
lynx 2.8.8 (was 2.8.7); m4 1.4.17 (was 1.4.12); make 4.1 (was 3.82);
meld 1.8.6 (was 1.4.0); mysql 5.6.25, 5.5.43 (was 5.6.21, 5.5.43, 5.1.37); ncftp 3.2.5 (was 3.2.3);
openscap 1.2.6 (was 1.2.3); openssh 7.1p1 (was 6.5p1); openssl 1.0.2e plain and fips-140 (was 1.0.1p of each);
pcre 8.38 (was 8.37); perl 5.20.1, 5.16.3 (was 5.12.5); DBI 1.623 (was 1.58);
gettext 1.0.5 (was 0.16.1); Net-SSLeay 1.52 (was 1.36); Tk 804.33 (was 804.31);
pmtools 1.30 (was 1.10); XML-Parser 2.41 (was 2.36); XML-Simple 2.20 (was 2.18);
pv 1.5.7 (was 1.2.0); astroid 1.3.6 (was 0.24.0); cffi 1.1.2 (was 0.8.2);
CherryPy 3.8.0 (was 3.1.2); coverage 4.0.1 (was 3.5); Django 1.4.22 (was 1.4.20);
jsonrpclib 0.2.6 (was 0.1.3); logilab-common 0.63.2 (was 0.58.2); Mako 1.0.0 (was 0.4.1);
nose 1.3.6 (was 1.2.1); pep8 1.6.2 (was 1.5.7); ply 3.7 (was 3.1);
pycurl 7.19.5.1 (was 7.19.0); pylint 1.4.3 (was 0.25.2); Python 3.5.1, 3.4.3, 2.7.11 (was 3.4.3, 2.7.9, 2.6.8);
quilt 0.64 (was 0.60); readline 6.3 (was 5.2); rrdtool 1.4.9;
screen 4.3.1 (was 4.0.3); sed 4.2.2 (was 4.2.1); slrn 1.0.1 (was 0.9.9);
snort 2.9.6.2 (was 2.8.4.1); sox 14.4.2 (was 14.3.2); stunnel 5.18 (was 4.56);
swig 3.0.5 (was 1.3.35); text-utilities 2.25.2 (was 2.24.2); timezone 2015g (was 2015e);
tomcat 8.0.30 (was 8.0.21, 6.0.43); vim 7.4 (was 7.3); w3m 0.5.3 (was 0.5.2);
wget 1.16.3 (was 1.16); wireshark 1.12.8 (was 1.12.5); xmlto 0.0.26 (was 0.0.25);
xz 5.2.1 (was 5.0.1)

Tuesday Jan 26, 2016

Preparing for the Upcoming Removal of UCB Utilities from the Next Version of Solaris

(Note: Edited after errata noted and suggestions made by the Solaris team)

For those of you who are holding on to the /usr/ucb commands as a last vestige of Solaris's origins in the UC Berkeley UNIX distribution, it's time to act. The long-anticipated demise of the /usr/ucb and /usr/ucblib directories is planned for the next major version of Solaris. If you are building software that uses these components, now's the time to switch to alternatives.  Shell scripts are often used during software installation and configuration, so a dependency on /usr/ucb commands could stop your app from installing properly.

If you don't know whether your software package depends on the commands or libraries in these directories, here's a simple heuristic: if you're not requiring Solaris 11 users to run "pkg install ucb", that means you're not using ucb commands or libraries, and you can skip the rest of this writeup.

If you're still reading, perform these checks:

  • Do you explicitly add /usr/ucb to the PATH for your shell commands, so that you get the /usr/ucb versions of commands instead of the /usr/bin versions?
  • Do any shell scripts use /usr/ucb in PATH variables or explicit command paths?
  • Do any system() or exec*() calls in your application use /usr/ucb?

You can go to the top-level directory of your software, either your build tree or your distribution, and run:

    # find ./ \( -type f -a -print -a -exec strings '{}' \; \) | \
    nawk '/^\.\// {file =  $0; first = 1}; /usr\/ucb/ {if (first) {print "FILE:", file; first = 0}; print "\t" $0}'


This shell code will print out any files with strings containing "usr/ucb", so it catches /usr/ucb and /usr/ucblib. Here's the output from running it in a test directory:

FILE:  ./mywd/Makefile
                  INSTALL = /usr/ucb/install
FILE:  ./d.ksh
            /usr/ucb/date -r $seconds $format
FILE:  ./DragAndDrop/core.1
        /usr/ucblib
FILE:  ./%s
        PATH=/usr/openwin/bin:/usr/local/bin:/usr/bin:/usr/etc:/usr/sbin:/usr/ucb:.
FILE:  ./uid.c
        printf( "/usr/ucb/whoami: " );
        system( "exec /usr/ucb/whoami" );
FILE:  ./SUNWvsdks/reloc/SUNWvsdk/examples/docs/README.makefiles
        Please make sure the "ld" used is /usr/ccs/bin/ld rather than /usr/ucb/ld.

Note: The command treats lines that start with "./" as file names, so it mistook the "./%s" it found in ./DragAndDrop/core.1 for a file name. The next line was really found in the file DragAndDrop/core.1. If you see a file name in the output that doesn't exist, then the script was confused in just this way. Ignore the FILE: line for the non-existent file, and the rest of the output will make sense.

Commands most likely to come from /usr/ucb:

  • ps -- now the /usr/bin version also accepts the /usr/ucb/ps arguments
  • echo -- "-n" for no newline vs. the "\c" escape in /usr/bin/echo
  • whereis -- No direct replacement.
  • whoami  -- Replacement in /usr/bin
  • sum -- /usr/ucb and /usr/bin versions return different checksums, see manpage sum(1B)
  • touch -- /usr/ucb/touch has a -f option not in /usr/bin/touch

The good news is that of the 76 commands in /usr/ucb as of Solaris 11.3, 45 of them are links back to /usr/bin, and only 31 are unique to /usr/ucb.  This means that many of the commands in /usr/ucb are already available in /usr/bin by default and, in some cases, /usr/ucb may not be required at all.  "ls -la /usr/ucb" shows which commands are linked to /usr/bin.
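
For example, assuming the linked entries are symbolic links (as they are in Solaris 11.3), you can list just those entries and see where they point with something like:

ls -la /usr/ucb | grep ' -> '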

The man page for a /usr/ucb command can be displayed with "man -s1b <cmd>", e.g.

# man -s1b echo

Now check for libraries. Look back at the find output, or peruse your own build files. Do you see /usr/ucblib in any makefiles, or any LD_LIBRARY_PATH / LD_PRELOAD variables?

Libraries in /usr/ucblib and /usr/ucblib/sparcv9:

  • libcurses.so
  • libdbm.so
  • librpcsoc.so
  • libtermcap.so
  • libucb.so

So get ready for the changes in Solaris, and clear out the last remnants of UCB, along with your SunOS 4.x documentation and your Joy of UNIX button. Fix up your software before Oracle releases the version of Solaris without /usr/ucb and /usr/ucblib.

Tuesday Jan 19, 2016

EMC NetWorker Performance & Scalability with SPARC T5

New White Paper

EMC and Oracle have thousands of mutual customer installations worldwide that need a storage management suite which centralizes, automates, and accelerates data backup and recovery across their IT environments.  The EMC NetWorker software provides the power and flexibility to meet these challenges on Oracle SPARC servers.

A new white paper has now been released that reviews how NetWorker takes advantage of the capabilities of the SPARC T5-2 server and performs better than previous SPARC or x86 platforms thanks to the T5-2 server's enhanced resources and capabilities.  This paper also highlights best practices for optimal performance using EMC NetWorker on the SPARC T5-2 server platform.

A series of performance test results are covered in this paper indicating that NetWorker backup performance on the SPARC T5-2 increases linearly with an increased number of CPUs.  Some highlights of the test results were:

 - NetWorker backup performance on the SPARC T5-2 server scales linearly with increased hardware threads.
 - CPU and memory utilization on the SPARC T5-2 server increases as the number of save sets increases.
 - Backup performance on the Oracle Solaris ZFS file system is 2.4x better than on a UFS file system.

For nearly 20 years, EMC and Oracle have been building best practices in data management for their joint customers, optimizing the capabilities of EMC products (like NetWorker) on Oracle SPARC servers.  EMC Corporation is also an Oracle PartnerNetwork Platinum level partner, which enables deeper collaboration and ensures that our customers have a great experience from day one forward.



Tuesday Jan 12, 2016

Best Practices Using Oracle Solaris Compliance Tool for SAP

In recent blogs we talked about end-to-end security with Oracle Solaris 11, SPARC M7 and the ISV Ecosystem, and one of its main elements: the built-in Solaris 11 compliance tools.

Organizations such as banks, hospitals, and governments have specialized compliance requirements. Auditors who are unfamiliar with an operating system can struggle to match security controls with requirements. Tools that map security controls to requirements can therefore reduce auditing time and costs.

Oracle Solaris 11 lowers the cost and effort of compliance management by designing security features to easily meet worldwide compliance obligations, and by documenting and mapping technical security controls for common requirements like PCI-DSS to Oracle Solaris technologies. The simple-to-use Oracle Solaris compliance tool provides users not only with reporting but also with simple instructions on how to mitigate any compliance test failure. It also provides compliance report templates.

Available since release 11.2, Oracle Solaris provides scripts that assess and report the compliance of Oracle Solaris to two security benchmarks:

  • Oracle Solaris Security Benchmark and
  • Payment Card Industry-Data Security Standard (PCI-DSS).

The new command, compliance(1M), is used to run system assessments against security/compliance benchmarks and to generate HTML reports from those assessments. The reports indicate which system tests failed and which passed, and they provide any corresponding remediation steps.
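
As a quick sketch of an assessment run (confirm the available benchmark names with compliance list on your system):

compliance list
compliance assess -b pci-dss
compliance report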

A new whitepaper introduces compliance reporting on Oracle Solaris and provides information and best practices on how to assess and report the compliance of an Oracle Solaris system to security standards for SAP installations. The procedure in this whitepaper was tested on an Oracle Solaris global zone, non-global zone, kernel zone, Oracle SuperCluster, and Oracle Solaris Cluster, as well as with various SAP Advanced Business Application Programming (ABAP) and Java releases with Oracle Database 11g and 12c. The document concludes with information on an additional new SAP benchmark for SAP applications with special security requirements. Read the whitepaper for details. There is also a related SAP Note 2114056, "Solaris compliance tool for SAP installation" (requires SAP login).
