Friday May 06, 2016

Best Practices Using ZFS For SAP ASE Database

SAP Adaptive Server Enterprise (ASE) 16.0 and 15.7 are certified to run on Oracle Solaris 11 systems using ZFS for data and log storage. Recent testing with OLTP workloads on Oracle SPARC systems running Solaris 11 shows better or on-par performance with ZFS compared to UFS or raw devices.

Oracle Solaris ZFS is a powerful file system that combines file system capabilities with storage features that are traditionally offered by a volume manager. ZFS is the default file system in Oracle Solaris 11 and includes integrated data services, such as compression, encryption, and snapshot and cloning. Oracle Solaris and ZFS are also the foundation of Oracle ZFS Storage Appliance.  

ZFS Best Practices For SAP ASE Database:

  • Create separate zpools for data and the transaction log.
  • Create zpool for data using multiple LUNs for better I/O bandwidth.
  • The ZFS record size for data is one of the key parameters for optimal performance. The default value of 128K is too large and causes paging overhead. For Solaris SPARC systems with a DB page size of 8K or less, a ZFS record size of 32K provides the best performance for both file system blocks and raw zvols.
  • Place ZIL (ZFS Intent Log) on SSDs for improved latency of both data and transaction log.
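Applied with the standard zpool(1M)/zfs(1M) commands, the practices above might look like the following sketch. The device, pool and dataset names are hypothetical; substitute your own LUNs and SSDs:

```shell
# Data pool striped across multiple LUNs for I/O bandwidth (hypothetical devices)
zpool create datapool c0t1d0 c0t2d0 c0t3d0 c0t4d0
# Separate pool for the transaction log
zpool create logpool c0t5d0
# SSD-backed ZIL (separate log device) on each pool for lower latency
zpool add datapool log c0t8d0
zpool add logpool log c0t9d0
# 32K record size to match a DB page size of 8K or less
zfs create -o recordsize=32k datapool/sapdata
zfs create -o recordsize=32k logpool/saplog
```

Note that recordsize only affects files written after it is set, so set it before loading the database.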
For more details see SAP Note 2300958: ZFS Certification of SAP ASE 16.0 and Sybase ASE 15.7 on Oracle Solaris 11 (requires SAP login).

Monday May 02, 2016

FIS Payment Card Products on Oracle Solaris and SPARC

FIS™ is one of the world's largest global providers dedicated to banking and payments technologies. FIS empowers the financial world with payment processing and banking solutions, including software, services and technology outsourcing. Headquartered in Jacksonville, Florida, FIS serves more than 20,000 clients in over 130 countries. It is a Fortune 500 company and is a member of Standard & Poor’s 500® Index. 

Advanced Security, Extreme Performance and Unmatched Value with Oracle SPARC.

FIS supports its payment card products IST/Switch, IST/Clearing, IST/MAS, Fraud Navigator and Data Navigator on the latest SPARC platforms running Oracle Solaris 11. FIS Fraud Navigator and Data Navigator have also achieved Oracle Solaris Ready and Oracle SuperCluster Ready status. 

Oracle SPARC servers offer the best computing platform for running FIS applications. Customers can benefit from a large number of high-performance cores along with terabytes of memory to run their mission-critical environments with greater scalability, performance and security. If you have any questions about running FIS applications on Oracle Solaris SPARC, you can contact us at isvsupport_ww@oracle.com.

Monday Apr 25, 2016

Unbeatable Scalability of SAP ASE on Oracle M7

The Oracle SPARC M7 platform has incredible performance, and some examples of it can be found in this blog. One very interesting customer example involves SAP ASE performance and scalability. The SAP Adaptive Server Enterprise (Sybase) database is used by University Hospitals Leuven (UZ Leuven). UZ Leuven is one of the largest healthcare providers in Europe and provides cloud services to 16 other Belgian hospitals, sharing patient records supported by the same IT infrastructure and systems.

UZ Leuven was looking to increase the scale capacity of its SAP ASE platform to accommodate an anticipated 50% business growth in user transactions, along with added functionality of the application as more hospitals join the network. Their current load was about 80 million transactions per day for a 5 TB database.

420 vs 48 on Intel

The SPARC M7 platform proved to be the only platform that could linearly scale up to 420 SAP ASE clients, while their Intel E7-8857v2 platform scaled to only 48 clients.

The SPARC M7 platform also delivered better performance and response times while ensuring data availability and information security, critical needs for UZ Leuven’s patients. 

“Today, SPARC is the only suitable platform that meets our application needs. We selected SPARC servers over IBM and x86-based solutions because scalability and performance are essential for our mission-critical SAP Adaptive Server Enterprise database infrastructure. With the SPARC M7 servers, we can expand our business and grow at the speed of our customers,” said Jan Demey, Team Leader for IT Infrastructure, University Hospitals Leuven.

You can read the full story here.

In the next blog, we will discuss best practices for using Oracle Solaris ZFS file system for your SAP ASE database.

Wednesday Apr 20, 2016

Oracle OpenWorld and JavaOne 2016 Call for Proposals

The 2016 Oracle OpenWorld and JavaOne calls for proposals are open, and the deadline for submissions is Friday, May 9. We encourage you to submit proposals to present at this year's conferences, which will be held September 18-22, 2016 at the Moscone Center in San Francisco.

See here who from ISV Engineering and our partners attended last year and the joint projects they presented.

Submit your abstracts for Oracle OpenWorld and JavaOne now and take advantage of the opportunity to present at the most important Oracle technology and business conference of the year.


Tuesday Apr 19, 2016

Oracle Solaris 11.3 Preflight Checker - Come Fly With Me!

Announcing the latest update to: Oracle Solaris Preflight Applications Checker 11.3

Consider a pilot deciding to fly a new airplane without knowing that it had been 100% tested to fly.

If you are a Solaris developer who is looking to leverage the security, speed and simplicity of Oracle Solaris 11.3, you need to make sure your application will perform well BEFORE lifting off the ground on that migration.
At Oracle we call that preserving application compatibility between releases. We believe that's pretty important to the success of your flight, and to getting you back onto the ground safely.

Solaris was the first operating system to literally guarantee application compatibility between releases and architectures.  Of course, any good developer knows there are always ways to accidentally break compatibility when you're developing an app, and maybe even get away with it for a while...

That's where the Oracle Solaris Preflight Applications Checker 11.3 (PFC 11.3) tool comes in. 
Think of it as a flight simulator, designed to give the pilot (aka the developer) confidence in the plane they are about to fly.

With PFC 11.3, it is now quite simple to check an existing Solaris 8, 9, or 10 application for its readiness to be executed on Oracle Solaris 11.3, whether it's on SPARC or x86 systems. A successful check with this tool is a strong indicator that an application will run unmodified on Oracle Solaris 11.3.
In other words, start up the engines, let's fly!

A little bit about how PFC 11.3 can do this.
PFC 11.3 includes two modules:

1. The Application Checker - which scans applications for usage of specific Solaris features, interfaces, and libraries and recommends improved methods of implementation in Oracle Solaris 11.3.  It can also alert you to the usage of undocumented or private data structures and interfaces, as well as planned discontinuance of Solaris features.  

2. The Kernel Checker - checks the kernel modules and device drivers and their source code and reports potential compatibility issues with Oracle Solaris 11.3.   It can analyze the source code or binaries of the device driver and report any potential "compliance" issues found against the published Solaris Device Driver Interface (DDI) and the Driver-Kernel Interface (DKI).

These two modules scan and analyze your application in three areas to serve up the pre-flight information for running it on Solaris 11.3:

 1) Analysis of the application binaries for usage of libraries as well as for usage of Solaris data structures and functions.
 2) Static analysis of the C/C++ sources and shell scripts for the usage of function or system calls that are deprecated, removed or unsupported on Oracle Solaris 11, as well as the usage of commands and libraries which have been relocated, deprecated or removed.
 3) Dynamic analysis of the running application, for its usage of dynamic libraries which have been removed, relocated or upgraded (example: OpenSSL).

PFC 11.3 not only helps you migrate to the latest release of Solaris, but also makes recommendations on getting the most out of your Oracle systems hardware. PFC 11.3 even generates an HTML report which provides pointers to various migration services offered by Oracle.

Oracle Solaris is designed and tested to protect customer investments in software.  PFC 11.3 and The Oracle Solaris Binary Application Guarantee are a powerful combination which reflect Oracle's confidence in the compatibility of applications from one release of Oracle Solaris to the next.

Any technical questions with PFC 11.3 should be directed to the ISV Engineering team:  isvsupport_ww@oracle.com

 Now, sit back, relax, and enjoy your flight!

Monday Apr 18, 2016

IBM Software Products and SPARC Hardware Encryption: Update

Last December, we told you about IBM's GSKit and how it now allows several popular IBM products seamless access to Oracle SPARC hardware encryption capabilities. We thought we'd create a quick Springtime update of that information for our partners and customers.

Obtaining The Proper Version of GSKit

GSKit is bundled with each product that makes use of it; over time, new product releases will incorporate GSKit v8 by default. Until then, the latest GSKit v8 for SPARC/Solaris is available on IBM Fix Central, for download and upgrade into existing products. Installation instructions can be found here.

The support described above is available in GSKit v8.0.50.52 and later. As of April, 2016, the latest GSKit v8.0.50.59 is available for download from Fix Central.

IBM Products that currently make use of GSKit v8 on Solaris (and therefore could take advantage of SPARC on-chip data encryption automatically) include (but are not limited to):

For each product below, the versions listed either bundle GSKit v8.0.50.52 or later, or have a fix available that requires a manual update of GSKit:

  • DB2: v9.7 FP11, v10.1 FP5, v10.5 FP7
  • HTTP Server: iFix available for v8.0 and v8.5
  • Security Directory Server (fka Tivoli Directory Server): v6.3 and later, certified with GSKit 8.0.50.59
  • Informix IDS: v11.70 and v12.10, fix available which updates to GSKit 8.0.50.57
  • Cognos BI Server: v10.2.2 IF008 and later
  • Spectrum Protect (fka Tivoli Storage Manager): v7.1.5 and later
  • WebSphere MQ: v8 Fix Pack 8.0.0.4 and later

Determining Current GSKit Version

  • $ /opt/ibm/gsk8/bin/gsk8ver # 32-bit version
  • $ /opt/ibm/gsk8_64/bin/gsk8ver_64 # 64-bit version

Wednesday Mar 30, 2016

The UNIX® Standard Makes ISV Engineering’s Job Easier

Here at Oracle® ISV Engineering, we deal with hundreds of applications on a daily basis. Most of them need to support multiple operating systems (OS) environments including Oracle Solaris. These applications are from all types of diverse industries – banking, communications, healthcare, gaming, and more. Each application varies in size from dozens to hundreds of millions of lines of code. A sample list of applications supporting Oracle Solaris 11 can be found here.

As we help Independent Software Vendors (ISVs) support Oracle Solaris, we understand the real value of standards. Oracle Solaris is UNIX certified and conforms to the UNIX standard providing assurance of stable interfaces and APIs. (NOTE: The UNIX standard is also inclusive of POSIX interface/API standard).  ISVs and application developers leverage these stable interfaces/APIs to make it easier to port, maintain and support their applications. The stable interfaces and APIs also reduce the overhead costs for ISVs as well as for Oracle’s support of the ISVs – a win-win for all involved. ISVs can be confident that the UNIX operating system, the robust foundation below their application, won't change from release to release.

Oracle Solaris is unique in that it goes the extra mile by providing a binary application guarantee, which it has offered since its 2.6 release. The Oracle Solaris Binary Application Guarantee reflects the confidence in the compatibility of applications from one release of Oracle Solaris to the next and is designed to make re-qualification a thing of the past. If a binary application runs on Oracle Solaris 2.6 or later, including their initial releases and all updates, it will run on later releases of Oracle Solaris, including their initial releases and all updates, even if the application has not been recompiled for those latest releases. Binary compatibility between releases of Oracle Solaris helps protect your long-term investment in the development, training and maintenance of your applications.

It is important to note that the UNIX Standard does not restrict the underlying implementation. This is particularly key because it allows Oracle Solaris engineers to innovate "under the hood". Keeping the semantics and behavior of system calls intact, Oracle Solaris software engineers deliver the benefits of improved features, security, performance, scalability, stability, and so on, without negatively impacting application developers using Oracle Solaris.

Learn more about Oracle Solaris, a UNIX OS, through the links below:

· Oracle Solaris 11

· The UNIX Evolution: An Innovative History

· Oracle, UNIX, and Innovation

· The Open Group UNIX Landing Page

Oracle Copyright 2016. UNIX® is a registered trademark owned and managed by The Open Group. POSIX® is a registered Trademark of The IEEE. All rights reserved.



Sunday Mar 13, 2016

Increasing Security for SAP Installations with Immutable Zones

In recent blogs we have talked about various aspects of end-to-end application security with Oracle Solaris 11, SPARC M7 and the ISV Ecosystem. We also talked about a white paper that provides best practices for using the Oracle Solaris compliance tool for SAP installations. Another way to increase the security of an SAP installation is to use Oracle Solaris Immutable Zones. 

A Solaris zone is a virtualized operating system environment created within a single instance of the Solaris OS. Within a zone, the operating system is represented to the applications as virtual operating system environments that are isolated and secure. Immutable Zones are Solaris zones with read-only roots. Both global and non-global zones can be Immutable Zones.

Using Immutable Zones is one technique that can protect applications and the system from malicious attacks by applying read-only protection to the host global zone, kernel zones and non-global zones. Oracle Solaris Zones technology is the recommended approach for deploying application workloads in an isolated environment—no process in one zone can monitor or affect processes running in another zone. Immutable Zones extend this level of isolation and protection by enabling a read-only file system, preventing any modification to the system or system configuration.
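A zone is made immutable through its file-mac-profile property; a minimal sketch (the zone name and path are illustrative) might look like this:

```shell
# Configure a non-global zone with a read-only root (Immutable Zone)
zonecfg -z sapzone 'create; set zonepath=/zones/sapzone; set file-mac-profile=fixed-configuration'
zoneadm -z sapzone install
zoneadm -z sapzone boot
```

The fixed-configuration profile allows updates to /var but keeps the system configuration read-only; strict and flexible-configuration profiles trade off more or less writability.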

As an SAP system requires write access to some directories, it is not possible to install SAP inside an Immutable Zone without further configuration. A new paper provides instructions and best practices on how to create and manage an SAP installation on an Oracle Solaris Immutable Zone. Read the white paper for details or see SAP Note 2260420 (requires SAP login).

Thursday Feb 18, 2016

How To Tell If SPARC HW Crypto Is Being Used? (2016 Edition)

We’ve been blogging here recently about the advantages of SPARC M7’s on-chip hardware encryption, as well as some Oracle partners whose software already works with it. Some readers have been asking “how can I tell if XXXX software is automatically making use of it?” A very good question, which we’d like to answer via an update on Dan Anderson’s seminal 2012 blog post, How to tell if SPARC T4 crypto is being used?

Back then, SPARC T4 hardware encryption was accessed mostly via userland calls, which could be observed via DTrace. Since then, the Solaris Cryptographic Framework in Solaris 11 makes more direct use of native SPARC hardware encryption instructions. This impacts numerous third-party applications, including recent versions of the bundled OpenSSL. While a cleaner approach, it makes DTrace less effective as a way to observe encryption in action.

Enter cpustat and cputrack.

These Solaris commands allow access to SPARC CPU performance counters, and it just so happens that one of these counters tracks on-chip hardware encryption. For SPARC T4 and later, on Solaris 11:

# # Run on a single-socket SPARC T4 server
#
# # Show instruction calls: all processes, all vCPUs, once for 1 sec 
# cpustat -c pic0=Instr_FGU_crypto 1 1
  time cpu event      pic0 
1.021    0 tick         5 
1.021    1 tick         5 
1.021    2 tick         5 
1.021    3 tick        11 
1.010    4 tick         5 
1.014    5 tick         5 
1.016    6 tick        11 
1.010    7 tick         5
1.016    8 tick       106 
1.019    9 tick       358 
1.004   10 tick        22 
1.003   11 tick        54 
1.021   12 tick        25 
1.014   13 tick       203 
1.006   14 tick        10 
1.019   15 tick       385 
1.008   16 tick      2652 
1.006   17 tick        15 
1.009   18 tick        20 
1.006   19 tick       195 
1.011   20 tick        15 
1.019   21 tick        83 
1.015   22 tick        49 
1.021   23 tick       206 
1.020   24 tick       485 
1.019   25 tick        10 
1.021   26 tick        10 
1.021   27 tick       471 
1.014   28 tick      1396 
1.021   29 tick        10 
1.018   30 tick        26 
1.012   31 tick        10 
1.021   32 total     6868 

# # Show number of instruction calls for all processes, per CPU socket
# cpustat -c pic0=Instr_FGU_crypto -A soc 1 1
time soc event      pic0 
1.014    0 tick      7218
1.014  256 total     7218

# # Show number of instruction calls for existing process 10221
# cputrack -c pic0=Instr_FGU_crypto -p 10221 -o outputfile

Note 1: Oracle VM for SPARC (aka LDoms) before v3.2 did not allow these commands inside a guest LDom; starting with v3.2, one can set an LDom's perf-counter property to strand or htstrand.

Note 2: By default, Solaris 11 does not allow these commands in non-global zones; to enable them, set limitpriv="default,cpc_cpu" and reboot the zone.

Now you can see these numbers go up and down as hardware encryption is used (or not). For something just a bit more intuitive, I whipped up a little bash script which shows relative usage over time. Feel free to adapt to fit your needs. Here’s the script and a run done just before a command was issued in another window which makes serious use of hardware crypto (this on a SPARC M7 server):

# cat crypto_histo.bash
#! /bin/bash

while (true); do 
    echo `cpustat  -c pic0=Instr_FGU_crypto -A soc 1 1 | \
        awk '/total/ {
            num=4*int(log($NF)/log(10)); 
            hist=""; 
            for (i=0; i<num; i++) hist=hist"="; 
            print hist
        }'`
done
#
# # Run this, then run 'openssl speed -evp AES-192-CBC' in another window
# ./crypto_histo.bash 
============
============
============
============================
================================
====================================
====================================
====================================
====================================
====================================
====================================
============
================
============
============
============

SPARC hardware encryption: Always On, Blazingly Fast, and now Eminently Observable.


Thursday Feb 11, 2016

SAS and Oracle SPARC M7 Silicon Secured Memory

In an earlier blog we talked about Solaris/SPARC features that enable you to increase the end-to-end security of your applications. One of the key security risk areas in applications is memory corruption.

Applications are vulnerable to memory corruption due to both common software programming errors as well as malicious attacks that exploit software errors. 317 million new malicious programs and 24 zero-day vulnerabilities were reported in 2014 alone*.

Memory corruption causes unpredictable application behavior and system crashes. A victim thread encounters incorrect data sometime after the run-time error occurred, making these bugs extremely hard to locate and fix. Buffer overflows are a major source of security exploits. In-memory databases increase this exposure, as terabytes of critical data reside in memory. Databases and operating systems have tens of millions of lines of code, developed by distributed teams of thousands of developers, so an error introduced by one subsystem can adversely affect others.

Oracle Silicon Secured Memory (SSM) is a feature of Oracle SPARC T7/M7 systems that detects invalid data accesses based on memory tagging. A version number is stored by software in spare bits of memory. Dedicated non-privileged load/store instructions provide the ability to assign a 4-bit version to each 64-byte cache line. Metadata stored in memory is maintained throughout the Cache hierarchy and all Interconnects. On load/store operations, the processor compares the version set in the pointer with the version assigned in the target memory and generates an exception if there is a mismatch.

3 hours vs 1 minute

SAS recently completed a proof of concept using SSM with SAS 9.4 and the Oracle Studio Discover tool.

SAS 9.4 is a large, memory-intensive enterprise application predominantly written in C. Using a standard debug track that uses malloc(3) for memory allocation, SAS test programs could be run while optionally interposing the Oracle Studio discover ADI shared library to intercept malloc() calls. This transparently enables discover ADI to use SPARC M7 Silicon Secured Memory to check for memory corruption at the silicon layer and produce full stack walk-backs when a corruption is found.

They were able to realize the following immediate results:

  • Tag cross-platform bugs in just 2-3 days of testing
  • Find, triage, fix and put back bugs in less than 2 hours
  • Identify bugs 180x faster: other memory validation tool, 3 hours; Silicon Secured Memory and Discover tool, 1 minute

Memory Validation Testing In QA Cycles

SSM along with the Oracle Studio discover ADI library allows ISVs to perform full QA runs at near real-time speed, whereas traditional memory validation tools cannot be used this way due to their high performance overhead; instead, they are typically used only to debug memory corruption after bug reports come in.

If you develop or deploy large-scale, memory-intensive applications, can you afford not to know how SSM can help improve your product's quality and security?

For more on the SAS as well as the Oracle Database experience with SSM, see the OOW 2015 presentation CON8216: “Inoculating Software, Boosting Quality: Oracle DB & SAS Experience with Silicon Secured Memory” (PDF).

To learn more about SSM and how your applications can take advantage of it, read the article Detecting Memory Errors with Silicon Secured Memory.


* Based on the April 2015 Internet Security Threat Report from Symantec.


Wednesday Feb 03, 2016

IBM Java Applications Taking Advantage of SPARC Hardware Encryption

We’ve been talking recently about IBM’s GSKit, through which many IBM applications can automatically take advantage of SPARC hardware encryption (including on the latest SPARC M7-based systems). We’ve since been asked whether Java-based IBM applications (such as WebSphere Application Server), or other applications written against IBM’s SDK, Java Technology Edition, can take similar advantage. This post is written to help answer those questions.

What is the IBM SDK?
IBM has traditionally licensed Oracle’s Java Runtime Environment and Java Developer Kit, modified it slightly, and released it as the IBM SDK. This combination of Java Runtime and Developer Kit is designed to support many IBM products, and can also be used for new development (although the recommended Java platform on Solaris is Oracle’s own Java Runtime Environment and Java Developer Kit). Oracle Solaris ships with both Java 7 and Java 8, but most IBM apps include the Java 7 version of their SDK.

What is the Advantage of Using Hardware Cryptography on SPARC?
Sometimes quite a bit, depending on the size of the chunks of data being encrypted and decrypted. Take this simple Java program, which does an adequate (if somewhat artificial) job at demonstrating the use of Hardware Crypto from Java:
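The original post embedded the program as an image that did not survive here; a minimal stand-in along the lines described (the class name, method names and default buffer size are illustrative, not the original code) might be:

```java
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class AesBench {

    // Encrypt a buffer with AES-128 in CBC mode; on Solaris/SPARC with the
    // OracleUcrypto provider active, this path is hardware-accelerated.
    static byte[] encrypt(byte[] data, SecretKey key) throws Exception {
        Cipher c = Cipher.getInstance("AES/CBC/PKCS5Padding");
        c.init(Cipher.ENCRYPT_MODE, key);
        return c.doFinal(data);
    }

    public static void main(String[] args) throws Exception {
        int size = args.length > 0 ? Integer.parseInt(args[0]) : 1 << 20;
        byte[] data = new byte[size];
        new SecureRandom().nextBytes(data);          // random plaintext

        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);                                // AES-128
        SecretKey key = kg.generateKey();

        long t0 = System.nanoTime();
        byte[] out = encrypt(data, key);
        long t1 = System.nanoTime();
        System.out.println(size + " plaintext bytes -> " + out.length
                + " ciphertext bytes in " + (t1 - t0) / 1_000_000 + " ms");
    }
}
```

Timing a single doFinal() call is crude, but the comparison the post describes still applies: run the same class under both the Oracle and IBM Java 7 runtimes on SPARC and compare.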


This code simply creates an array of random data of a size specified at runtime, and then encrypts it using the common AES-128 cipher. This algorithm happens to be one of the many accelerated by recent SPARC CPUs. When run on out-of-the-box Oracle and IBM implementations of Java 7 on SPARC, we can see the advantage to code taking advantage of SPARC hardware crypto:



Figure 1: AES128 Encryption on SPARC M7 (no workaround)

Again, this is a very artificial test case used to make a point. The benefit from hardware acceleration will vary by workload and use case, but the key point to keep in mind is that this hardware assist is always available on SPARC M7 (the differences are proportional on SPARC T4 and T5). In those cases where it makes a difference, one should make an effort to take advantage of it.

Whither WebSphere Application Server?

IBM WebSphere Application Server v8, like other J2EE application servers, is written in Java, and could therefore in theory take advantage of the workaround described in the next section. But you don't have to go with an unsupported solution for WAS, because best practice is usually to stand up IBM's included HTTP Server in front of WAS, and HTTP Server is built with GSKit 8. Check that the version of HTTP Server you use with WAS v8 supports SPARC hardware encryption; if so, you're good to go!

How To Make Use of SPARC Hardware Crypto from IBM Java
Central to the Java Cryptography Architecture is the notion of JCA Providers, which allow developers to create and ship security mechanisms which can ‘plug-in’ to a Java Runtime via well-defined APIs. All Java runtimes ship with a default set of providers, usually found in the instance’s java.security file. Since Java 7, the OracleUcrypto provider has been provided in Solaris releases of Java, specifically to interface with the underlying Solaris Ucrypto library (part of the Solaris Cryptographic Framework). On platforms based on SPARC T4, T5, M5, M6 and M7 CPUs, the Ucrypto library automatically takes advantage of any available underlying SPARC hardware cryptography features.

Those developing Java applications on Solaris with Oracle’s implementation of Java will find that this functionality is available by default on SPARC; in fact, the OracleUcrypto provider has the highest priority in the instance’s java.security file. Here’s an excerpt from the default java.security file in Oracle JDK 1.7:



As mentioned above, Oracle’s Java implementations are recommended on Solaris, but for those developers who must make use of the IBM SDK, you’ll notice that the IBM version of the java.security file is not quite the same as the one above. In fact, it is missing the OracleUcrypto provider:



What, then, can a developer do to reproduce the desired functionality?

1) The Officially-Supported Solution

Build and deploy against Solaris 11’s built-in Oracle JDK and JRE.

2) The Currently-Unsupported Solution

As you might have already surmised, Java’s security provider mechanism allows for quick and easy addition or substitution of crypto providers (as in the case of third-party cryptographic hardware modules). By adding the UcryptoProvider to IBM’s java.security file, Java executables will get that provider and the advantage it brings. Note: these instructions are correct for Java 7/SDK 7, but have not been tested on other major releases of Java:

Step 1: Add ucrypto-solaris.cfg to lib/security
Copy the ucrypto-solaris.cfg file from the Oracle Java 7 instance (in jre/lib/security) to the lib/security directory in the IBM SDK instance.

Step 2: Add UcryptoProvider as the first entry in the IBM lib/security/java.security file
Assuming you add it to the top of the list and keep the existing providers, the file would end up looking as follows:
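The resulting file was shown as an image in the original post; schematically, with the Ucrypto entry first and each pre-existing entry renumbered (placeholders stand in for IBM's actual provider class names), it would look like:

```
security.provider.1=com.oracle.security.ucrypto.UcryptoProvider ${java.home}/lib/security/ucrypto-solaris.cfg
security.provider.2=<provider previously listed as security.provider.1>
security.provider.3=<provider previously listed as security.provider.2>
# ...and so on for the remaining IBM providers
```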


3) The (Hopefully) Future-Supported Solution

The above workaround does indeed work, but it’s not yet supported by IBM. That’s not to say we’ve not asked for it – we’ve submitted a feature request with IBM, and the good news is that any IBM customer who would also like to see this (perhaps you?) can upvote it now!


[Link to Java code snippet above] 

Thursday Jan 28, 2016

More Free/Open-Source Software Now Available for Solaris 11.3

Building on the program established last year to provide evaluation copies of popular FOSS components to Solaris users, the Solaris team has announced the immediate availability of additional and newer software, ahead of official Solaris releases:

Today Oracle released a set of Selected FOSS Component packages that can be used with/on Solaris 11.3. These packages provide customers with evaluation copies of new and updated versions of FOSS ahead of officially supported Oracle Solaris product releases.

These packages are available at the Oracle Solaris product release repository for customers running Oracle Solaris 11.3 GA. The source code used to build the components is available at the Solaris Userland Project on Java.net. The packages are not supported through any Oracle support channels. Customers can use the software at their own risk.

A detailed how-to guide outlines how to access the selected FOSS evaluation packages, configure IPS publishers, determine which FOSS components are new or updated, and identify available packages and download/install them. The guide also contains recommendations for customers with support contracts.

The table of components below contains the available selected FOSS components that are new or updated since the release of Oracle Solaris 11.3 GA:

New Components

asciidoc 8.6.8, aspell 0.60.6.1, cppunit 1.13.2, daq 2.0.2, dejagnu 1.5.3, libotr 4.1.0, pidgin-otr 4.0.1, isl 0.12.2, jjv 1.0.2, qunit 1.18.0, libdnet 1.12, libssh2 1.4.2, nettle 3.1.1, cx_Oracle 5.2, R 3.2.0, re2c 0.14.2, nanliu-staging 1.0.3, puppetlabs-apache 1.4.0, puppetlabs-concat 1.2.1, puppetlabs-inifile 1.4.1, puppetlabs-mysql 3.6.1, puppetlabs-ntp 3.3.0, puppetlabs-rabbitmq 3.1.0, puppetlabs-rsync 0.4.0, puppetlabs-stdlib 4.7.0, saz-memcached 2.7.1, scons 2.3.4, wdiff 1.2.2, yasm 1.3.0

Updated Components

ant 1.9.4 (was 1.9.3); mod_jk 1.2.41 (was 1.2.40); mod_perl 2.0.9 (was 2.0.4); apache2 2.4.16 (was 2.2.29, 2.4.12); autoconf 2.69 (was 2.68); autogen 5.16.2 (was 5.9); automake 1.15, 1.11.2, 1.10 (was 1.11.2, 1.10, 1.9.6); bash 4.2 (was 4.1); binutils 2.25.1 (was 2.23.1); cmake 3.3.2 (was 2.8.6); conflict 20140723 (was 20100627); coreutils 8.24 (was 8.16); libcurl 7.45.0 (was 7.40.0); diffstat 1.59 (was 1.51); diffutils 3.3 (was 2.8.7); doxygen 1.8.9 (was 1.7.6.1); emacs 24.5 (was 24.3); findutils 4.5.14 (was 4.2.31); getopt 1.1.6 (was 1.1.5); gettext 0.19.3 (was 0.16.1); grep 2.20 (was 2.14); git 2.6.1 (was 1.7.9.2); gnutls 3.4.6 (was 2.8.6); gocr 0.50 (was 0.48); tar 1.28 (was 1.27.1); hexedit 1.2.13 (was 1.2.12); hplip 3.15.7 (was 3.14.6); httping 2.4 (was 1.4.4); iperf 2.0.5 (was 2.0.4); less 481 (was 458); lftp 4.6.4 (was 4.3.1); libarchive 3.1.2 (was 3.0.4); libedit 20150325-3.1 (was 20110802-3.0); libpcap 1.7.4 (was 1.5.1); libxml2 2.9.3 (was 2.9.2); lua 5.2.1 (was 5.1.4); lynx 2.8.8 (was 2.8.7); m4 1.4.17 (was 1.4.12); make 4.1 (was 3.82); meld 1.8.6 (was 1.4.0); mysql 5.6.25, 5.5.43 (was 5.6.21, 5.5.43, 5.1.37); ncftp 3.2.5 (was 3.2.3); openscap 1.2.6 (was 1.2.3); openssh 7.1p1 (was 6.5p1); openssl 1.0.2e plain and fips-140 (was 1.0.1p of each); pcre 8.38 (was 8.37); perl 5.20.1, 5.16.3 (was 5.12.5); DBI 1.623 (was 1.58); gettext 1.0.5 (was 0.16.1); Net-SSLeay 1.52 (was 1.36); Tk 804.33 (was 804.31); pmtools 1.30 (was 1.10); XML-Parser 2.41 (was 2.36); XML-Simple 2.20 (was 2.18); pv 1.5.7 (was 1.2.0); astroid 1.3.6 (was 0.24.0); cffi 1.1.2 (was 0.8.2); CherryPy 3.8.0 (was 3.1.2); coverage 4.0.1 (was 3.5); Django 1.4.22 (was 1.4.20); jsonrpclib 0.2.6 (was 0.1.3); logilab-common 0.63.2 (was 0.58.2); Mako 1.0.0 (was 0.4.1); nose 1.3.6 (was 1.2.1); pep8 1.6.2 (was 1.5.7); ply 3.7 (was 3.1); pycurl 7.19.5.1 (was 7.19.0); pylint 1.4.3 (was 0.25.2); Python 3.5.1, 3.4.3, 2.7.11 (was 3.4.3, 2.7.9, 2.6.8); quilt 0.64 (was 0.60); readline 6.3 (was 5.2); rrdtool 1.4.9; screen 4.3.1 (was 4.0.3); sed 4.2.2 (was 4.2.1); slrn 1.0.1 (was 0.9.9); snort 2.9.6.2 (was 2.8.4.1); sox 14.4.2 (was 14.3.2); stunnel 5.18 (was 4.56); swig 3.0.5 (was 1.3.35); text-utilities 2.25.2 (was 2.24.2); timezone 2015g (was 2015e); tomcat 8.0.30 (was 8.0.21, 6.0.43); vim 7.4 (was 7.3); w3m 0.5.3 (was 0.5.2); wget 1.16.3 (was 1.16); wireshark 1.12.8 (was 1.12.5); xmlto 0.0.26 (was 0.0.25); xz 5.2.1 (was 5.0.1)

Tuesday Jan 26, 2016

Preparing for the Upcoming Removal of UCB Utilities from the Next Version of Solaris

(Note: Edited after errata noted and suggestions made by the Solaris team)

For those of you who are holding on to the /usr/ucb commands as a last vestige of Solaris's origins in the UC Berkeley UNIX distribution, it's time to act. The long-anticipated demise of the /usr/ucb and /usr/ucblib directories is planned for the next major version of Solaris. If you are building software that uses these components, now's the time to switch to alternatives. Shell scripts are often used during software installation and configuration, so a dependency on /usr/ucb commands could stop your app from installing properly.

If you don't know whether your software package depends on the commands or libraries in these directories, here's a simple heuristic: if you're not requiring Solaris 11 users to run "pkg install ucb", then you're not using UCB commands or libraries, and you can skip the rest of this writeup.

If you're still reading, perform these checks:

  • Do you explicitly add /usr/ucb to the PATH for your shell commands, so that you get the /usr/ucb versions of commands instead of the /usr/bin versions?
  • Do any shell scripts use /usr/ucb in PATH variables or explicit command paths?
  • Do any system() or exec*() calls in your application use /usr/ucb?

You can go to the top-level directory of your software, either your build tree or your distribution, and run:

    # find ./ \( -type f -a -print -a -exec strings '{}' \; \) | \
    nawk '/^\.\// {file =  $0; first = 1}; /usr\/ucb/ {if (first) {print "FILE:", file; first = 0}; print "\t" $0}'


This shell code will print out any files with strings containing "usr/ucb", so it catches /usr/ucb and /usr/ucblib. Here's the output from running it in a test directory:

FILE:  ./mywd/Makefile
                  INSTALL = /usr/ucb/install
FILE:  ./d.ksh
            /usr/ucb/date -r $seconds $format
FILE:  ./DragAndDrop/core.1
        /usr/ucblib
FILE:  ./%s
        PATH=/usr/openwin/bin:/usr/local/bin:/usr/bin:/usr/etc:/usr/sbin:/usr/ucb:.
FILE:  ./uid.c
        printf( "/usr/ucb/whoami: " );
        system( "exec /usr/ucb/whoami" );
FILE:  ./SUNWvsdks/reloc/SUNWvsdk/examples/docs/README.makefiles
        Please make sure the "ld" used is /usr/ccs/bin/ld rather than /usr/ucb/ld.

Note: The command treats lines that start with "./" as file names, so it mistook the "./%s" it found in ./DragAndDrop/core.1 for a file name. The next line was really found in the file DragAndDrop/core.1. If you see a file name in the output that doesn't exist, the script was confused in just this way. Ignore the FILE: line for the non-existent file, and the rest of the output will make sense.
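
If those false positives are a nuisance, a simpler alternative (at the cost of the per-line detail) is a recursive grep, which prints only the names of matching files. This sketch assumes GNU grep, shipped in /usr/gnu/bin on Solaris 11:

```shell
# Print the names of files under the current tree that mention
# "usr/ucb" anywhere -- this catches /usr/ucb and /usr/ucblib alike.
# -r recurses into directories, -l lists matching file names only.
grep -rl 'usr/ucb' .
```

Run it from the same top-level directory as the find command above.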

Commands most likely to come from /usr/ucb:

  • ps -- now the /usr/bin version also accepts the /usr/ucb/ps arguments
  • echo -- "-n" for no newline vs. the "\c" escape in /usr/bin/echo
  • whereis -- No direct replacement.
  • whoami  -- Replacement in /usr/bin
  • sum -- /usr/ucb and /usr/bin versions return different checksums; see the sum(1B) man page
  • touch -- /usr/ucb/touch has a -f option not in /usr/bin/touch

The good news is that of the 76 commands in /usr/ucb as of Solaris 11.3, 45 of them are links back to /usr/bin, and only 31 are unique to /usr/ucb. This means that many of the commands in /usr/ucb are already available in /usr/bin by default, and in some cases /usr/ucb may not be required at all. "ls -la /usr/ucb" shows which commands are linked to /usr/bin.

The man pages for a /usr/ucb command can be displayed with "man -s1b <cmd>", e.g.

# man -s1b echo

Now check for libraries. Look back at the find output, or peruse your own build files. Do you see /usr/ucblib in any makefiles, or any LD_LIBRARY_PATH / LD_PRELOAD variables?

Libraries in /usr/ucblib and /usr/ucblib/sparcv9:

  • libcurses.so
  • libdbm.so
  • librpcsoc.so
  • libtermcap.so
  • libucb.so

So get ready for the changes in Solaris, and clear out the last remnants of UCB, along with your SunOS 4.x documentation and your Joy of UNIX button. Fix up your software before Oracle releases the  version of Solaris without /usr/ucb and /usr/ucblib.

Tuesday Jan 19, 2016

EMC NetWorker Performance & Scalability with SPARC T5

New White Paper

EMC and Oracle have thousands of mutual customers worldwide who need a storage management suite that centralizes, automates, and accelerates data backup and recovery across their IT environments.  The EMC NetWorker software provides the power and flexibility to meet these challenges on Oracle SPARC servers.

A new white paper has been released that reviews how NetWorker takes advantage of the capabilities of the SPARC T5-2 server and performs better than previous SPARC or x86 platforms thanks to the T5-2 server's enhanced resources and capabilities.  The paper also highlights best practices for optimal performance using EMC NetWorker on the SPARC T5-2 server platform.

A series of performance test results covered in this paper indicates that NetWorker backup performance on the SPARC T5-2 increases linearly with the number of CPUs.  Some highlights of the test results:

 - NetWorker backup performance on the SPARC T5-2 server scales linearly with increased hardware threads.
 - CPU and memory utilization on the SPARC T5-2 server increases with the increase in number of save sets.
 - Backup performance on the Oracle Solaris ZFS file system is 2.4x better than on a UFS file system.

For nearly 20 years EMC and Oracle have been building best practices in data management for their joint customers, optimizing the capabilities of EMC products (like NetWorker) on Oracle SPARC servers.  EMC Corporation is also an Oracle PartnerNetwork Platinum level partner, which enables deeper collaboration and ensures that our customers have a great experience from day one forward.



Tuesday Jan 12, 2016

Best Practices Using Oracle Solaris Compliance Tool for SAP

In a recent blog we talked about end-to-end security with Oracle Solaris 11, SPARC M7 and the ISV ecosystem, and about one of its main elements: the built-in Solaris 11 compliance tools.

Organizations such as banks, hospitals, and governments have specialized compliance requirements. Auditors who are unfamiliar with an operating system can struggle to match security controls with requirements, so tools that map security controls to requirements can reduce audit time and costs.

Oracle Solaris 11 lowers the cost and effort of compliance management by designing security features to meet worldwide compliance obligations, and by documenting and mapping technical security controls for common requirements, such as PCI-DSS, to Oracle Solaris technologies. The simple-to-use Oracle Solaris compliance tool provides users not only with reporting but also with simple instructions on how to mitigate any compliance test failure. It also provides compliance report templates.

Available since release 11.2, Oracle Solaris provides scripts that assess and report the compliance of Oracle Solaris to two security benchmarks:

  • Oracle Solaris Security Benchmark and
  • Payment Card Industry-Data Security Standard (PCI-DSS).

The new command, compliance(1M), is used to run system assessments against security/compliance benchmarks and to generate HTML reports from those assessments. The reports indicate which system tests failed and which passed, and they provide any corresponding remediation steps.
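
The typical workflow is a two-step assess-then-report sequence, run as root. The benchmark name and options below are a sketch; verify them against the compliance(1M) man page on your system:

    # compliance assess -b pci-dss
    # compliance report

The assess step stores the results, and the report step renders them as HTML, including the remediation steps for each failed check.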

A new whitepaper introduces the compliance report on Oracle Solaris and provides information and best practices on how to assess and report the compliance of an Oracle Solaris system to security standards for SAP installations. The procedure in this whitepaper was tested on an Oracle Solaris global zone, non-global zone, kernel zone, Oracle SuperCluster, Oracle Solaris Cluster, as well as various SAP Advanced Business Application Programming (ABAP) and Java releases with Oracle Database 11g and 12c. The document concludes with information on an additional new SAP benchmark for SAP applications with special security requirements. Read the whitepaper for details. A related SAP Note 2114056, "Solaris compliance tool for SAP installation", has also been published (requires SAP login).

Friday Jan 08, 2016

Informatica Analytics on Oracle SPARC: Up to 9x Faster with In-Memory DB

Oracle and Informatica have a very close working relationship and one of the recent results of this collaboration is the joint project done by Informatica and our Oracle ISV Engineering team to test the performance of Informatica software with Oracle Database 12c In-memory on Oracle SPARC systems. 

Informatica previously optimized their PowerCenter and Data Quality applications on Oracle Engineered Systems, achieving up to five times faster performance with Oracle Exadata Database Machine and the SPARC-based Oracle SuperCluster (see announcement). They have been Oracle SuperCluster Optimized as well as Oracle Exadata and Exalytics Optimized since 2014. Now they have taken it a step further by successfully testing PowerCenter with the Oracle Database 12c In-Memory feature, achieving extreme performance on SPARC.

A significant number of Informatica customers use Oracle as their main database platform. With the introduction of the Oracle Database 12c In-Memory Option, it is now possible to run real-time, ad-hoc, analytic queries on your business data as it exists right at this moment and receive results in sub-seconds. True real-time analytics! The Oracle SPARC big memory machines with up to 32 terabytes of memory are the perfect match, delivering extreme performance for in-memory databases and business analytics applications. 

Informatica PowerCenter and Oracle Database 12c were both installed on the same machine; PowerCenter made effective use of the In-Memory feature and scaled very well on the Oracle SPARC machine.

The following are some of the test results showing the Oracle Database 12c in-memory advantage on SPARC for Informatica:

  • TPC-H Q6 Performance: 9x in-memory over buffer cache for the workload tested
  • TPC-H Q10 Performance: 1.5x in-memory over buffer cache for the workload tested
  • Oracle Writer Throughput Performance: 2.5x performance improvement in Ram Disk to Ram Disk over Disk to Disk
  • PDO Performance:  Aggregator Tx Throughput: 1.5x in-memory over buffer cache for the workload tested

The tests were run with the following software/hardware stack: 

  • Informatica 9.6.1
  • Oracle SPARC: 8 processors, 12 cores, 2 TB RAM, 4.3 TB disk
  • Oracle Solaris 11.2
  • Oracle DB 12c
  • 10 Gbps network
  • Setup: Domain and DB on same machine

Oracle SPARC servers and Oracle SuperCluster with Oracle Database 12c In-memory prove to be a great platform for Informatica customers to run their analytic queries. 

For more details you can contact isvsupport_ww@oracle.com.


Thursday Dec 24, 2015

The Advantage of Running Temenos on Oracle Engineered Systems

Temenos, a market leader in banking applications, recently won the prestigious Oracle Excellence Award for Exastack ISV Partner of the Year for EMEA.

In the following video Simon Henman and Martin Bailey from Temenos discuss their banking applications and the main challenges their customers face.  They also discuss how Oracle Engineered Systems address these challenges and allow their customers to focus on their banking task itself and not on the infrastructure. That is why they recommend them to their customers and have become Oracle SuperCluster, Exadata and Exalogic Optimized.

In previous blogs we discussed how their main application T24 is SuperCluster Optimized as well as their WealthManager application.

Watch the Video.

Monday Dec 21, 2015

Scaling Intellect MH on Oracle SPARC to 25,000 TPS


Intellect Design Arena Ltd, a Polaris Group company, is a global leader in Financial Technology for Banking, Insurance and other Financial Services.

A joint performance and scalability testing exercise was conducted by Intellect Design and Oracle Engineering teams to study the performance and scalability of Intellect MH on Oracle SPARC systems. The activity was aimed at scaling up the application load in terms of Transactions Per Second (TPS) with a workload that consisted of a mix of 10 OLTP scenarios with audit logging enabled, as well as some related batch scenarios. 

Intellect FT Message Hub (MH) is a lightweight Java-based integration platform that facilitates seamless and transparent integration of business applications. It reduces the complexity of integrating disparate applications by leveraging the principles of Service Oriented Architecture (SOA).

MH provides a function to exchange data online and in batch mode, and enables various interfaces, integration of customer access channels like PCs connected to the Internet and mobile phones, and connection with external financial intelligence institutions and settlement networks.

Message Hub serves as a pass-through station between business applications and provides a common platform for the customer's business transactions. The Listeners are the entry point for front-end systems to perform straight-through transaction processing. The Transaction Rule Engine (TRE) communicates with the Communication Engine for host communication and with the Message Engine for message formatting; these engines coordinate the operations based on configured rules.

The following are the key features of Intellect FT Message Hub:

  • Routing
  • Message transformation
  • Message enhancement
  • Protocol transformation
  • Transaction workflow management
  • Synchronous/Asynchronous transaction
  • Pre/Post process transaction
  • Post dated/ scheduled transaction 
  • Fail-over
  • Support custom action 

MH supports all industry standard protocols including SOAP over HTTP, SOAP over JMS, RESTful, TCP/IP, MQ, JMS, HTTP/s, EJB, File, FTP, SFTP, SMTP, IMAP, POP3. The product also supports a wide range of messaging standards such as SWIFT, ISO 8583, XML, SOAP, JSON, Fixed Length, NVP, Delimited, EBCDIC, POJO, MAP.

The diagram below shows the technical architecture of Intellect FT Message Hub.


Test Details 

OLTP Tests:

The most common MH transactions were covered in the tests. Different transactions were tested with different listeners and communication engines. A mix of the following 10 OLTP Scenarios was tested:

Audit logging:

Audit records containing the request and message details are inserted into the MH database twice: once when the message is received in MH, and again after the message has been processed but before it is transmitted to the external system via the communication engines.

Batch Tests:

In batch processing, records are picked up by Intellect MH from a preconfigured location. The files are processed and records are submitted to external systems (stubs) using JMS communication engines.  Multiple files are processed by the managed servers in parallel. The following batch processes were tested:

Hardware Details:

The application was deployed on Oracle SPARC T5 systems, FS1 Flash storage and ZS3 storage. 

Software Details:

  • Oracle Solaris 11.2 
  • Oracle Database 12c RAC 
  • Oracle Weblogic 12c Cluster 
  • Oracle HTTP Server 12c 
  • Oracle JDK 8 
  • Apache JMeter  
  • IBM MQ 
  • Polaris FT Message Hub 15.1 


Test Results:

The system scaled near-linearly up to 25,000 TPS, with an average response time of 323 ms and about 52K concurrent users.  For the batch tests, around 10 million records (1,000 files, each containing 10,000 records) were processed in 21 minutes.
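
As a quick sanity check on those batch figures, a back-of-the-envelope calculation using only the numbers quoted above:

```python
# Batch test figures quoted in the text
files = 1000
records_per_file = 10_000
minutes = 21

total_records = files * records_per_file            # 10,000,000 records
records_per_sec = total_records / (minutes * 60)    # ~7,937 records/sec
print(f"{total_records:,} records at ~{records_per_sec:,.0f} records/sec")
```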

These results are 6x better than those currently seen on typical large customer deployments.

More Information: 

For more information, details and system sizing help you can contact the team via isvsupport_ww@oracle.com.



Wednesday Dec 16, 2015

Quartet FS In-Memory Analytics Java Advantage on Oracle SPARC

In a previous blog we talked about the benefits of scaling up ActivePivot on large-memory SPARC machines. ActivePivot is the in-memory analytics suite from Quartet FS, deployed by customers such as HSBC and Société Générale.

ActivePivot is an in-memory analytic database written in pure Java that is frequently deployed in a terabyte of memory. Earlier this year, Quartet FS partnered with Oracle to deploy a large credit risk use case in 16 TB of memory and the 384 cores of an Oracle M6-32 SPARC server. Antoine Chambille, the global head of research and development at Quartet FS, presented "Operating a 16-Terabyte JVM...and Living to Tell the Tale" twice at JavaOne 2015. These sessions were very popular and standing room only! You can download the presentation here.

During JavaOne 2015 we also had a chance to sit down with Antoine and discuss how they work with Oracle to push the limits of Java and take advantage of Oracle big memory machines to deliver interactive calculations for a new generation of credit and market risk applications. The new proposals of the Fundamental Review of the Trading Book (FRTB), run by the Basel Committee on Banking Supervision (BCBS), increase the need for such applications and scenarios. Watch the video for more details.


Tuesday Dec 08, 2015

IBM GSKit Supports SPARC M7 Hardware Encryption

Oracle and IBM have a very close working relationship running IBM software on Oracle hardware. One of the recent results of this collaboration is IBM's announcement that its GSKit v8 now supports SPARC M7 hardware encryption (as well as the SPARC T4 and T5 processors). This, in turn, means that several IBM software products can now make use of on-chip SPARC hardware encryption, automatically and without significant performance impact.

What Is GSKit?

The IBM Global Security Kit (aka GSKit) is not a product offering in itself, but a security framework used by many IBM software products for their cryptographic and SSL/TLS capabilities. Examples of IBM products making use of GSKit today include DB2, Informix, IBM HTTP Server, and WebSphere MQ. The latest version of GSKit, version 8 (aka "IBM Crypto for C"), was validated as a FIPS 140-2 Cryptographic Module earlier this year.

Obtaining The Proper Version of GSKit

GSKit is bundled with each product that makes use of it; over time, new product releases will incorporate GSKit v8 by default. Until then, the latest GSKit v8 for SPARC/Solaris is available on IBM Fix Central, for download and upgrade into existing products. Installation instructions can be found here.

The support described above is available in GSKit v8.0.50.52 and later. As of this writing, the latest version, GSKit v8.0.50.55, is available for download from Fix Central.

IBM Products that currently make use of GSKit v8 on Solaris (and therefore could take advantage of SPARC on-chip data encryption automatically) include (but are not limited to):

Determining Current GSKit Version

  • $ /opt/ibm/gsk8/bin/gsk8ver # 32-bit version
  • $ /opt/ibm/gsk8_64/bin/gsk8ver_64 # 64-bit version
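
Since the hardware-encryption support matters only at v8.0.50.52 or later, it is worth comparing the reported version against that minimum. A minimal shell sketch (the "have" value below is an example; substitute the version that gsk8ver reports):

```shell
# Compare a dotted GSKit version string against the 8.0.50.52 minimum.
min=8.0.50.52
have=8.0.50.55   # example value; use the version gsk8ver reports
# Numeric sort on each dotted field; the last line is the newer version.
newest=$(printf '%s\n%s\n' "$min" "$have" | sort -t . -k1,1n -k2,2n -k3,3n -k4,4n | tail -1)
if [ "$newest" = "$have" ]; then
    echo "OK: $have meets the $min minimum"
else
    echo "Too old: $have (need $min or later)"
fi
```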

What This Means

In many cases (such as SSL/TLS over-the-wire communication), products using the proper version of GSKit on Solaris/SPARC will automatically take advantage of hardware encryption. Situations with larger client-server packets will benefit more than those with small packet sizes.  

This will allow these products to make use of the increased security that encryption offers with extremely low performance overhead (something that is not possible with software-only crypto or hardware crypto on other platforms).

Because each of these IBM products has specific use cases, we'll cover more details for each in future blogs.

About

Application tuning, sizing, monitoring, porting on Solaris 11
