Friday Oct 10, 2014

Tomcat and Tuxedo - Perfect together

Tuxedo provides numerous integration options for working with existing applications and other technologies.  For example, the most recent release of SALT provides RESTful Web services for your Tuxedo services without writing or changing a line of code.  At Oracle Open World last week we demonstrated how you can leverage the Spring Framework and Tuxedo transaction management capabilities inside the Tuxedo Java server.  In this post I'll show how one can run Tomcat inside the Tuxedo Java server and utilize Tuxedo services directly from Tomcat.  Although the example is trivial (it simply calls the Tuxedo TOUPPER service from the simpapp sample application), it illustrates the minimal amount of effort necessary to run Tomcat inside the Tuxedo Java server and call a Tuxedo service.

To use the Tuxedo Java server, it needs to be configured in the Tuxedo UBBCONFIG file by adding a server entry similar to:


TMJAVASVR SRVGRP=GROUP1 SRVID=2
         CLOPT="-A -- -c TJSconfig.xml"
         MINDISPATCHTHREADS=2 MAXDISPATCHTHREADS=10

For this example, the minimum and maximum dispatch threads could be set to 1, as we're not hosting any Tuxedo services in this instance of the Java server.  These lines could be added directly to the UBBCONFIG file for the simpapp sample application.

Next a configuration file is needed for the Tuxedo Java server to set things like classes to load, classpaths, and the like.  Here is a copy of TJSconfig.xml as used by the above server definition:

<?xml version="1.0" encoding="UTF-8"?>
<TJSconfig version="2.0">
    <classpath-config>
        <classpath>${CATALINA_HOME}/bin/*.jar</classpath>
        <classpath>${CATALINA_HOME}/lib/*.jar</classpath>
    </classpath-config>
    <tux-server-config>
        <server-class name="MyTuxedoJavaServer"></server-class>
    </tux-server-config>
</TJSconfig>

This adds the Tomcat jar files to the classpath and adds a class called MyTuxedoJavaServer to the configuration.

Let's look at the source for MyTuxedoJavaServer and see how this class allows us to start up Tomcat.

import com.oracle.tuxedo.tjatmi.TuxedoJavaServer;
import com.oracle.tuxedo.tjatmi.TuxException;
public class MyTuxedoJavaServer extends TuxedoJavaServer {
    private TomCatThread tc = null;
    public MyTuxedoJavaServer() {
        // Create the thread that will be used to bootstrap Tomcat
        tc = new TomCatThread();
        return;
    }
    public int tpsvrinit() throws TuxException {
        // Startup Tomcat
        tc.start();
        return 0;
    }
    public void tpsvrdone() {
        // Shutdown Tomcat
        tc.shutdown();
        return;
    }
}

The constructor for the class creates an instance of the thread that will be used to bootstrap Tomcat. The only other two methods are the standard startup and shutdown methods called by the Tuxedo Java server at server startup and shutdown.  These methods simply start and shutdown the Tomcat thread.

Next let's look at the TomCatThread class that is used to bootstrap Tomcat.  It looks like: 

import org.apache.catalina.startup.*;

public class TomCatThread extends Thread {
    // volatile because run() and shutdown() execute on different threads
    private volatile Bootstrap bs = null;

    public void run() {
        try {
            bs = new Bootstrap();
            bs.setCatalinaHome("/home/oracle/projects/tc/apache-tomcat-7.0.54");
            bs.start();
            System.out.println("After starting Tomcat thread.");
        } catch (Exception e) {
            System.out.println("Received exception: " + e);
        }
    }

    public void shutdown() {
        try {
            if (bs != null) {   // run() may not have gotten this far yet
                bs.stop();
                bs.destroy();
            }
        } catch (Exception e) {
            System.out.println("Received exception while shutting down Tomcat thread: " + e);
        }
    }
}

This class simply extends the Thread class and in its run method creates an instance of the Tomcat bootstrap, sets the home directory for Tomcat, and starts Tomcat running.  The shutdown method just stops the Tomcat embedded server.

Next we need a helper class to help keep track of the Tuxedo context that will be associated with each Tomcat thread.  To make a Tuxedo service call, or most other Tuxedo calls, a thread needs to be associated with an application context in Tuxedo.  This is done using the tpappthrinit() method that creates a new Tuxedo context and associates it with the thread.  We'll use thread local storage to store the created context to be used whenever the thread needs to make a Tuxedo call.  I've used a class called TuxedoThreads to maintain this association:

package todd;
import com.oracle.tuxedo.tjatmi.TuxATMITPException;
import com.oracle.tuxedo.tjatmi.TuxAppContext;
import com.oracle.tuxedo.tjatmi.TuxAppContextUtil;
public class TuxedoThreads {
    private static final ThreadLocal<TuxAppContext> userThreadLocal = new ThreadLocal<TuxAppContext>();

    public static void set(TuxAppContext tc) {
        userThreadLocal.set(tc);
    }

    public static TuxAppContext get() throws TuxATMITPException {
        TuxAppContext tc = userThreadLocal.get();
        if (null == tc) {
            // No Tuxedo context for this thread yet, so create one
            TuxAppContextUtil.tpappthrinit(null);
            tc = TuxAppContextUtil.getTuxAppContext();
            userThreadLocal.set(tc);
        }
        return tc;
    }
}

To access the thread's Tuxedo context, we'll simply call the get() class method, and if the thread hasn't already created a Tuxedo application context, it will create one and store it in thread local storage.  Otherwise it just returns the previously created Tuxedo context.  One thing to keep in mind is that each thread Tomcat uses will eventually end up with a Tuxedo context.  This means that you need to allow for this in the MAXACCESSERS setting in the UBBCONFIG file.
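As a hypothetical illustration, if Tomcat's connector were allowed up to 200 worker threads, the RESOURCES section of the UBBCONFIG file would need enough accessor slots for those threads plus the native clients and servers in the rest of the application.  The values below are invented for illustration only:

```
*RESOURCES
IPCKEY       123456
MAXACCESSERS 300     # 200 Tomcat worker threads + native clients + servers
MAXSERVERS   50
MAXSERVICES  100
```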

All that's left now is to use the TuxedoThreads class either directly in a JSP or servlet, or in another Java class that they call.  I'll opt for the latter as it makes the JSP easier to read.  So here is the MyTest Java class that makes the call to the Tuxedo TOUPPER service:

package todd;
import com.oracle.tuxedo.tjatmi.*;
import weblogic.wtc.jatmi.TypedString;
public class MyTest {
    public String toUpper(String inStr) throws TuxATMITPReplyException,
            TuxATMITPException {
        // Get the thread's Tuxedo context
        TuxAppContext tc = TuxedoThreads.get();
        // Create the Tuxedo request buffer and call the Tuxedo service
        TypedString req = new TypedString(inStr);
        TuxATMIReply rply = tc.tpcall("TOUPPER", req, 0);
        TypedString rplyData = (TypedString) rply.getReplyBuffer();
        return rplyData.toString();
    }
}
}

The class has a single method that takes a Java String, creates a Tuxedo TypedString buffer, and calls the Tuxedo TOUPPER service.  It then returns the contents of the reply buffer to the caller.  Here is a trivial JSP that uses this class to uppercase a string:

<%@ page import="todd.*"%>
<%@ page language="java" contentType="text/html; charset=UTF-8"
    pageEncoding="UTF-8"%>
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<title>Test of Tuxedo service call</title>
</head>
<body>
    <% MyTest foo = new MyTest(); %>
    Hello World uppercased is <%= foo.toUpper("Hello World") %>
</body>
</html> 

The JSP simply creates an instance of the MyTest class and uses the toUpper method to get the upper case version of Hello World.

Wednesday Oct 01, 2014

Why is old bad?

I keep coming across situations where someone tells me that Tuxedo is old as though that is a bad thing.  I'm old, well, maybe older, and it seems fine to me.  I like to think I'm like fine wines and single malt scotches that get better with age.  So why is old necessarily a bad thing?

I believe it comes from the notion that old technology has been replaced by something newer.  In a sense this is certainly true.  I don't see a lot of 8 track players these days because they were replaced by cassette tapes, which were replaced by CDs, which were replaced by digital audio players.  Yet there are purists who contend vinyl records still offer the best sound quality, although my ears certainly can't confirm that.

Most people would argue that Unix is modern technology, yet Unix was first developed in 1969, making it one of the oldest technologies currently in use.  So why is technology that is 45 years old considered modern?  I believe it is because Unix has kept up with the times.  It has adapted to changes in hardware, operating system design, programming styles, and the like.  Tuxedo was first developed in 1982 and likewise has kept up with changes in hardware, application server design, programming languages, and communication techniques.  So where is the difference?

For example, in the 1990s messaging, CORBA, and multi-threaded programming were all the rage.  Tuxedo adopted messaging, CORBA, and multi-threading.  In the 2000s SOAP based web services, the Service Component Architecture, and dynamic programming languages such as Python, Ruby, and PHP became popular, and Tuxedo adapted to support those.  More recently Oracle introduced Engineered Systems, where the hardware and software were designed to work together to optimize performance.  Tuxedo has been updated to leverage these engineered systems to provide enhancements that can dramatically improve application performance.  Recent releases of Tuxedo now offer support for developing applications in Java, support for RESTful web services, and tight integration with Oracle Real Application Clusters.

The above may make it sound like Tuxedo lagged behind the technology curve, yet nothing could be further from the truth.  The Open Group's standard for distributed transaction support known as XA primarily came from Tuxedo.  Service Oriented Architecture, or SOA, has also been all the rage over the last decade, yet Tuxedo has been SOA based since its inception.  Everything in Tuxedo is a service.  This has allowed Tuxedo and Tuxedo applications to adopt newer technologies, often with no application changes required.  A service written 20 years ago in C can transparently be used as a SOAP or REST web service with absolutely no changes to the service implementation.  That same service can also be called from Python and appears to the Python developer as just another method.  Want to re-implement that C service in Java without impacting any existing usage of that service?  No problem.

Another word for old that has mixed connotations is mature.  Like old, it can be a bad thing if it means something has remained unchanged.  However, in the world of enterprise software maturity is usually a good thing, as it brings with it stability.  Who puts a version 1.0 product into production?  I believe there are few products on the market that have as low a defect rate as Tuxedo.  This is largely because the code has been tested extensively by Oracle and used by thousands of customers over the last 32 years.  As one customer stated in an Oracle Open World presentation a few years ago, in 15 years of using Tuxedo they never had a production outage.  That's stability.

So I'm left wondering why is old bad?

Friday Sep 19, 2014

Tuxedo at Oracle Open World 2014

The Tuxedo team will be present in force at Oracle Open World 2014.  We have 7 sessions directly related to Tuxedo presented by engineers, customers,  architects, and product management.  Please add them to your schedule builder so you don't miss them.  They are:

Monday 2:45-3:30PM Moscone South 304
Oracle Tuxedo 12.1.3: What’s New and Why You Should Care [CON8261]

Come hear Frank Xiong, VP for the Tuxedo product family, describe what we've been working on.  He'll be joined by Deepak Goel, Tuxedo Product Manager, and Anastasio Garcia, Middleware, SOA & Delivery Manager from Telefónica España.

Monday 4:15-5:15PM Hotel Nikko - Nikko Ballroom III
Simplify and Optimize Oracle Tuxedo Application Deployments and Management [HOL9448]

Jared Li, Tuxedo development manager, and Chris Guo, Principal Member of Technical Staff, will conduct a hands-on lab using Oracle TSAM Plus, Oracle Enterprise Manager, and Oracle Business Transaction Management to monitor, manage, and administer Tuxedo applications.

Monday 5:45-6:45PM Hotel Nikko - Nikko Ballroom III
Use Java, the Spring Framework, and Oracle Tuxedo to Extend Existing C/C++/COBOL Apps [HOL9447]

A cast of thousands (Maggie Li, Chris Guo, Maurice Gamanho, and myself) will conduct a hands-on lab demonstrating how easy it is to use Java and Spring to develop and extend Tuxedo applications.

Tuesday 10:45-11:30AM Moscone South 200
Oracle Tuxedo Makes It Easy to Develop Composite Apps with Java, C, C++, and COBOL [CON8228]

Jared Li and Deepak Goel will present ways customers can develop composite applications that leverage the performance of C/C++/COBOL while still allowing the rapid creation of business logic in Java, all in a single environment.

Tuesday 7:00-7:45PM Moscone South 301
Oracle Tuxedo Monitoring and Management: Birds-of-a-Feather Meeting [BOF9641]

Come to a Birds of a Feather session where Deepak Goel, Mark Rakhmilevich, and I will share monitoring and management best practices and host an open discussion about how to best manage, monitor, and administer Tuxedo applications.

Wednesday 3:30-4:15PM Moscone South 200
The ART and Practice of Mainframe Migration and Modernization [CON8229]

Mark Rakhmilevich - Sr. Director Product Management/Strategy, Rui Pereira - Principal Sales Consultant, and Jeffrey Dolberg - Senior Principal Product Manager, will describe how customers are migrating their mainframe CICS, IMS and batch applications from costly IBM mainframe environments to open systems with Tuxedo and Tuxedo ART.

Thursday 9:30-10:15AM Marriott Marquis - Salon 14/15
Management and Monitoring of Oracle Tuxedo: Integrated, Automated [CON8273]

I will be presenting on the new features in TSAM Plus 12c that can be used to efficiently manage, monitor, and administer Tuxedo applications.  I will cover the recent integration of the Tuxedo observer for BTM, and how to decide on the right tool and strategy to address common performance and availability issues. 

And as always please stop by the Tuxedo booth in the Oracle DEMOgrounds in Moscone South.  We love to see customers and answer their questions.  This is your chance to meet product developers, product managers, engineering managers, and architects all focused on Tuxedo!

Hope to see you there!!

Todd Little
Oracle Tuxedo Chief Architect 

Saturday Mar 15, 2014

High Availability Part 6

This concludes for now my posts on improving the MTBF, just one of the two major factors in improving the availability of a system.  In my next post I’ll start to cover ways to decrease the MTTR, the other factor in determining availability.

Thursday Jan 23, 2014

High Availability Part 5

Tuxedo provides an extremely high availability platform for deploying the world's most critical applications.  This post covers some of the ways Tuxedo supports redundancy to improve overall application availability.  Upcoming posts will explain additional ways that Tuxedo ensures high availability, as well as ways to improve the availability of a Tuxedo application.

Wednesday Jan 15, 2014

High Availability Part 4

To Err is Human; To Survive is High Availability

In this post I’d like to look at the various causes of unavailability or outages. The most obvious, although often overlooked, is scheduled system maintenance. Whether that is included in your measurement of availability depends upon the stakeholders for a system or application. The ideal systems have no scheduled maintenance that makes the system unavailable. That isn’t to say they don’t receive maintenance, but that the maintenance doesn’t cause an outage. This can be done via rolling upgrades, site switchovers, etc. For now it suffices to say that this type of down time is intentional, known, and typically scheduled.

The interesting part comes in looking at other causes of unavailability, in particular those caused by failures. The most commonly thought of failure is that of a hardware failure such as a disk drive failing, or a server failing. These failures tend to be obvious and easily remedied. Most people then guess that software failures make up the next significant portion of failures. But as is all too often the case, the most common failures in highly available systems are those caused by people. Estimates place hardware failures at around 10% of the causes of an outage. This low percentage is largely due to the ever improving MTBF of hardware. Software is estimated to cause about 20% of outages for highly available systems. The remaining 70% of outages are attributable to human action, and increasingly these actions are intentional, i.e., purposeful interruptions of service for malicious intent such as denial of service attacks.

To give an example, a study was done on replacing a failed hard drive in a software RAID configuration. A seemingly simple task, yet a surprising number of cases of replacing the wrong drive occurred the first few times an engineer was asked to repair the system. This indicates that putting procedures in place to repair a system isn’t adequate; actually performing the procedures several times is needed to eliminate human error. More importantly, it points out the need to eliminate human intervention as much as possible, as any human intervention, whether for normal operation or for remediating a failure, has a significant possibility of being done incorrectly. That incorrect intervention can be relatively catastrophic: in the study above, replacing the wrong drive caused a complete loss of data in some instances.

So what is the takeaway from this information? Minimize or eliminate human intervention as much as possible in order to minimize outages attributable to human error. Typically this means automating as much as possible any necessary steps to resume normal operation after a failure or even during normal operation. Every manual step taken by an administrator has some probability of causing an outage. It also suggests that repair procedures be well tested, preferably in a test environment that duplicates the production environment.

More on how Tuxedo can help solve these problems in my next entry.

Saturday Jan 11, 2014

High Availability Part 3

In my previous posts on High Availability I looked at the definition of availability and ways to increase the availability of a system using redundant components.  In this post I'll look at another way to increase the availability of a system.  Let’s go back to the calculation of availability:

Availability = MTBF / (MTBF + MTTR)

Based upon this formula, we can see that if we can decrease the MTTR, we can increase the overall availability of the system. For a computer system, let’s look at what makes up the time to repair the system. It includes some time that may not be obvious, but is in fact extremely important. The timeline for a typical computer system failure might look like:

  1. Normal operation
  2. Failure or outage occurs
  3. Failure or outage detected
  4. Action taken to remediate the failure or outage
  5. System placed back into normal operation
  6. Normal operation

Most people only consider item (4) above, the time taken to remediate the outage. That might be something like replacing a failed hard drive or network controller. It could even be as simple as reconnecting an accidentally disconnected network cable, a 30 second repair. But the MTTR isn't 30 seconds. It’s the time included in (3), (4), and (5) above. For the network cable example, the amount of time taken in (3) will depend upon network timers at multiple levels and could be many minutes if just relying on the operating system network stack. The time taken for (4) may be as low as the 30 seconds needed to reconnect the cable, although finding the cable might take a bit longer than 30 seconds. The time for (5) again depends upon the service resumption steps such as re-establishing a DHCP address, reconnection of applications or servers, etc. So while on the surface the MTTR may appear to be 30 seconds, the actual time could be many minutes, especially in the extreme case where systems, servers, applications, etc., need to be restarted or rebooted manually to recover.
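As a hypothetical illustration of how much phases (3) and (5) matter, the sketch below compares the availability implied by the 30 second repair alone with that implied by a full detect-repair-resume cycle of 10 minutes.  All numbers are invented, and the class and method names are mine:

```java
public class MttrExample {
    // Availability as a fraction, from the standard MTBF / (MTBF + MTTR) formula
    static double availability(double mtbfHours, double mttrHours) {
        return mtbfHours / (mtbfHours + mttrHours);
    }

    public static void main(String[] args) {
        double mtbf = 1000.0;                              // hours between failures (illustrative)
        double repairOnly = 30.0 / 3600.0;                 // the "30 second" repair, in hours
        double fullMttr = (300.0 + 30.0 + 270.0) / 3600.0; // detect + repair + resume = 10 minutes

        System.out.printf("Repair-only availability: %.8f%n", availability(mtbf, repairOnly));
        System.out.printf("Full-cycle availability:  %.8f%n", availability(mtbf, fullMttr));
    }
}
```

The twenty-fold difference in effective MTTR translates directly into a twenty-fold difference in expected downtime.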

So how does this impact system design for highly available systems? It indicates that whatever can be done to decrease items (3), (4), and (5) above will improve overall system availability. The more of these steps that can be automated, the lower the MTTR one can achieve, and the higher the availability of the system. Too often the detection phase (3) is left up to someone calling a help desk to say they can’t access or use the system. Likewise, items (4) and (5) often require manual intervention or steps. When one wants to achieve 99.99% availability, manual repairs or remediation are going to make that very difficult to achieve.

More on the causes of failures in my next post.

Monday Jan 06, 2014

High Availability Part 2

To compute the availability of a system, you need to examine the availability of the components that make up the system.  To combine the availability of the components, you need to determine whether a component's failure prevents the system from being usable, or whether the system can still be available regardless of the failure. That sounds strange until you consider redundancy. In a non-redundant subsystem, if it fails, the system is unavailable. So in a completely non-redundant system, the availability of the system is simply the product of each component’s availability:

A(system) = A(1) × A(2) × ... × A(n)

A very simplified view of this might be:

     Client => LAN => Server => Disk

If we take the client out of the picture, as it really isn't part of the system, we at least need a network, a server, and a disk drive to be available in order for the system to be available. Let’s say each has an availability of 99.9%; then the system availability would be:

0.999 × 0.999 × 0.999 ≈ 0.997

or 99.7% available. That’s roughly equivalent to a day’s worth of outage a year. So although each subsystem is only unavailable about 9 hours a year, the three combined end up being unavailable for over a day. As the number of required subsystems or components grows, the availability of the overall system decreases. To alleviate this, one can use redundancy to help mask failures. With redundant components, where only one of them needs to be working for the system to function, the availability is determined by the formula:

A = 1 - (1 - A(component))^n

where n is the number of redundant components.

Let’s look at just the server component. If instead of a single server with 99.9% availability we have two servers, each with 99.9% availability, but only one of them is needed for the system to be available, then the availability of the server component increases from 99.9% to 1 - (0.001)^2 = 99.9999%, or six nines, just by adding an additional server. As you can see, redundancy can dramatically increase the availability of a system. If we also have redundant LAN and disk subsystems in the example above, instead of 99.7% availability we get about 99.9997% availability, or roughly a minute and a half of down time a year instead of over a day.

OK, so what does all of this have to do with creating highly available systems? Everything! What it tells us is that, all things being equal, simpler systems have higher availability. In other words, the fewer required components you have, the more available your system will be. And it tells us that to improve availability we can either purchase components with higher availability, or we can add some redundancy into the system. Buying more reliable or available components is certainly an option, although generally a fairly costly one. Mainframe computers are an example of this option. They generally provide better availability than blade servers, but do so at a very high premium. Using cheaper redundant components is typically much less expensive and can provide even better overall availability.
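As a sanity check, the series and redundancy arithmetic above can be sketched in a few lines of Java.  The 99.9% component availability is the illustrative figure from the example; the class and method names are mine:

```java
public class RedundancyMath {
    // Availability of n components in series: all must be up
    static double series(double a, int n) {
        return Math.pow(a, n);
    }

    // Availability of n redundant components: only one must be up
    static double parallel(double a, int n) {
        return 1.0 - Math.pow(1.0 - a, n);
    }

    public static void main(String[] args) {
        double a = 0.999; // 99.9% per component

        System.out.printf("LAN + server + disk in series: %.6f%n", series(a, 3));   // ~0.997
        System.out.printf("One redundant pair:            %.6f%n", parallel(a, 2)); // ~0.999999
        System.out.printf("Three redundant pairs:         %.6f%n", series(parallel(a, 2), 3));
    }
}
```

Note how three redundant pairs in series still beat a single non-redundant chain by more than two orders of magnitude of downtime.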

More on high availability in my next post. 

Thursday Jan 02, 2014

High Availability

As companies become more and more dependent upon their information systems just to be able to function, the availability of those systems becomes more and more important.  Outages can cost millions of dollars an hour in lost revenue, let alone the potential damage to a company’s image. To add to the problem, a number of natural disasters have shown that even the best data center designs can’t handle tsunamis and tidal waves, causing many companies to implement or re-evaluate their disaster recovery plans and systems. Practically every customer I talk to asks about disaster recovery (DR) and how to configure their systems to maximize availability and support DR. This series of articles will contain some of the information I share with these customers.

The first thing to do is define availability and how it is measured. The definition I prefer is that availability represents the percentage of time a system is able to correctly process requests within an acceptable time period during its normal operating period. I like this definition as it allows for times when a system isn’t expected to be available, such as during evening hours or a maintenance window. That being said, more and more systems are expected to be available 24x7, especially as more and more businesses operate globally and there are no common evening hours.

Measuring availability is pretty easy. Simply put, it is the ratio of the time a system is available to the time the system should be available. I know, not rocket science. While it’s good to measure availability, it’s usually better to be able to predict availability for a given system to determine if it will meet a company’s availability requirements. To predict availability for a system, one needs to know a few things, or at least have good guesses for them. The first is the mean time between failures, or MTBF. For single components like a disk drive, these numbers are pretty well known. For a large computer system the computation gets much more difficult. More on the MTBF of complex systems later. The next thing one needs to know is the mean time to repair, or MTTR, which is simply how long it takes to put the system back into working order.

Obviously the higher the MTBF of a system, the higher availability it will have, and the lower the MTTR of a system, the higher the availability of the system. In mathematical terms the system availability in percent is:

Availability = MTBF / (MTBF + MTTR) × 100

So if the MTBF is 1000 hours and the MTTR is 1 hour, then the availability would be 1000/1001 ≈ 99.9%, often called 3 nines. To give you an idea of how much down time in a year equates to a given number of nines, here is a table showing various levels or classes of availability:

Availability   Total Down Time per Year   Class or # of 9s   Typical application or type of system
90%            ~36 days                   1
99%            ~4 days                    2                  LANs
99.9%          ~9 hours                   3                  Commodity Servers
99.99%         ~1 hour                    4                  Clustered Systems
99.999%        ~5 minutes                 5                  Telephone Carrier Servers
99.9999%       ~1/2 minute                6                  Telephone Switches
99.99999%      ~3 seconds                 7                  In-flight Aircraft Computers

As you can see, the amount of allowed downtime gets very small as the class of availability goes up. Note though that these times are assuming the system must be available 24x365, which isn’t always the case.
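The downtime figures in the table follow directly from the availability percentage.  A short Java sketch (the class and method names are mine) converts one to the other, assuming 24x365 operation:

```java
public class NinesTable {
    // Downtime in hours per year for a given availability percentage, 24x365 operation
    static double downtimeHoursPerYear(double availabilityPercent) {
        return (1.0 - availabilityPercent / 100.0) * 365.0 * 24.0;
    }

    public static void main(String[] args) {
        double[] levels = { 90.0, 99.0, 99.9, 99.99, 99.999 };
        for (double a : levels) {
            System.out.printf("%9.3f%% -> %8.2f hours of down time per year%n",
                    a, downtimeHoursPerYear(a));
        }
    }
}
```

For example, 99.9% availability works out to about 8.76 hours a year, matching the "~9 hours" row in the table.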

More about high availability in my next entry. 

Thursday Sep 05, 2013

Tuxedo vs MQ Series or other MOMs - No comparison

Tuxedo is the only multi-language enterprise application server that can scale to hundreds of thousands of requests per second, deliver end-to-end service response times under 30 microseconds, and regularly provide 99.999% availability or better.  Whether you want to build extreme performance applications in C or C++, migrate existing CICS or IMS COBOL applications to distributed systems, or build applications using a variety of implementation languages, each selected based upon its suitability for the programming problem at hand, Tuxedo offers by far the best option to do so in the least amount of time.

Friday Aug 02, 2013

Integrating Tuxedo Global Transactions across Web Services

Global Transactions

A global transaction is a series of service calls in which the services involved write to a resource (typically updating or creating a record in a database), and either all updates complete or none do, so that no inconsistency exists.

For example, imagine performing a balance transfer from one account to another, and that the information pertaining to those accounts is stored in two different databases. The succession of service calls would be as follows:

  • withdraw amount from database 1,

  • deposit amount to database 2,

  • commit (withdrawal and deposit become effective and are reflected in future balance displays).

Applications running on Oracle Tuxedo, combined with a database resource such as Oracle Database, can guarantee what is called in computer science Atomicity, Consistency, Isolation, and Durability (the ACID properties).

Web Services

In a world that is more and more connected, Web Services and SOAP standards have been developed to address the need to exchange information regardless of the system on which it is available. A Web Service is a “public” interface to a business operation that is exposed in a standardized way.

Other standards are developed as needs arise, such as WS-Addressing, WS-ReliableMessaging or WS-Security, and software vendors implement those in order to provide more features.

Such features are usually advertised in service interfaces so that provider and consumer can agree on levels of functionality and automatically adjust interactions. For instance, a service provider may offer a secure version of its services but still allow non-secure consumers to see and use a scaled-down version of the same services, even though they do not implement the full stack of security standards.

The standard that combines Global Transactions and Web Services is WS-AtomicTransaction or WS-AT. Consider the example below:


Each of the different actors in this use-case may be housed in completely different organizations, with their own software, networks and databases. Using Web Services standards ensures that the applications will communicate with each other despite potentially using different software vendors, having different software life-cycles and so on.

The SALT gateway is a Tuxedo system process that adds Web Services support to Tuxedo applications. Tuxedo services can be exposed as Web Services, or Tuxedo client programs can invoke Web Services seamlessly, that is by making it seem like the Web Services are simply other Tuxedo services.

In that spirit, integrating Tuxedo services with Web Services Atomic Transactions is as simple as changing some elements of configuration:

  • Add a transaction log so that a record of prepared transactions is kept; in the case of a failure, those in-flight transactions can then be resolved, usually rolled back but in some cases committed.

  • In the Tuxedo-to-external Web Service direction, associate a standard policy descriptor to instruct the SALT gateway on what to do when a transaction propagation is requested: mandatory or optional propagation, or no propagation at all (no policy present). This policy file will look as follows:

<?xml version="1.0"?>
<wsp:Policy wsp:Name="TransactionalServicePolicy"
    xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy"
    xmlns:wsat="http://docs.oasis-open.org/ws-tx/wsat/2006/06">
  <wsat:ATAssertion wsp:Optional="true"/>
</wsp:Policy>

When exposing a Tuxedo service as a Web Service, the SALT gateway will generate the proper WSDL containing the WS-AT capabilities. A WS-AT transaction will propagate into Tuxedo and the remote side will coordinate it.

When invoking a Web Service, the assertion will be contained in the remote WSDL; the SALT utilities used to import the Web Service configuration process it automatically and generate a WS-AT policy file such as the one above. When a transaction is then started on the Tuxedo side, it can be propagated to the outside and, in this case, coordinated by Tuxedo.
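As a sketch of that import step, SALT's wsdlcvt utility converts a remote WSDL into the corresponding SALT artifacts (the URL and output name below are hypothetical):

```text
# Generate the WSDF, FML32 field tables and XML schemas from the remote WSDL:
wsdlcvt -i http://remote.host/stockquote?WSDL -o stockquote
```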

Existing applications can thus be extended to Web Services, and new ones developed, taking advantage of WS-AT by way of the SALT gateway.

Summary

For Oracle Tuxedo, Oracle SALT provides a native Web Services implementation that ties global transactions and Web Services together.

Oracle Tuxedo users are already accustomed to the scalability and high availability of their applications. Oracle SALT brings Web Services interoperability to Oracle Tuxedo, and does so in a configuration-driven manner: it is not necessary to modify existing applications or develop new ones in order for them to interoperate with Web Services.

Tuesday Jul 30, 2013

Using Tuxedo application service version with Oracle SALT

To expand on this previous entry, here are some more details on how to use application service version with Web Services through the Oracle SALT gateway.

Using Tuxedo application service version in conjunction with Tuxedo services exposed as Web Services

  • The GWWS gateway reads REQUEST_VERSION and VERSION_RANGE from the UBBCONFIG;
  • calls to the actual Tuxedo service are made with the REQUEST_VERSION inherited from that configuration;
  • if different settings are needed, for example to route traffic from a specific gateway to specific services, another gateway instance can be configured and started in a group with a different REQUEST_VERSION value.

Example (UBBCONFIG excerpt):

...
*GROUPS
GROUP1
        LMID=L1 GRPNO=1 VERSION_RANGE="1-2"
GROUP2
        LMID=L1 GRPNO=2 VERSION_RANGE="3-4"
GWWS_GRPV1
        LMID=L1 GRPNO=3 REQUEST_VERSION=1
GWWS_GRPV2
        LMID=L1 GRPNO=4 REQUEST_VERSION=2
...
*SERVERS
mySERVER SRVGRP=GROUP2 SRVID=30
...
GWWS SRVGRP=GWWS_GRPV1 SRVID=30 CLOPT="-A -- -i GW1"
GWWS SRVGRP=GWWS_GRPV2 SRVID=30 CLOPT="-A -- -i GW2"
...

In the example above, GWWS in group GWWS_GRPV1 inherits request version "1" from its UBBCONFIG settings, and therefore exposes services advertised by Tuxedo application servers that include "1" in their VERSION_RANGE, such as those in GROUP1. If a service exposed by this GWWS is actually performed by a server in GROUP2, the result is a TPENOENT error forwarded to the remote Web Services client.

Using this mechanism, it is possible to map different endpoints to services with different versions. Since versions are per-group, this is done by placing GWWS servers in their own groups, and either using proxy mapping in front of GWWS (via an Apache server or similar) or accessing the Web Services endpoints directly. For example, these settings would be added to the SALT deployment file and WSDF accompanying the UBBCONFIG above:

SALTDEP:

...
    <GWInstance id="GW1">
        <Inbound>
            <Endpoint use="http_port_v1"/>
        </Inbound>
    </GWInstance>
    <GWInstance id="GW2">
        <Inbound>
            <Endpoint use="http_port_v2"/>
        </Inbound>
    </GWInstance>
...


Service WSDF:

<wsdf:Definition>
    <wsdf:WSBinding id="svc_binding">
        <wsdf:Servicegroup id="svc_PortType">
            <wsdf:Service name="STOCK_QUOTE"/>
        </wsdf:Servicegroup>
        <wsdf:SOAP>
            <wsdf:AccessingPoints>
                <wsdf:Endpoint address="http://my.server:3331/quote" id="http_port_v1"/>
                <wsdf:Endpoint address="http://my.server:3332/quote" id="http_port_v2"/>
            </wsdf:AccessingPoints>
...
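The Apache proxy-mapping alternative mentioned earlier can be sketched with standard mod_proxy directives, mapping version-specific URLs onto the two gateway endpoints (host names and paths are hypothetical):

```text
# httpd.conf: route version-specific paths to the GW1 and GW2 endpoints
ProxyPass /quote/v1 http://my.server:3331/quote
ProxyPass /quote/v2 http://my.server:3332/quote
```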


Using Tuxedo application service version in conjunction with external Web Services imported into Tuxedo using SALT

  • Since a single GWWS instance cannot advertise more than one service with the same name, each version of such a service must be hosted in a different instance;
  • for that reason, the existing mechanism can be used directly: configure multiple GWWS instances, each in a *GROUPS entry with the appropriate VERSION_RANGE setting.

Example (UBBCONFIG excerpt):

...
*GROUPS
GROUP2
        LMID=L1 GRPNO=2 VERSION_RANGE="1-2"
GROUP3
        LMID=L1 GRPNO=3 REQUEST_VERSION=1 VERSION_RANGE="3-4"
...
*SERVERS
GWWS SRVGRP=GROUP2 SRVID=30
...
GWWS SRVGRP=GROUP3 SRVID=30
...


In the example above, Tuxedo programs (clients or servers) call an external Web Service that is exposed by the GWWS instances in both GROUP2 and GROUP3. Programs using version 1 or 2 are routed to the proxy service advertised by GWWS in GROUP2, and programs using version 3 or 4 to the one advertised by GWWS in GROUP3; each gateway may connect to a different remote endpoint.

Stay Connected

Follow Tuxedo on: LinkedIn, YouTube, Tuxedo Blog

Follow Cloud Application Foundation (CAF): Twitter, Blog

Thursday May 16, 2013

Sterci processes financial messages 7x faster while lowering TCO

Headquartered in Geneva, Sterci Group is a market-leading financial messaging solutions company with subsidiary divisions in London, Brussels, Toronto, New York, Paris, Riyadh, Singapore and Zurich. Sterci’s products and services provide banks, corporations and financial institutions with integrated business solutions for transactional banking, multi-bank connectivity, full data integration, reconciliation, cash management, zero balancing and market data management.

Sterci partners with Oracle to deliver mission-critical, best-in-class solutions its clients can depend upon. Many of its customers were running aging financial messaging switches on platforms such as IBM mainframes and HP Tandem systems, which are very expensive to support. Sterci's aim was to help those organizations lower their total cost of ownership, and it wanted an application server environment with transaction monitoring capabilities that was robust, high performing, easy to distribute, and widely supported in the market.

Oracle Tuxedo was an obvious fit. Tuxedo is widely distributed, widely used, mature, highly available and highly performing. With Oracle Tuxedo and Exalogic, Sterci went from processing half a million to 3.5 million financial messages per hour while lowering the total cost of ownership. Watch the video, Sterci Clients up to 7x Faster with Oracle Tuxedo, with Rob Kotlarz, Business Development Director, of Sterci to learn more.


Thursday May 02, 2013

The Realities of Rehosting – Four Customer Stories, by Mark Rakhmilevich, Product Mgmt, Strategy Director Product Development

Mainframe customers have options.  Not that mainframe vendors, like IBM, would tell them.  In fact, IBM recently put together a presentation on “The Reality of Rehosting” and, as you can guess, they weren’t enthusiastic about the notion of moving applications from the mainframe to open systems.  On the other hand, some mainframe users who presented at a recent Oracle OpenWorld are way more enthusiastic about their options when leveraging Tuxedo to migrate applications off the mainframe.  These are their stories on the “realities of rehosting.” 

Banco Bilbao Vizcaya Argentaria (BBVA)

BBVA, a global bank with 100,000 MIPS of IBM mainframe capacity deployed across Europe and the Americas, has begun rehosting its core banking transactions and other mainframe applications to Oracle Tuxedo and Oracle Database.  With over 3,000 MIPS already rehosted in 2012 and another 12,000 under way in 2013, the bank has crunched the numbers and estimated $1M/year in savings for every 1,000 MIPS of rehosted workload.  It relies on the robust Tuxedo and Oracle Database foundation, coupled with Tuxedo Mainframe Adapters for integration with IBM CICS and IMS, and GoldenGate for database synchronization, to run a hybrid core banking infrastructure with full security and global transaction coordination across rehosted transactions and those that are still on the mainframe.

Operating in many countries presents complex regulatory challenges for BBVA, including requirements for managing local customer data and transactions in-country. Using Tuxedo and Oracle Database, BBVA is able to deploy a small “bank-in-a-box” datacenter configuration in those countries that have this requirement. Speaking at Oracle Open World 2012, Antonio Gelart, head of BBVA's modernization program, stated that the bank's goal is zero MIPS – no mainframes – and that with the success of early rehosting projects they can see a clear path towards this goal.

Mazda

Mazda, long known as an innovator in the automobile industry, is right-sizing its IT infrastructure to meet the challenges of the current economic disruptions. Shedding its legacy mainframe environment, Mazda has chosen to migrate its cost accounting system off the mainframe to a Linux environment powered by Tuxedo and Tuxedo's Application Runtime (ART) for Batch.

Accomplishing this migration in about one year, Mazda has revamped some of its application programs in Java and successfully married the traditional mainframe batch framework provided by Tuxedo ART with Java programs.  Describing this migration project at Oracle Open World 2012, Masuhiro Yoshioka, Mazda's IT infrastructure manager, said that they chose Oracle Tuxedo for its strong reliability and availability characteristics, which are critical for Mazda's cost accounting system that feeds into quarterly and annual financial reporting. In addition to the significant cost savings expected from decommissioning one of its two mainframes, Mazda has seen significant performance improvements from parallelizing overnight batch processing across a distributed batch farm supported by Tuxedo's distributed batch framework.  Once acquired, the taste for rehosting stays strong: Mazda is now looking at rehosting an IBM IMS application from its last remaining mainframe to Tuxedo's Application Runtime for IMS.


Caja de Valores (CdV)

CdV, the IT arm of the Buenos Aires Stock Exchange, began its mainframe modernization quest in 2001 with a 3-year project to re-write about 4M lines of COBOL code in Java.  Fast forward to 2007 and, in the words of Alejandro Wyss, the CdV CIO, no critical subsystem had been migrated, the re-write budget had been overrun by 3X, and the project had achieved less than 30% of its total scope, while the COBOL code base grew to 6M LoC.

A new approach was required: one that moved the application functionality to a more flexible open systems infrastructure in a short timeframe and with low risk.  CdV determined that rehosting the applications to Oracle Tuxedo, where the application logic is preserved intact in COBOL and only technical APIs are adapted or emulated to run on a Linux or UNIX platform, was a more promising option.  Starting with the stock trading application as a pilot, the entire migration was accomplished in 20 months.  Deploying on Linux to achieve hardware vendor independence, CdV was able to leverage Tuxedo's built-in clustering capabilities to move to an application grid enabling an active/active fault-tolerant services infrastructure, while increasing throughput by 200% at a fraction of the mainframe cost.  Leveraging Tuxedo's standards-based integration options, CdV is able to reduce overall risk and cut time to market for new capabilities by 30%.

A Top 5 Global Bank

It's difficult to imagine a more critical banking system than a SWIFT financial messaging application.  The lifeblood of any major bank is its connectivity into the global financial fabric managed by SWIFT, which interconnects over 8,000 banks and many of their corporate customers. For a bank that is in the top 5 of SWIFT messaging volumes, an aging SWIFT financial messaging solution is a serious risk.

Setting criteria for a mainframe replacement solution that could consolidate disparate SWIFT messaging systems and perform at 10x the current system's throughput, the bank embarked on a series of performance benchmarks.  The clear winner: a Tuxedo-based GT Exchange application from Sterci, a long-term Tuxedo ISV specializing in the SWIFT financial messaging market, deployed on the Exalogic Elastic Cloud System. In fact, once the bank had seen its 4x throughput advantage over IBM AIX/pSeries (2.58M complex SWIFT messages/hr on Exalogic compared to 620K on IBM p750 servers), it decided to deploy across 2 countries and 4 datacenters using 8 quarter-rack Exalogic systems.  While this is a mainframe application replacement example rather than the rehosting of an existing application, it underlines the performance advantages that can be achieved through Tuxedo optimizations on Exalogic.

Migrating off expensive, inflexible mainframe systems to Tuxedo

These four customers are not alone in migrating off expensive and inflexible mainframe systems to Tuxedo-powered open systems infrastructure. They are just the most recent examples of mainframe migrations leveraging Tuxedo that demonstrate significant cost and risk reduction, and highlight the gains in performance, datacenter flexibility, and business agility customers can achieve using a Tuxedo-based migration approach.  In subsequent posts we'll highlight the technical details of these migrations and share best practices for migrating mainframe applications to Tuxedo.

More Information

Press Release:  Oracle Enhanced Mainframe Rehosting for Oracle Tuxedo 12c

Web Page:  Tuxedo page on oracle.com


Wednesday Apr 17, 2013

Increase the Availability of Your Tuxedo Applications and Improve IT Productivity with TSAM Plus--By Deepak Goel, Senior Director, Software Development

Find out how you can increase the productivity of your IT staff and the availability of your Tuxedo applications using Oracle Tuxedo System and Application Monitor Plus 12c (TSAM Plus 12c).  Check out the YouTube video by Todd Little, Managing Tuxedo Applications with TSAM Plus 12c and OEM CC12c.


TSAM Plus 12c is a management and monitoring solution for Tuxedo 12c applications.  It helps improve the performance and availability of Tuxedo applications and expedites problem resolution in both dev/test and production environments, while monitoring several domains at the same time.  TSAM Plus 12c has many features that help automate day-to-day operations such as resource deployment, scaling application nodes up and out, and service level management. This increases the productivity of IT staff, who no longer need to write scripts, move from one console to another, or correlate messages from one product to another in order to diagnose a critical production problem.

TSAM Plus 12c includes a plugin for Oracle Enterprise Manager Cloud Control 12c, which allows Tuxedo applications to be monitored and managed from the same console as other Oracle products, including WebLogic and Database.

TSAM Plus 12c Functionality can be broadly categorized as follows:

  1. Application Performance Management: TSAM Plus 12c greatly improves application performance by providing unique functionality to automatically detect performance bottlenecks, quickly diagnose these problems, and identify their root cause.
  2. Operations Automation: TSAM Plus 12c automates common manual and error-prone operations, allowing administrators to focus on more strategic initiatives. With TSAM Plus 12c, Tuxedo applications can be packaged, along with the required configuration artifacts, into a self-contained application package and stored in a central repository, ready for deployment to an existing domain, for interactively creating a new Tuxedo domain, or for adding nodes to an existing domain. Both physical and virtual environments are supported.  It is also much easier to change the configuration of Tuxedo applications in a production environment without restarting the application, thus avoiding costly downtime: a Tuxedo domain can be changed dynamically, in addition to being created from scratch. TSAM Plus 12c also helps with day-to-day operational tasks, such as manually starting and stopping applications and starting new instances of an application server.
  3. Service Level Management: TSAM Plus 12c helps IT organizations to achieve high availability, performance, and optimized service levels for their business services.

More Information

Datasheet:  Oracle Tuxedo System and Application Monitor

Web Page:  Tuxedo page on oracle.com


About

This is the Tuxedo product team blog.
