Wednesday Jan 14, 2015

Oracle MFT Part 1 - Transform Batch Files

I have been working with a customer looking at implementing an enterprise file transfer solution. This is very timely, as Oracle released Managed File Transfer (MFT) as part of the SOA 12c launch, and it is a great fit for this use case.

I'm going to put together a series of MFT posts to help customers implement some advanced use cases I have implemented in order to demonstrate the MFT capability.

So I don't reinvent the wheel, I should mention the following tutorials that have been released by product management. I highly recommend doing these tutorials before trying this post, as this blog assumes some familiarity.

Getting Started with MFT

MFT with SOA Suite

To complete this example I recommend you first work through these tutorials to become familiar with the basics, as these blog entries focus on more advanced use cases and assume a basic familiarity with MFT functionality.

Introduction

This first post is going to show how you can transform large batch files from a comma-delimited file to a positional file. I have not performed any tuning on my environment and have only tested up to 200MB without issue. Having said that, I have modified my JVM settings to the following:

-Xms2048m -Xmx4096m -XX:PermSize=1024m -XX:MaxPermSize=4096m
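Where you set these depends on how you start your servers; one minimal option (an assumption, since domain start scripts vary) is to export USER_MEM_ARGS before starting WebLogic:

    # USER_MEM_ARGS overrides the default memory arguments computed by the domain scripts
    export USER_MEM_ARGS="-Xms2048m -Xmx4096m -XX:PermSize=1024m -XX:MaxPermSize=4096m"
    $DOMAIN_HOME/bin/startWebLogic.sh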

If you find you start to hit some memory issues refer to the SOA Developers Guide which provides all the tuning tips for managing large files.

One of the new features of SOA Suite 12C is the introduction of BPEL templates. I will be utilising these to accelerate the development of this process.

If you didn't know by now, MFT is a file transfer tool. Having said that, it does not possess transformation capability. Therefore, if you want to transform files to a corporate standard or canonical model, you need to delegate the transformation to something else like SOA Suite or ODI. In this example I will transform a simple CSV file to a positional file using Oracle BPEL 12.1.3.

I want to have complete visibility of the process, so I'm implementing the Pass-By-Reference use case where MFT sends the file to BPEL for transformation as a reference. BPEL then transforms the file to a temporary location, then MFT sends the file to the correct location.

Using this methodology you can also resubmit documents at any stage in the process.

To complete this example you will need to complete the following prerequisites:

Prerequisites

Download the following file, which has all the resources required to complete the steps in this example.

An explanation of each file has been provided in the table below:

File Name                Description
address_book_200meg.zip  Input example
address_csv.xsd          Source xsd
address_fixedLength.xsd  Target xsd
MFTBatch.tmpl            The SOA Project Template that will be used to create the composite
mftTransferBatch.zip     The MFT service that is invoked from the SOA Composite once it has completed its transformation
mftTransform.zip         The MFT process that receives the file from the embedded FTP Server and passes it to the SOA Composite
ProcessBatchFile.tmpl    The custom activity that implements the chunked read so it can transform large files

  1.  Unzip this file to a temporary location on your development PC.

  2. Start JDeveloper and navigate to the following location

    Tools -> Preferences -> SOA -> Templates


    This will provide the location where you need to load the templates ProcessBatchFile.tmpl and MFTBatch.tmpl. If you want, you can create your own location or use the SOA MDS.



  3. Copy the 2 files ProcessBatchFile.tmpl and MFTBatch.tmpl from the resource file to the template location identified above.

Before I start to create the MFT configuration I need to have the BPEL process created as this will be the destination. If you want to build from scratch you can follow the tutorials posted above. In this example I will be utilizing templates to minimize the repetitive tasks.

Create BPEL Process 

  1. Create a SOA Application with the name MFTBatchProcessor.

  2. Create a SOA Project MFTBatchProcess, and choose the SOA Template MFTBatch downloaded above. If you don't see it, make sure the templates are loaded into the correct directory. To find the destination, in JDeveloper go to Tools -> Preferences -> SOA -> Templates. This will specify the location.



  3. This should create a composite that looks like this



  4. If you double click the BPEL process you will see most of the code in the tutorial above already completed for you. This template provides more functionality than is required, but I wanted a template that was generic for any type of load. In this example MFT is used as the embedded FTP Server, therefore the channel that will be used is the FileRefFile.

    In the BPEL process, under the AssignFileDirectory activity, drag in the custom activity template ProcessBatchFile. Once again, if the custom activity doesn't appear, check the template location.


  5. Click Finish to accept the defaults. This will load the ReadChunkFile configuration for the batch load. If the partner link for InvokeReadBatchFile doesn't appear, close the BPEL process and reopen it. Also, for some reason the labels don't appear; they should read Yes for the left-hand side and No for the right-hand side.



    You have basically done all the heavy lifting in 2 simple steps. All that is required now is to implement the transformation and make some small changes in the BPEL process that are unique to this service.

  6. When I configured the template I did not specify a format as this will be unique for each process. Therefore we must edit the ReadBatchFile file adapter. Go to the composite view and double click the ReadBatchFile reference and click Next to step 7. Ignore the previous steps as these variables are set dynamically by MFT.

    Uncheck the "Native format translation is not required" check box, import the address_csv.xsd, and select the element <Root-Element>.



  7. Click Next and Finish

  8. We have now completed the steps to read the file into BPEL from the directory reference sent by MFT. What we need to do now is transform the file and send the reference back to MFT. To do this we need to use the file adapter. We want to avoid loading large payloads into XML DOMs, which can cause memory issues.

    In the Composite add a File Adapter to the Reference swimlane.

  9. Name the adapter WriteBatchFile

  10. Click Next to Step 4 and Select the Write radio box.

  11. In step 5 specify a temporary file location that will store the transformed file. MFT will pick this file up from this location. It does not matter what you name this file as the template provides a unique name. It is also important that you select the Append to existing file checkbox. Remember the file location as you will need to update an Assign activity to reflect the same.



  12. Import the address_fixedLength.xsd as the target format and select the <Root-Element>



  13. Open the Assign activity AssignMFTVariables that is at the bottom of the BPEL process and edit the file location specified in step 11. This is the location where BPEL saves the transformed file for MFT to pick up. Make sure you only edit the part highlighted in blue; it must be prefixed with file://
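    For example, if the temporary location chosen in step 11 were /tmp/mft/stage (a made-up path for illustration), the assign value would read:

    file:///tmp/mft/stage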



  14. Wire the new File Adapter to the BPEL process; your composite should now look like this.



  15. I have placed an empty DO_TRANSFORMATION activity in the ProcessBatchFile custom activity. Delete this activity and replace it with 2 activities:

    XSLT Transform - TransformFixedLength
    Invoke - InvokeWriteBatch



  16. Link the InvokeWriteBatch activity with the partner link WriteBatchFile. Also specify a new input variable.



  17. Click the Properties tab and enter the 2 properties; jca.file.Directory should be the same as the file location specified in the WriteBatchFile configuration.
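    A sketch of the two property values on the invoke (jca.file.Directory comes from this post; treating jca.file.FileName as the second property is my assumption, since the template generates a unique file name at run time):

    jca.file.Directory = /tmp/mft/stage        (the step 11 location)
    jca.file.FileName  = (bound to the unique-name variable set by the template)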



  18. Now that the adapters have been configured we can now implement the Transformation. Double click the TransformFixedLength activity.

    Set the Source variable to the Output of the ReadBatchFile partnerlink. This is specified as a scope variable as it was included in the custom activity template.

    Set the Target to the Input of the WriteBatch partnerlink.



  19. This blog entry is not focusing on the Transformation so the mapping is very basic. The key point in the transformation is to specify the For-Each capability. At the end of the transformation it should look like this.



  20. The last step is to change the location of the MFT service that takes the file reference and sends it to the desired location. I will cover this in the following section. Until this step is done the deployment won't work.

Create the MFT Transfer Service

  1. Login to the MFT Console

  2. Select the Administration tab and select import/export

  3. Import the MFT file mftTransferBatch.zip from the archive location.



  4. This should create the transfer mftTransfer with a source and target. Click the Design tab to see if it exists.



  5. You may notice that the target has a location that does not suit your environment. This can be changed by going to the mftFileTarget Target. This location is the final destination where the completed transform file will reside.



Complete SOA Composite and Deploy

 As mentioned above, there was a final step in the SOA Composite that could only be completed once the MFT Transfer had been deployed. That deployment was done in the previous section.

  1. Open up JDeveloper and Open the MFTBatchProcess Composite.

  2. Open the mftBatchReference and click Next to step 3. Here you need to create a connection to your MFT Server. This is the same as any other application server connection. Make sure you specify the correct Domain and use the AdminServer port.



  3. Choose the MFT Source mftSOASource and click Finish.



  4. You can now deploy your SOA Composite.

  5. Once the BPEL process has been deployed you can change the number of rows read in each chunk. It is currently set to 100, which is very low for large files. In this step we will raise this value to 50,000. With the file I have provided in this example this is about a 5MB chunk. Other files you create will be different, i.e. this file has 2 million rows for a 200MB file. If you have larger rows you will reduce the chunk size accordingly. (If your JVM settings are still at the original settings you may get the error java.lang.OutOfMemoryError: GC overhead limit exceeded.)

    Login to the SOA EM console and navigate to the MFTBatchProcess composite.

    From the SOA Composite drop down button select Service/Reference Properties -> ReadBatchFile



  6. Change the ChunkSize parameter to 50,000, then Apply the changes.


Create the Transform MFT Service

This is the service that receives the file from the embedded FTP server and sends it to the SOA Composite created above.

  1. Login to the MFT Console

  2. Select the Administration tab and select import/export

  3. Import the MFT file mftTransform.zip from the archive location downloaded in the prerequisite.



  4. This should create the transfer mftTransform with a source and target. Click the Design tab to see if it exists.



  5. You will notice that the target mftSOATarget is pointing to the wrong location for the SOA Composite. This needs to be corrected to point to your environment.

    To get this location, login to the SOA EM Console, navigate to the MFTBatchProcess composite, and click the Test button

    Copy the location minus the ?WSDL on the end.



  6. Back in the MFT Console open the mftSOATarget and change the location for your environment.



  7. Make sure you save and Deploy the changes

Testing

The process is now complete and is now ready for testing.

  1.  Using your FTP client, connect to the MFT Embedded FTP server and change the directory to /mft. (Remember this is an MFT username, not an OS username)

  2. FTP the address_book_200meg.dat file to that location, for example:
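    A minimal command-line session might look like this (host, port, and credentials are placeholders for your embedded FTP server):

    ftp mfthost 7021
    ftp> cd /mft
    ftp> put address_book_200meg.dat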

  3. Once the FTP process has completed, you will see the process in action in the MFT Console.



  4. Once the BPEL service has completed the transformation you will see the service completed in the MFT Console. You will also notice that the file has grown. This is because the fixed-length format introduces padding spaces, making the file bigger.



  5. One of the nice features in MFT is the association MFT makes with the BPEL service. Click the link for the completed mftFTPEmbeddedSource; this will display the MFT service.



    Click the mftSOATarget and navigate down to see the Correlation ID link


    Click this link and this will bring up the BPEL service that performed the transformation. As you can see it did a bit of chunking.



  6. The final step is to check whether the file is in the correct format.

    From the Dashboard, if you click the completed mftSOASource link, then the mftFileTarget, you will get the information to find the file it created. In my instance it created the file mft_6.dat.



  7. Login to the OS and look to see if the file exists.

    [oracle@homer bin]$ cd /u01/oracle/interface/outbound/mft/mft
    [oracle@homer mft]$ ls -l mft_6.dat
    -rw-r-----. 1 oracle dba 298504514 Jan 16 11:18 mft_6.dat


  8. Now view the file to confirm the fixed-length layout, for example:
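    A quick way to eyeball the output, assuming the fixed-length records are newline-separated:

    [oracle@homer mft]$ head -2 mft_6.dat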



That is Part 1 completed. I hope you find this useful.

Wednesday Dec 03, 2014

Oracle MFT Remote SFTP

There are lots of examples out there around configuring MFT using the Embedded FTP server as the initiation of a transfer. I wanted to try out polling from a remote location, so I started looking into configuring an sFTP remote.

Configuring the remote location was a simple task using password authentication.

This all worked fine on my test environment, but when I took this configuration to the customer I got an error in the logs (found in mft_server1.out):

<SFTP Channel validation failed>
<SFTPChannel is null>

 I then went about testing it manually using my trusty WinSCP tool. Everything seemed to work. I then tested using the command line on the server. Using scp worked but if I used sftp I got the error:

subsystem request failed on channel 0
Couldn't read packet: Connection reset by peer


So obviously it was an environment issue. This was resolved using the following fix (I'm sure there are many other ways).

As the root user edit the file

/etc/ssh/sshd_config

find the line

Subsystem      sftp        /usr/lib/ssh/sftp-server

In my instance this was commented out. Add the line

Subsystem      sftp        internal-sftp

Save and restart the service

service sshd restart
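A quick manual test to confirm the subsystem is back (user and host are placeholders):

    sftp oracle@remotehost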

The remote server on MFT should now come to life and start processing.



Wednesday Nov 12, 2014

OSB 12.1.3 My First Service

This blog post is really for my reference, as I started playing with OSB 12.1.3 this week and noticed there are some tooling changes, especially in regards to xQuery. I had a few issues trying to run my OSB service when I used xQuery. This may have been a red herring, as there are other changes in 12.1.3 that I needed to come to grips with, i.e. the splitting of pipelines and proxy services, and the addition of adapters and transports in the one component palette. This blog post steps through the procedures I used to implement a simple OSB service with a web service Proxy Service and a DB Adapter business service. I will call out what I initially tried that failed, so you don't make the same mistakes.

  1. Firstly, create an OSB Application; it is your choice whether you create an OSB project with it or not.

  2. Create or use the OSB project created in step 1 and open up the composite overview.

  3. Right-Click the External Services swim lane and select Insert Adapters... -> Database. Configure the DB as you always have in SOA. As an optional extra you can deploy now to test that your business service works.

  4. Up until now there are no real issues; the tooling feels very similar to SOA Suite. The creation of the proxy service was slightly different to my experience in SOA Suite. When I develop an exposed service in SOA Suite I create the service then link it to a mediator, BPEL, etc. I followed the same process in OSB and created the Proxy service before the pipeline, and I think that is where I ran into my first issue. What I do now is create a Pipeline first and configure the proxy service at the same time.

    Before we can create a proxy service or pipeline we need to have an xsd to be used as the base for the Proxy service.

  5. Drag a pipeline from the component palette to the pipeline / split-join swimlane, and give the pipeline a name.

  6. Click the create WSDL icon



  7. The WSDL configuration is very similar to what you do in SOA Suite. I have provided a screen shot of mine. I have 1 xsd called product.xsd, and it has the request and response elements in that file.



  8. The pipeline config should look like this. In my initial development that did not work, I chose ws for web service. I'm unsure if this gave me issues down the track, but I would keep the transport set to http.



  9. Before I start to connect the pipeline to the business service I want to create the xQueries that will transform the Proxy service request and the DB select.

    Right-click a folder (I created an xquery folder) in the OSB Project and select XQuery File 1.0. (You will notice that OSB 12.1.3 does not support XQuery 2004 at design time, only at run time. Therefore, if you need to edit upgraded 11g OSB projects with XQuery you need to do this manually.)



  10. The creation of xQueries looks different from 11g. Don't stress; if you follow these steps you will soon be back in the comfort of mapping source and target schemas as we did in 11g, in just a few quick steps.

    Name the xQuery; I also name the function the same as the xQuery



  11. Click the plus sign to add a source schema. At first impression it looks like everything is greyed out and nothing is working. This is not the case; specify a name and click the pencil next to Sequence to specify the source schema.



  12. Once again the screen looks like nothing is enabled; this is expected. Click on the Schema Object Reference icon to select the schema.



  13. When I first attempted creating my first xQuery I selected the xsd under the Application Schema Files folder, as this is what you do in a SOA composite. Then I remembered that in OSB 11g you select the WSDL. The issue is that there are 2 folders presented here. Select the element under the Imported Schemas as presented below.



    I did try the element under the blank folder, but when I did this the xQuery map would not draw. Therefore, if you have WSDLs as the source and target, select the Imported Schemas. I have not tried this for WSDLs with the schema inline within the WSDL; you may have to select another folder.

  14. Your completed parameter should look something like this.



  15. The parameter should look like this



  16. Do the same for the target, repeating steps 12 - 15. In my instance I used a DB adapter; this creates the WSDL in the resources folder, and it also provides a concrete WSDL. When I chose the target I chose the actual WSDL, as the screenshot below shows.



  17. You have now completed the configuration of the xQuery; it should look something like this.



  18. Now it's just a matter of mapping from source to target. I'm not going to go into details here as all mappings are different.

  19. In this example I have a synchronous web service; I have mapped the request but now need to map the response. Create a response xQuery using steps 9 - 18.

  20. The final step is to configure the pipeline to route to the business service and assign all the variables.

    Open the pipeline and add a pipeline pair under the Proxy service.

  21. In the Request Pipeline, in Stage 1, add an Assign, and under that add a Replace. Do the same for the Response Pipeline. You will notice some red markers; these just mean that the actions have not been configured. We will do that in the following steps.



  22. Click on the Assign for the Request Pipeline and in the properties panel change the expression to an xQuery Resource.



  23. Choose the xQuery and assign the binding. It should look something like this.



  24. Now provide a variable that holds the mapped output, in my instance I created it as requestVar.



  25. In the Replace under the Assign, enter the values as per below. It may be difficult to see, but there is a '.' under the location text box.

  26. Do the same for the Response Pipeline.

  27. The last step in the process is to route to the business service. Under the Pipeline add a routing action and select the business service created in step 3.







  28. Save your project; you can now deploy your OSB service and test. If testing in the service bus console make sure you test with the pipeline, not the proxy service. Testing the pipeline, you see all the assigns and service calls as you did in 11g. Also introduced in 12C is the debugger in JDeveloper. You can set break points so you can step through the process for greater analysis of your code.

Have fun.

Tuesday Nov 11, 2014

Configure Client to Access API Catalog Import Export Utility

Oracle released API Catalog earlier this week, and it looks like a great addition to the Oracle API Management story.

I was keen to have a play and see how I could harvest and discover services. If you have had experience with Oracle Enterprise Repository you would know that to configure the detailed assets you launched a client via the Java Web Start Launcher. In order to launch this you need to ensure you have the right version of Java installed and that the proxies are set correctly.

The best place to start is to go to your Control Panel and double-click Java, select the Java tab and select View. Make sure you have version 7 installed and enabled. As you can see from the screenshot below, I have versions 7 and 6 installed.

When you launch the Import / Export utility you will be given the option to launch using the Java Web Start Launcher. If you have multiple Java versions installed, as I do, you will need to pick the right one.

It is hard to tell which file is which version, so it has to be trial and error. To know you have the right version, you will see the Java 7 splash screen. If you don't see this, choose another file.

Hopefully at this stage you are presented with this screen and you are good to go.


If not you may be presented with a login screen like below.

When you enter the correct details you may be presented with the following error: java.net.SocketException: Unknown Proxy Type: HTTP


If you get this error it means that your Java has not been configured correctly for your environment. Go back to the Control Panel and double-click Java as you did earlier.

Navigate to the General tab and select Network Settings.

If you need to go through a proxy server to get to the server, set the proxy server settings; otherwise select Direct.

Hopefully now you can start the client utility.

Friday Oct 03, 2014

Oracle SOA Suite DB Adapter for Ingres

As the Oracle DB adapter supports the ANSI SQL 92 language specification, it enables connectivity to a number of databases through a common framework. This post demonstrates connecting to an Ingres database via the Oracle DB adapter. It should also be noted that this blog can be applied to any 3rd party DB.

 The versions that are used here are:

  • Oracle SOA Suite 12.1.3
  • Ingres DB 10.1 (Linux)

If you have different versions the procedure to connect should be the same. The screenshots will be different though.

This blog post is broken into 2 stages: client connectivity via JDeveloper, and server connectivity via the Weblogic Console. For both, the JDBC driver file is required. For Ingres this can be found in the following location:

$INGRES_HOME/lib/iijdbc.jar

JDeveloper

In order to develop against a database you need client IDE connectivity to the database. With JDeveloper you can utilize the SQL Worksheet, providing a consistent development tool across all databases.

  1. Download the JDBC file iijdbc.jar and put it in the following location:

    %JDEV_HOME%\jdeveloper\jdev\lib

  2. Start up JDeveloper and create a new DB connection



  3. Enter in the following details, go to step 4 to configure the Driver Class



  4. When you select Generic JDBC Driver you will need to import the Driver Class; the screenshots provide the required information.



    When you click OK you should see something like this



  5. Now Test the connectivity.


You can now develop a service using the DB Adapter connecting to an Ingres database.

The next steps are for configuring Weblogic so the adapter will work at run time.

Configuring Weblogic

  1. On the server where SOA Suite is installed, copy the iijdbc.jar file to the following location:

    $WLS_HOME/server/lib

  2. Edit the WEBLOGIC_CLASSPATH to include the iijdbc.jar. This is done by editing the following file:

    $MW_HOME/oracle_common/common/bin/commEnv.sh

    append the following to the WEBLOGIC_CLASSPATH

    ${CLASSPATHSEP}${WL_HOME}/server/lib/iijdbc.jar
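    After the edit, the tail of commEnv.sh might contain something like this (a sketch; the existing WEBLOGIC_CLASSPATH assignment varies by installation):

    # append the Ingres JDBC driver to the WebLogic server classpath
    WEBLOGIC_CLASSPATH="${WEBLOGIC_CLASSPATH}${CLASSPATHSEP}${WL_HOME}/server/lib/iijdbc.jar"
    export WEBLOGIC_CLASSPATH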

  3. If the Weblogic is started you will need to restart in order for the jar file to be initialized.

  4. Login to the Weblogic console

  5. Navigate

    domain -> Services -> Data Sources

  6. If in production mode Lock & Edit the session in the Change Centre

  7. Click the New Button

  8. Enter in details to meet your requirements:

    Name: ingres
    JNDI Name: jdbc/ingres
    Database Type: Ingres



  9. Select the JDBC driver; I selected the XA version.



  10. On the transaction option page just select Next

  11. Enter in your connection details

    Database Name: iidbdb
    Hostname: hostname.oracle.com
    Port: 21071
    Username: ingres
    Password: *****



  12. Change the driver to the following, and test the connection:

    com.ingres.jdbc.IngresDriver
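    If the page also asks for a URL, the Ingres JDBC URL is built from the host, port, and database entered in step 11, e.g.:

    jdbc:ingres://hostname.oracle.com:21071/iidbdb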



  13. Click Finish, then go back into the configuration for Ingres and choose your targets

  14. Also on the Configuration tab, scroll down and expand the Advanced options. Make the following changes:

    Check the "Test Connections on Reserve" check box
    Enter "SQL SELECT 1 FROM iitables" in the init SQL text box.



  15. Save and Activate your changes.

  16. You have now established connectivity to Ingres; now you need to configure the adapter.

  17. Create a new Lock & Edit session in Weblogic console

  18. Navigate to

    Deployments -> DbAdapter -> Configuration -> Outbound Connection Pool -> New

  19. Select the javax.resource.cci.ConnectionFactory  radio box then Next



  20. Enter the JNDI name, and click Finish

    eis/DB/Ingres



  21. If this is the first DB Adapter you have created it will ask to create a deployment plan. Follow your standards for this and continue.

  22. Navigate back to the configuration for the Ingres Database

    Deployments -> DbAdapter -> Configuration -> Outbound Connection Pool -> eis/DB/Ingres

  23. Two changes are required:

    PlatformClassName:  org.eclipse.persistence.platform.database.DatabasePlatform
    XADataSourceName: jdbc/ingres



  24. Save and Activate the Changes

  25. The last step is to recompile the DB adapter to pick up all the changes.

  26. Create a new Lock & Edit Session

  27. Navigate to

    Deployments -> Check DbAdapter



  28. Click Update then Finish, then Activate the changes

  29. You are now complete; your code will connect to the Ingres database.

Monday Aug 04, 2014

Deploying OSB in a 12C Domain with BPM

When migrating your 11g OSB projects into 12C you may come up against the following error when deploying the WSDL:

Unknown protocol: servicebus

This typically happens if you have BPM installed in the same domain as OSB. In order to fix this, perform the following steps:

1. Log in to the WLS Administration Console
2. On the navigation tree on the left, expand services and click OSGI Frameworks
3. Click on the bac-svnserver-osgi-framework link
4. Click Lock & Edit
5. In the init properties text field at the bottom add

felix.service.urlhandlers=false

Make sure there are no spaces at the end.
6. Click Save and activate the changes, then restart the Weblogic server.

Wednesday May 15, 2013

Accessing VNC Console for ODA WLS Virtual Machines 2.5

When working on the ODA patch 2.5 there may be occasions where you need to VNC into the virtual machine to try and diagnose issues, because the standard oakcli doesn't provide enough information. I believe in 2.6 a new function has been introduced to make this easier (oakcli show vmconsole), but in 2.5 it doesn't exist.

In this example I have deployed WLS on the ODA, but one of the managed server machines is not coming up correctly, although the command oakcli show vm says all is OK.

To get a VNC console to the managed server VM I performed the following.

  1. You need to go to the VM repository. This is on either of the ODA_BASE machines. To find where this is, on the node 0 db vm run the following command:

    oakcli show vm

    NAME                                  MEMORY       VCPU            STATE           REPOSITORY

    OTD_ofm_domain_AdminNode_1              4096          2            ONLINE          odarepo2
    OTD_ofm_domain_AdminNode_2              4096          2            ONLINE          odarepo1
    OTD_ofm_domain_AdminServer              1024          2            ONLINE          odarepo2
    WLS_ofm_domain_AdminServer              2048          2            ONLINE          odarepo1
    WLS_ofm_domain_ManagedServer_1          6144          2            ONLINE          odarepo2
    WLS_ofm_domain_ManagedServer_2          6144          2            ONLINE          odarepo1


    The server I'm interested in is WLS_ofm_domain_ManagedServer_1, which is found on  odarepo2. This tells me I need to go to the repository on ODA_BASE Node 1 (I'm counting from 0). In my case this is called nlab-oda-pub2.

  2. Login as root to the ODA_BASE server that has the required repository. Change directory to the following:

    cd  /OVS/Repositories/odarepo2/VirtualMachines/WLS_ofm_domain_ManagedServer_1

    Please note that yours may differ, I'm connecting to the odarepo2 and vm WLS_ofm_domain_ManagedServer_1

  3. In this directory there should be a file vm.cfg. Open this up and add the following line at the top

    vfb = ['type=vnc,vncunused=1,vnclisten=0.0.0.0']



  4. Bounce the VM with the oakcli commands:

    oakcli stop vm WLS_ofm_domain_ManagedServer_1

    and

    oakcli start vm WLS_ofm_domain_ManagedServer_1

  5. Start a VNC session and point to the correct node. In my case I need to connect to the head machine on the second node (machine you install the ODA_BASE into). The port you need to connect to is 5901 as 5900 is taken by the ODA_BASE machine.
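    From a Linux desktop that can reach that node, this can be as simple as the following (hostname from my environment; display 1 maps to port 5901):

    vncviewer nlab-oda-pub2:1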



    This should pop up the console. In this instance I can see that my network address was already taken, which is why it didn't start correctly.

Enjoy.


Wednesday May 01, 2013

How to Enable Web Forms Manually

I have just upgraded my 11.1.1.6 BPM to 11.1.1.7 (PS6). There has been a lot of talk about this release as it provides lots of functionality in the BPM Composer. I was keen to have a look at the new features such as web forms, so I implemented a simple BPM process which utilises a web form. I got as far as creating the human task, but when I wanted to add a Web Form the green + was disabled.

I'm not sure if this was an issue caused by the upgrade or something I did wrong. After a bit of investigation, comparing against another instance that was working, I realised that the frevvo application was not deployed in the Weblogic Console. Once I manually deployed the frevvo application the web form was enabled. The location of the frevvo app is the following:

$MW_HOME/Oracle_SOA1/soa/applications/frevvo.ear
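If you prefer to script the deployment rather than use the console, something along these lines should work with weblogic.Deployer (admin URL, credentials, and target name are placeholders for your environment):

    # source the WebLogic environment first so weblogic.jar is on the classpath
    . $MW_HOME/wlserver_10.3/server/bin/setWLSEnv.sh
    java weblogic.Deployer -adminurl t3://adminhost:7001 -username weblogic \
         -deploy $MW_HOME/Oracle_SOA1/soa/applications/frevvo.ear -targets soa_server1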

cheers
James


Thursday Feb 28, 2013

Installing Process Accelerators 11.1.1.6.3 for Oracle BPM

Oracle Process Accelerators help organizations reach process excellence faster. Process Accelerators are business process solutions developed with Oracle Business Process Management (BPM) Suite 11g. Process Accelerators can be deployed as-is, or extended to meet customer-specific requirements. In addition to expediting time-to-value for BPM deployments, Process Accelerators embody best practices and serve as blueprints for organizations that are developing process driven solutions with Oracle BPM Suite. Organizations adopting Process Accelerators not only improve the business process targeted by the Accelerators they deploy, but also have a unique opportunity to reach maturity in their process management initiative faster and with lower risk by applying Process Accelerator Best Practices and Patterns.

This blog post is to document the install procedures for Process Accelerators 11.1.1.6.3. In this release the following processes are delivered:

  • Oracle Travel Request Management (TRM) - streamlines the travel request process
  • Oracle Document Routing and Approval (DRA) - streamlines the document approval process
  • Internal Service Request (ISR) - streamlines the service request process
  • Public Sector Incident Reporting (PSIR) - streamlines the incident reporting process
  • Financial Services Loan Origination (FSLO) - streamlines the loan application approval process

There are also 2 processes that have been released in preview mode:

  • Oracle Employee Onboarding (EOB)
  • Oracle Business Account Opening (BAO)

This post is to show the install steps for the Process Accelerators. Before you can start the install you must have the following environment installed.

  • Java Development Kit 1.6.0 and later
  • Oracle Database 11g
  • Oracle Weblogic Server 11g
  • Oracle SOA Suite 11g
  • Oracle Business Process Management (BPM) Studio 11g Release 1 (11.1.1.6.0)
  • Oracle Business Activity Monitoring (BAM) 11g 
  • Oracle Webcenter Content Release 1 (11.1.1.6.0) - Required for Document Routing and Approval accelerator.

There are lots of sites out there that document how to install these products; for the purpose of this post it is assumed that these products are installed and have been configured. In my configuration I installed Webcenter and SOA / BPM on separate machines. Here is a screenshot of the domain configuration.

Webcenter Domain


SOA Domain


I have OSB installed here; this is not required for the PAs, so you can ignore it.

At the time of writing this blog the only way to get the Process Accelerators is via request. You need to send an email to oracle_process_accelerators@beehiveonline.oracle.com requesting the software.

  1. Before you start installing the Process Accelerators you need to configure UCM to integrate with BPM. If this is a fresh install this work probably hasn't been done.
  2. If not done so already you need to set the RIDC Port and IP filter address. Therefore login to your UCM instance http://ucmhost:16200/cs and set the following:

    Incoming Socket Connection Address Security Filter: 127.0.0.1|0:0:0:0:0:0:0:1|*.*.*.*
    Server Socket Port : 4444


  3. Restart the UCM_Server1 managed server.
  4. Login to the weblogic console for your SOA / BPM environment.
  5. In the Domain Structure select the domain, e.g. soa_domain -> Security -> General and check the box


  6. Select the EmbeddedLDAP tab and set the credentials and confirm, e.g. welcome1



  7. Restart the soa_server1 managed server
  8. Login to UCM as weblogic and configure a new LDAP provider to link back to the SOA.
  9. Navigate to Administration -> Providers
  10. Click the link to add a new LDAPUser provider and enter the following; set the password to the password you set in step 6.


  11. Restart the UCM_Server1.
  12. Log back in and check that you have 5 successful connections.



  13. Under the Administrators panel, click Admin Server
  14. Click the link Advanced Component Manager
  15. Enable Folders_g



  16. Restart the UCM Managed Server
  17. Log back into UCM and navigate to Administration; you should be in the Component Manager screen. Make sure the following components are enabled or disabled:

    Checked: ZipRenditionManagement
    Checked: InboundRefinerySupport
    Checked: BpelIntegration
    Checked: DynamicConverter
    Checked: WebCenterConfigure
    Un-checked: FrameworkFolders

  18. Login to the Enterprise Manager control for Webcenter domain
  19. Navigate to WC_Domain -> Webcenter -> Content -> Content Server



  20. From the menu select Configuration and make the following changes



  21. If you have separate domains for WC and BPM, login to Enterprise Manager Control for the SOA / BPM Domain. Expand SOA, then select soa-infra
  22. From the SOA Infrastructure menu, select SOA Administration -> Workflow Config, then select the More Workflow Notification Configuration Properties... link
  23. In the System MBean Browser tree, expand WorkflowConfig, then select human-workflow


  24. On the Attributes tab, in the UcmIdcUrl Value field, enter idc://ucmhost:4444 (4444 was the port I entered in step 2).
  25. Click Apply
  26. Navigate to Farm_soa_domain -> Weblogic Domain -> soa_domain. Right-click the soa_domain and select Security -> Credentials
  27. Select WF-ADMIN-USER, then click Create Key
  28. On the Create Key dialog box:
    • Ensure that Select Map field is set to WF-ADMIN-USER
    • In the Key field, enter WF-ADMIN-CREDENTIAL
    • In the User Name field, enter a user name with administrative privileges on the OWC server
    • In the Password and Confirm Password fields, enter a password for a user with administrative privileges on the OWC server

  29. Restart All Servers to make sure that all these changes have taken place.

You should now be in a position to install the Process Accelerators. The documentation that comes with the Process Accelerators is pretty good. To save reproducing lots of documentation, I will point to certain steps within the document to execute.

  1. Download the documents zip file for the process accelerators OraclePADocumentation111163.zip
  2. Open the file paaig.pdf
  3. Skip all steps till you get to 2.1.2 Configure Oracle Business Activity Monitoring for Reports. Execute all steps for 2.1.2 so your BAM environment is configured.
  4. Execute all steps in Section 2.2, Installing Oracle Process Accelerators and Oracle BAM Reports. The install gets to 97% fairly quickly, then it will sit at running post install scripts for up to 90 or so minutes. If you want to see what is happening you can tail the install logs. Please note my Process Accelerator home is: $MW_HOME/PAHome



    tail -f $MW_HOME/PAHome/installpa.log
    tail -f $MW_HOME/PAHome/installbam.log
    tail -f $MW_HOME/PAHome/installwc.log


    You should get a BUILD SUCCESSFUL at the end.

  5. If you get a BUILD FAILED then you will need to install manually. Section 2.4 provides the steps to do this.
    I undeploy the Process Accelerators before I reinstall; here are the steps I take.

    • Run the commands
      cd $MW_HOME/PAHome/bin
      . env.sh
      ant uninstall-pa
    • Start a database session as sys
      sqlplus / as sysdba
      SQL>drop user accelerators cascade;
      SQL>exit
    • Restart all environments - this is required to clear some of the external applications from Weblogic
    • cd $MW_HOME/PAHome/bin
      . env.sh
      ant install-pa


  6. Once you have a build successful you can complete the post install steps.
  7. Starting at Section 3, complete all steps in 3.1. Here is an example of the users I assigned to each role in the BPM workspace. If the users don't exist in your environment follow this link.



  8. In Section 3.2 there are some tricks that will impact what will and won't display. This is what I did.
    In the Weblogic console create the following groups under Security Realms -> myrealm -> Users and Groups



  9. Add the appropriate users to the following groups:

    • BPMActionOfficer = vhugo, cdoyle, jausten, jlondon, jverne, istone
    • BPMCaseManager = cdickens
    • BPMCaseWorker = wfaulk
    • BPMReporter = jcooper
    • BPMExternalApp = Any user you want to add to have access to the External Apps
  10. Now you need to add access to the Applications in EM. Here is the list of applications that require security attached:

    • DRAAdminUI(V2.0)
    • FSLOAdminUI(11.1.1.6.3)
    • IncidentReportingAdminUI(V2.0)
    • IncidentReportingTaskUI(V2.0)
    • ISRAdminUI(V2.0)
    • TRMAdminUI(V2.0)

    To attach security to these applications login to the Enterprise Manager Control for SOA and navigate to Farm_soa_domain -> Application Deployments. Select one of the applications mentioned above. From the Application Deployment menu select Security -> Application Roles.



    To add users / groups to the Application click the search button this will display the role. Select the Role and add a User / Group by clicking the Edit button.

    In all instances I added the group BPMExternalApp to the Application roles, with the exception of the IncidentReportingAdminUI(V2.0) and
    IncidentReportingTaskUI(V2.0) applications; for these applications I assigned the groups created in step 9, matching the role to the group.



    It is important that you don't assign the same role / user to all the roles, as some roles will overwrite others, giving unpredictable results.

    This completes section 3 in the Install document

There are a number of steps in section 4 that have already been performed as part of the prerequisites. These do not need to be done again. The following steps can be skipped:

  • Section 4.1, "Configuration Matrix"
  • Section 4.2, "Configuring Oracle WebCenter Content"
  • Section 4.3, "Adding New Application Roles in Oracle BPM Workspace"

In Section 4.6, "Packaging FOP for PDF Generation" there is a bug when you run the Ant script. The first thing it tries to do is undeploy; as you haven't installed yet, the process fails. You need to make a minor change to the Ant build file.

  1. Open the following file with vi

    vi $MW_HOME/PAHome/pa/src/fs/lo/deploymentPlan/PAInstallFSLO.xml

  2. Go to line 195 with the command

    :195

  3. There you will see the failonerror parameter on the undeploy action; when the undeploy errors, it stops the process. Change this to false, e.g. before:

    <Application name="LoanOriginationPDFModelServices"        filelocation="${env.PA_HOME}/pa/src/fs/lo/built/deploy/LoanOriginationPDFModelServices.ear"     wlserver="pa" failonerror="true" action="undeploy" />

    After

    <Application name="LoanOriginationPDFModelServices"        filelocation="${env.PA_HOME}/pa/src/fs/lo/built/deploy/LoanOriginationPDFModelServices.ear"     wlserver="pa" failonerror="false" action="undeploy" />
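    If you would rather script the change, a sed one-liner can do it; note this sketch flips every matching undeploy entry in the file, not just the one on line 195:

    sed -i 's/failonerror="true" action="undeploy"/failonerror="false" action="undeploy"/' \
        $MW_HOME/PAHome/pa/src/fs/lo/deploymentPlan/PAInstallFSLO.xml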

Last but not least, if you find your BAM reports are not loaded you need to run the install for BAM manually.

  1. cd $MW_HOME/PAHome/bin
  2. . env.sh
  3. ant install-bam

Once Section 4 is complete, you are ready to run the Process Accelerators.

ENJOY!

Tuesday Feb 19, 2013

Changing Memory Settings and JVM to JRockit for SOA Suite

I know there are many blogs out there that show how and where to change the memory configuration, e.g. increase / decrease it, or swap between the Sun and JRockit JVMs. The reason for this post is more for me to have an easy-to-find reference, but it brings a few concepts together which are usually spread across 2 or 3 blog posts:

  • Memory modification
  • Swapping JVM provider
  • Node manager support

Many of the posts out there are for changing the memory settings when running via the startWeblogic.sh script. Although this post will support running via the script, it is more about starting via Node Manager.

For the purpose of this post I'm using the scripts developed in my post Startup Scripts for Weblogic and SOA Suite

Firstly we want to change the memory settings for the JVM. This is done in the setSOADomainEnv.sh script.

Make a copy of the file, then open it for modification:

$MW_HOME/user_projects/domains/soa_domain/bin/setSOADomainEnv.sh

Look for the lines:

DEFAULT_MEM_ARGS="-Xms512m -Xmx1024m"
PORT_MEM_ARGS="-Xms768m -Xmx1536m"

Comment these lines out and add the following straight after, e.g.

#DEFAULT_MEM_ARGS="-Xms512m -Xmx1024m"
#PORT_MEM_ARGS="-Xms768m -Xmx1536m"

if [ "${SERVER_NAME}" = "" ] || [ "${SERVER_NAME}" = "soa_server1" ]; then
  DEFAULT_MEM_ARGS="-Xms512m -Xmx1024m"
  PORT_MEM_ARGS="-Xms2048m -Xmx2048m"
elif [ "${SERVER_NAME}" = "" ] || [ "${SERVER_NAME}" = "AdminServer" ]; then
  DEFAULT_MEM_ARGS="-Xms512m -Xmx1024m"
  PORT_MEM_ARGS="-Xms768m -Xmx768m"
elif [ "${SERVER_NAME}" = "" ] || [ "${SERVER_NAME}" = "bam_server1" ]; then
  DEFAULT_MEM_ARGS="-Xms512m -Xmx1024m"
  PORT_MEM_ARGS="-Xms768m -Xmx768m"
elif [ "${SERVER_NAME}" = "" ] || [ "${SERVER_NAME}" = "osb_server1" ]; then
  DEFAULT_MEM_ARGS="-Xms512m -Xmx1024m"
  PORT_MEM_ARGS="-Xms768m -Xmx768m"
else
  DEFAULT_MEM_ARGS="-Xms768m -Xmx768m"
  PORT_MEM_ARGS="-Xms768m -Xmx768m"
fi

Depending on how you installed your environment you may need to add the JRockit home; this is done in setDomainEnv.sh. As before, make a copy and open the file for modification.

$MW_HOME/user_projects/domains/soa_domain/bin/setDomainEnv.sh

check the value for BEA_JAVA_HOME and make sure it is pointing to your JRockit location, e.g.

BEA_JAVA_HOME="/u01/jrockit/jrockit-jdk1.6.0_37-R28.2.5-4.1.0"
export BEA_JAVA_HOME

As I'm starting my SOA environment via Node Manager, I need to make some additional changes.

The node manager properties need to be updated

$MW_HOME/wlserver_10.3/common/nodemanager/nodemanager.properties

make sure the parameter StartScriptEnabled is set to true, e.g.

StartScriptEnabled=true

Also, if you want to run JRockit you need to change the Java homes:

#javaHome=/u01/java/jdk1.6.0_38
javaHome=/u01/jrockit/jrockit-jdk1.6.0_37-R28.2.5-4.1.0
#JavaHome=/u01/java/jdk1.6.0_38/jre
JavaHome=/u01/jrockit/jrockit-jdk1.6.0_37-R28.2.5-4.1.0/jre


The last file that needs modification is the commEnv.sh, to change the Java Home.

$WL_HOME/common/bin/commEnv.sh

Comment out the old JAVA_HOME and JAVA_VENDOR and point them to JRockit. Before:

if [ -z "${JAVA_HOME}" -o -z "${JAVA_VENDOR}" ]; then
  # Set up JAVA HOME
  JAVA_HOME="/u01/java/jdk1.6.0_38"
  # Set up JAVA VENDOR, possible values are
  #Oracle, HP, IBM, Sun ...
  JAVA_VENDOR=Sun
  # PRODUCTION_MODE, default to the development mode
  PRODUCTION_MODE=""
fi

After

if [ -z "${JAVA_HOME}" -o -z "${JAVA_VENDOR}" ]; then
  # Set up JAVA HOME
  #JAVA_HOME="/u01/java/jdk1.6.0_38"
  JAVA_HOME="/u01/jrockit/jrockit-jdk1.6.0_37-R28.2.5-4.1.0"
  # Set up JAVA VENDOR, possible values are
  #Oracle, HP, IBM, Sun ...
  #JAVA_VENDOR=Sun
  JAVA_VENDOR=Oracle
  # PRODUCTION_MODE, default to the development mode
  PRODUCTION_MODE=""
fi

You should now be ready to start your environment. All servers should now start with JRockit; to test this you can run the command:

 ps -ef | grep soa_server1

You should see the JRockit java home in the process command line.

Thursday Aug 30, 2012

Comparing Dates in Oracle Business Rule Decision Tables

I have been working with decision tables for some time but have never had a scenario where I needed to compare dates. The use case was to check if a person's membership had expired. I didn't think much of it till I started to develop it. The first trap I fell into was trying to create ranges and bucket sets. The other trap I fell into was not converting the date field to a complete date. This may seem obvious to most people, but my Google searches came up with nothing, so I thought I would create a quick post.

I assume everyone knows how to create a decision table, so I'm not going to go through those steps. The prerequisite for this post is to have a decision table with a payload that has a date field. This field must have the date in the following format: YYYY-MM-DDThh:mm:ss.

  1. Create a new condition in your decision table
  2. Right-click on the condition to edit it and select the expression builder



  3. In the expression builder, select the Functions tab.
  4. Expand CurrentDate and select date, then click the Insert Into Expression button.



  5. In the Expression Builder you need to create an expression that will return true or false; add the operation <= after CurrentDate.date



  6. In my scenario my date field is memberExpire. Navigate to your date field, expand it, and select toGregorianCalendar().



  7. Your expression will look something like the sketch below; click OK to get back to the decision table
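    Assuming the fact is called Member and the date field is memberExpire (the fact name is my assumption; the field name is from this example), the finished expression reads:

    CurrentDate.date <= Member.memberExpire.toGregorianCalendar()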



  8. Now it's just a matter of checking if the value is true or false.

Simple when you know how :-)


Sunday May 20, 2012

Renaming BPEL Process Names

A common problem is that developers don't get the BPEL process name right the first time. There have been a number of solutions proposed on the Internet which range from 'you just can't do it' to complicated Linux 'one-liners' which don't work on Windows.

Wouldn't it be nice to just have a "Rename BPEL Process" button in JDeveloper for SOA Suite 11g?

What we need is a script which will do the following:

  1. Run on any platform that JDeveloper supports (ie, Windows, Linux, and Mac).
  2. Rename the old filename to the new filename of the BPEL process.
  3. Update any string references to the old BPEL name with the new BPEL name in any file that references the BPEL process.
  4. Easily accessible from JDeveloper (ie, no complicated command line wizardry).

JDeveloper has an option to call an "External Tool" which can also be a custom Ant script. This "External Tool" can be called from the toolbar of JDeveloper, including the context menu of the Application Navigator View.

Here is a build.xml Ant script which takes care of the above requirements.

<?xml version="1.0" encoding="UTF-8" ?>
<project default="RenameBPEL">
    <target name="RenameBPEL">
        <echo>Workdir: ${PROJECT_DIR}</echo>
        <echo>String replacement: ${FILENAME_NO_EXT} -> ${NEW_FILENAME}</echo>
        <replace dir="${PROJECT_DIR}" token="${FILENAME_NO_EXT}" value="${NEW_FILENAME}">
            <exclude name="**/.svn*"/>
        </replace>
        <echo>complete...</echo>
        <echo>Renaming ${FILENAME_NO_EXT} -> ${NEW_FILENAME} in ${PROJECT_DIR}</echo>
        <move todir="${PROJECT_DIR}">
            <fileset dir="${PROJECT_DIR}">
                <exclude name=".svn*"/>
            </fileset>
            <mapper type="glob" from="${FILENAME_NO_EXT}*" to="${NEW_FILENAME}*"/>
        </move>
        <echo>Renaming ${FILENAME_NO_EXT} -> ${NEW_FILENAME} in ${PROJECT_DIR}/xsd</echo>
        <move todir="${PROJECT_DIR}/xsd">
            <fileset dir="${PROJECT_DIR}/xsd">
                <exclude name=".svn*"/>
            </fileset>
            <mapper type="glob" from="${FILENAME_NO_EXT}*" to="${NEW_FILENAME}*"/>
        </move>
        <echo>Renaming ${FILENAME_NO_EXT} -> ${NEW_FILENAME} in ${PROJECT_DIR}/xsl</echo>
        <move todir="${PROJECT_DIR}/xsl">
            <fileset dir="${PROJECT_DIR}/xsl">
                <exclude name=".svn*"/>
            </fileset>
            <mapper type="glob" from="${FILENAME_NO_EXT}*" to="${NEW_FILENAME}*"/>
        </move>
        <echo>complete...</echo>
     </target>
</project>

Save this script to a file called build.xml and store it in a folder called RenameBPELAntScript.

We will also need an icon for the "External Tool" button in the JDeveloper Toolbar. For example this one: 
Right-click and "Save Image As...". 

Next follow these steps:

  1. Tools -> External Tools...
  2. New
  3. Tool Type: Apache Ant
  4. Ant Build File: (Browse to build.xml file)
  5. Move RenameBPEL target from Available Targets to Selected Targets.
  6. Add Property -> Insert -> File Directory | Property Name: PROJECT_DIR
  7. Add Property -> Insert -> File Name Without Extension | Property Name: FILENAME_NO_EXT
  8. Add Property -> Insert -> Prompt | Property Name: NEW_FILENAME
  9. Options Page, no changes required. Click Next.
  10. Process Page, no changes required. Click Next.
  11. Classpath Page, no changes required. Click Next.
  12. Display Page: 
    Caption for Menu Items: Rename BPEL Process
    Icon Location: (Browse to renameBpelIcon.gif)
    Click Next.
  13. Integration Page:
    Add Items to Menus: 
    Tools Menu
    Navigator Context Menu
    Add Buttons to Toolbars:
    Main Toolbar
    Before Tool Starts:
    Save All
    After Tool Exits:
    Reload Open File:
    Log Output to Messages Log
    Click Next
  14. Availability Page:
    Select: When Specific File Types are Selected
    Move BPEL Process from Available Types to Selected Types
  15. Click Finish
  16. Click OK

So what does the result look like?

Select a BPEL process in the Application Navigator of JDeveloper. Now you have two options for invoking the Rename BPEL process 'External Tool'.

As shown below, you can press the new icon in the toolbar. 

Alternatively you can select from the context menu. 

The complete process should look something like this: 

Notice that the original filename was foo.bpel and we have renamed it to bar.bpel.

If you are using Oracle SOA Suite 11g, then the above solution will solve your BPEL process rename issue. This same feature is already on the roadmap for 12c.

Tuesday May 15, 2012

Upgrading Database Schema in FMW Upgrade

During a FMW upgrade, one of the most commonly forgotten tasks is the upgrade of the database schemas, as the steps seem to be buried in a mountain of documentation. If you are looking for the actual steps to upgrade FMW, look at the following post - Upgrading Fusion Middleware 11.1.1.x to 11.1.1.4 - and when you get to step 24, refer to this blog entry.

Oracle has made it a bit simpler with the Patch Set Assistant (psa). Before, it was a bit of a dark art: you ran a command line script and had to guess which schemas needed to be upgraded. Now there is a wizard-driven tool to do this.

  1. As the middleware owner change directory to $MW_HOME/oracle_common/bin
  2. Run the following command to start the PSA wizard
    ./psa



    Click Next to start entering details

  3. Select the component you want to upgrade, it will include all the prerequisites.



  4. Select the prerequisite check boxes to say you have backed up your environment.



  5. Enter connection details for the MDS schema



  6. Enter connection details for the SOAINFRA schema



  7. Check that all schemas connect successfully. If not go back and correct.



  8. Make sure all the settings are correct and continue



  9. Watch the install run, then complete



  10. Now you can check the version of the schemas to ensure they are correct. In this case I'm looking to have the version come back as 11.1.1.6

    SELECT owner, version, status FROM schema_version_registry
    where owner in ('DEV_MDS', 'DEV_SOAINFRA');


    OWNER                          VERSION                        STATUS
    ------------------------------ ------------------------------ -----------
    DEV_MDS                        11.1.1.6.0                     VALID
    DEV_SOAINFRA                   11.1.1.6.0                     VALID


Monday May 14, 2012

Finding the Right Patchset

I'm always surprised at how hard it is to find the right patchsets when looking to do an upgrade.

So I thought I would make a blog entry that makes it easy to find the right patches, starting from 11.1.1.4. The following documents provide the required patches for the associated upgrades:

11.1.1.4
11.1.1.5
11.1.1.6
11.1.1.7

I will update as more patchsets become available.

If you are looking for patch install instructions please refer to the following blog entry:

Upgrading Fusion Middleware

Happy upgrades


Tuesday Jan 24, 2012

Getting History from SQLPlus using Linux

A colleague of mine just asked me how you get history in SQLPlus on Linux the same way you get it by default on Windows, e.g. when you press the up key you see the previous commands executed. I have never really given it much thought in the past, but there had to be a way. After a quick Google search I found a solution. This blog is really for my future reference, as the examples I found on the net required a bit of thought.

This post assumes you are using RedHat or OEL. It will work for other Linux distributions, but you will need to source the right package. I'm installing on OEL 5.7 64-bit.

The key behind this solution is a utility called rlwrap, which is a readline wrapper. It maintains a separate input history which can be edited.

  1.  Login to your Linux machine as the root user

  2. Download the rlwrap utility using the appropriate link below and save it to a staging area on the Linux machine:
    rlwrap 32Bit
    rlwrap 64Bit

  3. From the staging area install the rlwrap utility, e.g.:

    rpm -ivh rlwrap-0.37-1.el5.x86_64.rpm

  4. Connect as the oracle user (or the user you connect to SQLPlus)

  5. Edit the .bashrc file ($HOME/.bashrc) and add the following line in the aliases section.

    alias sqlplus="rlwrap sqlplus"

  6. Initialize the .bashrc file with the following command

    . .bashrc

  7. Login to SQLPlus and history should be enabled

    sqlplus scott/tiger@orcl

    SQL*Plus: Release 11.2.0.2.0 Production on Wed Jan 25 04:07:40 2012

    Copyright (c) 1982, 2010, Oracle.  All rights reserved.


    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options

    SQL> select count(*) from emp;

      COUNT(*)
    ----------
            15


    Press the up key

    SQL> select count(*) from emp;

Now you are done. Enjoy



About

Discussions and Examples using Oracle Fusion Middleware. Some image links are broken when using Firefox, Safari, and Chrome. If you want to see the full image please use IE.

Twitter:@james8001

