
Recent Posts

Weblogic

PSU, Patching and Classpath Problems on Weblogic Server

When applying a new PSU on a WebLogic Server, there are some facts that you must keep in mind:

- First of all, never install a PSU over another one. As stated in MOS Note 1573509.2: "Each PSU will conflict with any prior PSU in the series. To install a subsequent PSU, any existing PSU must first be uninstalled."
- What happens if you are unable to uninstall a PSU? That issue is addressed by MOS Note 1349322.1.
- It can also happen that, when attempting to apply a patch, a conflict message like the following is thrown: "Patch A is mutually exclusive and cannot coexist with patch(es): B", and when trying to remove patch B, Smart Update fails with the message "Patch not installed: B". Such situations are described in MOS Note 1970064.1.
- Avoid having different PSU levels on WebLogic, even if you have multiple domains or are clustered across different physical machines.

And why could the above facts be related to the classpath?

As described in MOS Note 1509703.1, there are situations where, after applying a WebLogic Server 10.3.6 PSU, a managed server fails to start when the classpath is provided and the server is started from the admin console. A critical BEA-000362 message is thrown:

<BEA-000362> <Server failed. Reason: [Management:141266]Parsing Failure in config.xml: failed to find method MethodName{methodName='setCacheInAppDirectory', paramTypes=[boolean]} on class weblogic.management.configuration.AppDeploymentMBeanImpl>

This happens because the PSUs for WLS 10.3.6 (10.3.6.0.2+) add a <cache-in-app-directory> element to the config.xml entry for an application every time a deployment is done. To parse this new element, the classes from the PSU must be loaded rather than the original application deployment classes. So specifying WL_HOME/server/lib/weblogic_sp.jar;WL_HOME/server/lib/weblogic.jar in the classpath of Server-start may cause the problem.
There is no need to set these in the classpath of Server-start, since they will come in from the system classpath. The weblogic_patch.jar must precede weblogic_sp.jar and weblogic.jar; this ensures that the classes in the patch are loaded rather than the unpatched classes. The already mentioned MOS Note 1509703.1 contains an additional procedure for deployments after applying a PSU.

Same jars/classes included multiple times on a classpath

Also, note that sometimes the same jars/classes may appear multiple times on a classpath (this happens mainly because the command lines are modified, like any other component of the WLS architecture, as new versions of Oracle products and of customers' apps appear over time). The JVM searches for them according to the specified order, which is correct in general terms; however, the exact behavior depends on the implementation of the classloader. There are some potential problems, for example:

- When loading classes within a web framework, the deployed jar/war/ear/sar files may be checked before the official classpath.
- What would happen if two different versions of the same jar are invoked?
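As a quick sanity check, a small shell sketch can confirm that weblogic_patch.jar comes before weblogic.jar in a classpath string. All paths below are placeholders for illustration, not values from any real installation:

```shell
#!/bin/sh
# Sketch: verify jar ordering in a classpath string.
# The paths are placeholders; substitute your real WL_HOME locations.
CLASSPATH="/u01/app/patch_wls1036/profiles/default/sys_manifest_classpath/weblogic_patch.jar:/u01/app/wlserver_10.3/server/lib/weblogic_sp.jar:/u01/app/wlserver_10.3/server/lib/weblogic.jar"

# Length of the classpath prefix before each jar's first occurrence:
prefix_patch=${CLASSPATH%%weblogic_patch.jar*}
prefix_core=${CLASSPATH%%weblogic.jar*}

if [ "${#prefix_patch}" -lt "${#prefix_core}" ]; then
    echo "OK: weblogic_patch.jar precedes weblogic.jar"
else
    echo "WARNING: weblogic.jar precedes weblogic_patch.jar"
fi
```

The same check can be pointed at the actual CLASSPATH of a running server (e.g. from the process environment) to confirm the patch classes will win.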


Traffic Director

Working with SSL Certificates in OTD

An issue in Oracle Traffic Director (OTD) that has become somewhat common is getting SSL certificate warnings similar to the one below:

SSL server certificate Admin-Server-Cert is expired.

This typically happens when the Admin SSL CA certificate has expired. To prevent this, the CA/SSL certificates should be renewed before their expiry dates by extending them, which could be from 1 to 10 years. There are 2 approaches:

1. Artificially set the Admin-Server host clock
2. Create a new Admin server to replace the old one (but the old configured SSL keys may be lost)

However, at that point it may also happen that you get a certificate for one year but would like it for ten years, and even when the command below runs successfully, the expiry dates are not changed:

./bin/tadm renew-admin-certs --user= --port= --validity=120

The problem there is that, without the latest patch applied, the Admin Node(s) certificate will be valid for only 1 year and requires renewal each year. So, to avoid renewing the Admin Node(s) certificate every year, you need to apply the patch 11.1.1.7.2 MLR#2 (Apr 2014) for OTD version 11.1.1.7 or later. After the patch, the startup banner will have the proper new date, and renewing the Admin Server certificates will also renew the Admin Node(s) certificates for the same number of years.

For further information, please take a look at the following MOS notes:

- Oracle Traffic Director OTD Cannot Communicate Between Admin Server & Administration Node (Doc ID 1561339.1)
- Oracle Traffic Director Admin Server and Admin Node Certificate Validity (Doc ID 1603520.1)
- How to Renew Admin Server SSL Certificate for Oracle Traffic Director? (Doc ID 1549253.1)
- Available Versions, Patches, and Updates for Download for Oracle Traffic Director (OTD) (Doc ID 1676256.1)


Exalogic

A "ZFS Storage Appliance is not reachable" message is thrown by Exachk and the ZFS checks are skipped. WAD?

Sometimes something like the following can be seen in the "Skipped Nodes" section:

Host Name                  Reason
myexalogic01sn01-mgm1      ZFS Storage Appliance is not reachable
myexalogic01sn01-mgm2      ZFS Storage Appliance is not reachable

Also, a message like the following can be seen when executing Exachk:

Could not find infiniband gateway switch names from env or configuration file.
Please enter the first gateway infiniband switch name :
Could not find storage node names from env or configuration file.
Please enter the first storage server :

This is because the way Exachk works here is based on the standard naming convention in the "<rack-name>sn0x" format. To solve this, make sure there is an o_storage.out file in the directory where Exachk is installed. If the file is missing, create a blank one. The o_storage.out must contain the right storage node hostnames in the format they have in the hosts file, typically something like "<rack-name>sn0x-mgmt". For example, an o_storage.out should look quite simply as below:

myexalogic01sn01-mgmt
myexalogic01sn02-mgmt

This way it is ensured that the o_storage.out file has valid ZFS Storage Appliance hostnames. If the switch checks are skipped, then a similar procedure should be performed with the o_switches.out file.
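The steps above can be sketched as follows; the Exachk directory path and hostnames are placeholders for illustration, and should be replaced with your real Exachk install directory and the storage node names from your hosts file:

```shell
#!/bin/sh
# Sketch: create the o_storage.out file that Exachk expects.
EXACHK_DIR=./exachk_home          # placeholder for the Exachk install directory
mkdir -p "$EXACHK_DIR"

# One storage node hostname per line, in "<rack-name>sn0x-mgmt" format,
# exactly as they appear in the hosts file:
cat > "$EXACHK_DIR/o_storage.out" <<EOF
myexalogic01sn01-mgmt
myexalogic01sn02-mgmt
EOF

cat "$EXACHK_DIR/o_storage.out"
```

An analogous file (o_switches.out) can be written the same way if the switch checks are being skipped.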


Weblogic

Data source in suspended state: BEA-001156 error because maximum number of sessions was exceeded in the database (ORA-00018)

Recently, I worked a Service Request where a data source was in a suspended state. In the log files, a BEA-001156 error message could be seen, and the stack trace (shortened in this example) contained something like the following:

<BEA-001156> <Stack trace associated with message 001129 follows:
java.sql.SQLException: ORA-00018: maximum number of sessions exceeded
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:445)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:389)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:382)
at oracle.jdbc.driver.T4CTTIoauthenticate.processError(T4CTTIoauthenticate.java:441)
at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:450)
at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:192)
at oracle.jdbc.driver.T4CTTIoauthenticate.doOSESSKEY(T4CTTIoauthenticate.java:404)
at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:385)
at oracle.jdbc.driver.PhysicalConnection.<init>(PhysicalConnection.java:546)
at oracle.jdbc.driver.T4CConnection.<init>(T4CConnection.java:236)
at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:32)
at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:521)
at oracle.jdbc.pool.OracleDataSource.getPhysicalConnection(OracleDataSource.java:280)
at oracle.jdbc.xa.client.OracleXADataSource.getPooledConnection(OracleXADataSource.java:469)
at oracle.jdbc.xa.client.OracleXADataSource.getXAConnection(OracleXADataSource.java:156)
at oracle.jdbc.xa.client.OracleXADataSource.getXAConnection(OracleXADataSource.java:101)
...

Looking at the error message at the top, this is clearly a session handling problem at the database level.
Note that, depending on how your application is designed/programmed, recursive sessions can be created, and sometimes it can be hard to track all of them, even more so in periods of high load. When this type of issue occurs, the most common solution is to increase the SESSIONS parameter of the init.ora configuration file. It is usually recommended to reserve 50% of the SESSIONS value for recursive sessions.
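A minimal sketch of the check-and-increase sequence in SQL*Plus, assuming an spfile-based instance; the value 400 is purely illustrative, not a recommendation for any specific system:

```sql
-- Illustrative only: check current session usage against the limit,
-- then raise SESSIONS (example value) in the spfile.
SELECT COUNT(*) FROM v$session;
SHOW PARAMETER sessions;
ALTER SYSTEM SET sessions=400 SCOPE=spfile;
-- The instance must be restarted for the new value to take effect.
```

Following the 50% rule of thumb above, the value chosen should leave roughly half of SESSIONS as headroom for recursive sessions.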


Exalogic

Setting MTU on Exalogic

For many reasons, a system administrator may want to change the MTU settings of a server. But in a system like Exalogic, which contains lots of interconnected nodes and various other components, it is important to understand how this applies to the different networks.

For example, when bringing up the bonding of InfiniBand interfaces, an error like the following may be thrown:

Bringing up interface bond1: SIOCSIFMTU: Invalid argument

Both scripts ifcfg-ib0 and ifcfg-ib1 (in the /etc/sysconfig/network-scripts/ directory) have MTU set to 65500, which is a valid MTU value only if all IPoIB slaves operate in connected mode and are configured with the same value. So the line below must be added to both network scripts, and then the network restarted:

CONNECTED_MODE=yes

By the way, an error of the form "SIOCSIFMTU: Invalid argument" indicates that the requested MTU was rejected by the kernel, typically because it exceeds the maximum value supported by the interface hardware. In that case, you must either reduce the MTU to a supported value or obtain more capable hardware. This problem has been seen when trying to modify the MTU using the ifconfig command, as in the example below:

[root@elxxcnxx ~]# ifconfig ib1 mtu 65520
SIOCSIFMTU: Invalid argument

It is important to insist that in most cases the nodes must be rebooted after the MTU size has been changed. Although in some circumstances it may work without a reboot, that is not how it is typically documented.

Now, in order to reduce memory consumption and improve performance for network traffic received on IPoIB-related interfaces, it is recommended to reduce the MTU value in the interface configuration files for IPoIB-related bonds from 65520 to 64000. The change needs to be made to the interface configuration files under the /etc/sysconfig/network-scripts directory and applies to the configuration files for bonds over IPoIB-related slave devices, for example /etc/sysconfig/network-scripts/ifcfg-bond1.
However, keep in mind that the numeric portion of the interface filenames corresponding to IPoIB interfaces is expected to vary across compute nodes and vServers, so it cannot be relied upon to identify which interface files are for bonds over IPoIB rather than EoIB-related slave interfaces. To fix these MTU values to the recommended settings, there are very useful instructions and a script in MOS Note 1624434.1, applicable to both physical and virtual configurations of Exalogic.

Regarding the recommended MTU value for EoIB-related interfaces, its maximum appropriate value is 1500. If for some reason a vServer has been created with a higher value (set in the /etc/sysconfig/network-scripts/ifcfg-bond0 file), then it must be fixed. An error like the following could be thrown under this circumstance:

[root@vServer ~]# service network restart
...
Bringing up interface bond0:  SIOCSIFMTU: Invalid argument

Also, an error like the one below can be seen in the /var/log/messages file of the vServer:

kernel: T5074835532 [mlx4_vnic] eth1:vnic_change_mtu:360: failed: new_mtu 64000 > 2026

MOS Note 1611657.1 is very useful for this purpose.
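As a sketch of what such a file might contain after applying the recommendations above, an ifcfg-bond1 for an IPoIB bond could look like the fragment below. All addresses and bonding options here are placeholders for illustration, not values from any real Exalogic system; only the MTU and CONNECTED_MODE lines reflect the recommendations discussed:

```ini
# /etc/sysconfig/network-scripts/ifcfg-bond1 (illustrative sketch)
DEVICE=bond1
IPADDR=192.168.10.66       # placeholder address
NETMASK=255.255.255.0      # placeholder netmask
BOOTPROTO=none
ONBOOT=yes
MTU=64000                  # recommended IPoIB bond MTU (down from 65520)
CONNECTED_MODE=yes         # required for MTU values above the datagram limit
```

After editing the files, the network must be restarted, and in most cases the node rebooted, as noted above.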


Exalogic

Working with SCAN on Exalogic

Lately I have seen more Exalogic customers using SCAN, so I decided to write a brief article about it, although there is a lot of documentation from Oracle on the topic (though not related to Exalogic itself).

Single Client Access Name (SCAN) provides a single name for clients to access any Oracle Database running in a cluster. Some of its benefits and advantages include:

- Clients' connect information does not need to change if you add or remove nodes or databases in the cluster
- Fast Connection Failover (FCF)
- Runtime Connection Load-Balancing (RCLB)
- Can be implemented with MultiDataSource or GridLink
- It can also be used with the Oracle JDBC 11g Thin Driver (this is clearly explained in MOS Note 1290193.1)

In the particular case of Exalogic, a typical architecture, widely used by customers, is having it connected to an Exadata machine (which hosts the database) through InfiniBand. Obviously, the SCAN feature can be used within this Engineered Systems architecture; as a matter of fact, GridLink is part of the Exalogic-specific enhancements of WebLogic.

Some facts to keep in mind when using it:

- SCAN is supported with JDBC version 11.2.0.1 and above
- As in any situation where a connection to a database is involved, be aware that firewalls may cause network adapter or timeout issues, which must be solved so the connection can be established
- If using VIP hosts, instead of the cluster configuration having the short host name for the VIP hosts, you should set LOCAL_LISTENER to the fully qualified domain name (e.g. node-VIP.example.com):

(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=node-VIP.example.com)(PORT=1521))))
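For illustration, a data source URL using SCAN (for example in a GridLink or generic data source) might look like the fragment below; the SCAN host and service names are placeholders:

```text
jdbc:oracle:thin:@(DESCRIPTION=
  (ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=myrack-scan.example.com)(PORT=1521)))
  (CONNECT_DATA=(SERVICE_NAME=myservice.example.com)))
```

Because the SCAN name resolves to the cluster rather than to an individual node, this URL stays valid as database nodes are added to or removed from the cluster.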


Exalogic

Unable to change NIS password

In the last few weeks I worked a Service Request where a Linux user was unable to change user passwords in NIS. As a matter of fact, NIS is used by several people in their Exalogic environments.

[root@computenode ~]# passwd user01
Changing password for user user01.
passwd: Authentication token manipulation error
[root@computenode ~]#

In this case, "user01" is an example username. This issue may occur on all NIS nodes, and even on the master server as well. This error typically corresponds to typos or missing keywords in configuration files in the /etc/pam.d directory. In the Service Request that I worked, the file system-auth-ac had no nis keyword:

#%PAM-1.0
# This file is auto-generated.
# User changes will be destroyed the next time authconfig is run.
auth required pam_env.so
auth sufficient pam_unix.so nullok try_first_pass
auth requisite pam_succeed_if.so uid >= 500 quiet
auth required pam_deny.so
account required pam_unix.so
account sufficient pam_succeed_if.so uid < 500 quiet
account required pam_permit.so
password requisite pam_cracklib.so try_first_pass retry=3
password sufficient pam_unix.so md5 shadow remember=5 nullok try_first_pass use_authtok
password required pam_deny.so
session optional pam_keyinit.so revoke
session required pam_limits.so
session [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
session required pam_unix.so

This kind of issue has also been described in a case where the "pam_rootok.so" line had a typo ("sufficent" instead of the correct "sufficient") in the su file. To solve these issues, first the typos must (obviously) be fixed. For this NIS case, it is necessary to make sure the nis keyword is added:

password sufficient pam_unix.so md5 shadow nis remember=5 nullok try_first_pass use_authtok

Note that these settings should be consistent across all the NIS nodes (master server and clients).
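A small sketch of how such a check could be scripted. Here a sample file is written locally for illustration; on a real node you would point the grep at /etc/pam.d/system-auth-ac on each NIS node (master and clients):

```shell
#!/bin/sh
# Sketch: check whether the "nis" keyword is present on the pam_unix.so
# password line. A sample file is created here for illustration only.
cat > sample-system-auth-ac <<'EOF'
password    sufficient    pam_unix.so md5 shadow nis remember=5 nullok try_first_pass use_authtok
EOF

if grep -Eq '^password[[:space:]]+sufficient[[:space:]]+pam_unix\.so.*[[:space:]]nis[[:space:]]' sample-system-auth-ac; then
    result="nis keyword present"
else
    result="nis keyword missing"
fi
echo "$result"
```

Running the same grep against the real file on every NIS node is a quick way to verify the settings are consistent across the cluster.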


Exalogic

Minimum percentage of free physical memory that Linux requires for optimal performance

Recently, we have been getting questions about the percentage of free physical memory that the OS requires for optimal performance, mainly applicable to physical compute nodes. Under normal conditions you may see that, on nodes without any application running, the OS takes (for example) between 24 and 25 GB of memory. The Linux system reports free memory in a different way, and most of those 25 GB (of the example) are available for user processes. For instance:

Mem: 99191652k total, 23785732k used, 75405920k free, 173320k buffers

MOS Doc ID 233753.1 - "Analyzing Data Provided by '/proc/meminfo'" - explains it (section 4, "Final Remarks"):

"Free Memory and Used Memory: Estimating the resource usage, especially the memory consumption of processes, is by far more complicated than it looks at first glance. The philosophy is that an unused resource is a wasted resource. The kernel therefore will use as much RAM as it can to cache information from your local and remote filesystems/disks. This builds up over time as reads and writes are done on the system, trying to keep the data stored in RAM as relevant as possible to the processes that have been running on your system. If there is free RAM available, more caching will be performed and thus more memory 'consumed'. However, this doesn't really count as resource usage, since this cached memory is available in case some other process needs it. The cache is reclaimed, not at the time of process exit (you might start up another process soon that needs the same data), but upon demand."

That said, focusing more specifically on the percentage question: apart from this memory that the OS takes, what is the minimum free memory that must be available on every node so that it operates normally? The answer: as a rule of thumb, 80% memory utilization is a good threshold; anything bigger than that should be investigated and remedied.
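Applying the rule of thumb to the numbers in the example above, utilization is simply (total - free) / total. A minimal shell sketch (values in kB, taken from the example line):

```shell
#!/bin/sh
# Sketch: compute memory utilization the way the 80% rule of thumb reads it.
# Values below are the kB figures from the example "Mem:" line above.
total=99191652
free=75405920
used_pct=$(( (total - free) * 100 / total ))
echo "memory utilization: ${used_pct}%"
# On a live node, total/free could instead be read from /proc/meminfo
# (MemTotal and MemFree, optionally adding Buffers/Cached back to "free"
# since cached memory is reclaimable on demand).
```

With the example numbers the node is well under the 80% threshold, which matches the point of the note: most of the "used" memory is reclaimable cache, not real pressure.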


Coherence

Coherence on Exalogic: dealing with the multiple network interfaces

Recently, we worked an incident where error messages like the following were thrown when starting the Coherence servers after an upgrade of EECS:

Oracle Coherence GE 3.7.1.8 (thread=Thread-3, member=n/a): Loaded Reporter configuration from "jar:file:/u01/app/fmw_product/wlserver_103/coherence_3.7/lib/coherence.jar!/reports/report-group.xml"
Exception in thread "Thread-3" java.lang.IllegalArgumentException: unresolvable localhost 192.168.10.66
at com.tangosol.internal.net.cluster.LegacyXmlClusterDependencies.configureUnicastListener(LegacyXmlClusterDependencies.java:199)
...
Caused by: java.rmi.server.ExportException: Listen failed on port: 8877; nested exception is:
java.net.SocketException: Address already in use
...
weblogic.nodemanager.server.provider.WeblogicCacheServer$1.run(WeblogicCacheServer.java:26)
at java.lang.Thread.run(Thread.java:662)
Caused by: java.net.UnknownHostException: 192.168.10.66 is not a local address
at com.tangosol.net.InetAddressHelper.getLocalAddress(InetAddressHelper.java:117)
at com.tangosol.internal.net.cluster.LegacyXmlClusterDependencies.configureUnicastListener(LegacyXmlClusterDependencies.java:195)

It is a well-known fact that Exalogic has several network interfaces (bond/eth 0, 1, 2, etc.). The logic that Coherence uses when deciding which interface to connect to (specifically, support for machines with multiple network interfaces, along with enhancements that allow the localaddress to be specified as a netmask to make configuration across larger clusters easier) makes it important, even more than in previous releases of Coherence, to make sure that the tangosol.coherence.localhost parameter is specified appropriately. From that IP address (or properly mapped host address), the desired network interface to be used can easily be found, and the Coherence cluster will then work fine on it.
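As a sketch, the parameter can be passed as a JVM start argument for the Coherence/cache server. The address below is the IPoIB address from the error above, used purely as an example; it should be replaced with an address actually bound to the interface you want Coherence to use on that node:

```text
-Dtangosol.coherence.localhost=192.168.10.66
```

A quick way to confirm the address really is local to the node is to check that it appears in the output of ifconfig (or ip addr) for the intended bond/eth interface before starting the servers.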


Exalogic

Exachk 2.2.2 released - Now Includes Support for Exalogic Solaris Environments

This new Exachk 2.2.2 release includes several new improvements and features. The following are additional checks as part of this new 2.2.2 release for Solaris:

Compute Nodes
- Hardware and Firmware Profile
- Software Profile
- NTP Synchronization
- DNS Setup
- Correct Slot Installation of IB Card for Solaris
- Subnet Manager
- Root Partition Usage Limit for Solaris
- Lockd Configuration for Solaris Compute Node
- ib_ipoib Module for Solaris
- ib_sdp Module for Solaris
- IP Configuration - net0 and bond0
- Recent Reboot Info for Solaris
- Probe Based IPMP for Solaris
- Swap Space for Solaris
- Free Physical Memory for Solaris
- MTU for Solaris
- IPMP Configuration for Solaris
- Fault Management Log for Solaris
- BIOS Settings
- NFS Mount Point - Version for Solaris
- Hostname Consistency with DNS on the Physical Compute Node
- NFS Mount Point - Attribute Caching for Solaris
- NFS Mount Point - Rsize Wsize for Solaris
- NIS domain (YPBind) for Solaris

Also, the following checks have been enhanced as part of this new 2.2.2 release:

Compute Nodes
- Connectivity To OVMM
- MTU Value for Infiniband Interface
- Hostname Matches the DNS on the Physical Compute Node

Switch
- Non-sequential Even-numbered Gateway Instance

Cross-Components
- NTP Configuration for Switch Nodes Matches Physical Compute Nodes
- NTP Configuration for Switch Nodes Matches Oracle VM Servers
- Hostname Matches the DNS on Oracle VM Server
- Hostname Matching with DNS on Switches
- NTP Configuration for ZFS Matches Oracle VM Servers
- NTP Configuration for ZFS Matches Physical Compute Nodes

Multiple Components
- MTU for InfiniBand Link in Control vServers

Exachk is available via MOS Doc ID 1449226.1.


Coherence

ClassCastException thrown when running Coherence with Exabus IMB enabled

Today I worked a Service Request involving a Coherence issue on an Exalogic platform. It is a very interesting issue (at least for me).

An exception message like the following is thrown when running Coherence with IMB on a WLS server:

java.lang.ClassCastException: com.oracle.common.net.exabus.util.SimpleEvent
at com.tangosol.coherence.component.net.MessageHandler$EventCollector.add(MessageHandler.CDB:6)
at com.oracle.common.net.infinibus.InfiniBus.emitEvent(InfiniBus.java:323)
at com.oracle.common.net.infinibus.InfiniBus.handleEventOpen(InfiniBus.java:502)
at com.oracle.common.net.infinibus.InfiniBus.deliverEvents(InfiniBus.java:468)
at com.oracle.common.net.infinibus.EventDeliveryService$WorkerThread.run(EventDeliveryService.java:126)

The cause of this problem is that Coherence runs into a classloading issue when:
- child-first classloading is enforced,
- coherence.jar is both in the system classpath and the application classpath,
- and Exabus IMB is enabled.

In newer versions of WLS (12c), coherence.jar is in the system classpath, so by default Coherence classes will be loaded from the system classpath.
For situations where child-first classloading semantics are required, the classloading order should be changed through configuration in weblogic.xml. To solve this, add the following to weblogic.xml:

<container-descriptor>
  <prefer-application-packages>
    <package-name>com.tangosol.*</package-name>
    <package-name>com.oracle.common.*</package-name>
  </prefer-application-packages>
  <prefer-application-resources>
    <resource-name>com.tangosol.*</resource-name>
    <resource-name>com.oracle.common.*</resource-name>
    <resource-name>coherence-*.xml</resource-name>
    <resource-name>coherence-*.xsd</resource-name>
    <resource-name>tangosol-*.xml</resource-name>
    <resource-name>tangosol.properties</resource-name>
    <resource-name>tangosol.cer</resource-name>
    <resource-name>tangosol.dat</resource-name>
    <resource-name>internal-txn-cache-config.xml</resource-name>
    <resource-name>txn-pof-config.xml</resource-name>
    <resource-name>pof-config.xml</resource-name>
    <resource-name>management-config.xml</resource-name>
    <resource-name>processor-dictionary.xml</resource-name>
    <resource-name>reports/*</resource-name>
  </prefer-application-resources>
</container-descriptor>


Exalogic

Two New Exalogic ZFS Storage Appliance MOS Notes

This week I closed 2 Service Requests related to the ZFS Storage Appliance and created My Oracle Support (MOS) notes from both of them: although they were not complicated issues and both SRs were closed in less than one week, these procedures were still not formally documented on MOS. The information about these new documents follows.

MOS Doc ID 1519858.1 - Will The Restart Of The NIS Service On The ZFS Storage Appliance Affect The Mounted Filesystems?

In this case, for a particular reason it was necessary to restart the NIS service. So, if for any reason the NIS service needs to be restarted on the ZFS Storage Appliance, will the mounted filesystems be affected during the restart? The default cluster configuration type of the ZFS Storage Appliance is active-passive and the storage nodes are supposed to be mirrored, so the restart of NIS should not cause any issues; it can be done. Note that the restart of NIS should be done on the active storage head. Restarting NIS itself will not cause any ZFS failover from active to passive. In general terms, even in the event of a storage node failure, the appliance will automatically fail over to the other storage node. Under that condition, an initial degradation in performance can be expected, because all of the cached data on the failed node is gone, but this effect decreases as the new active storage node begins caching data in its own SSDs.

MOS Doc ID 1520223.1 - Exalogic Storage Nodes Hostnames Are Displayed Incorrectly

This was not the first time I saw something like this, so I decided to create a note, because it is clearly a problem that may affect more than one Exalogic user. The Exalogic storage node hostnames displayed in the BUI were different from the ones displayed when accessing the node through SSH or the ILOM. This happens when, for whatever reason, the hostname is misconfigured on the ZFS Storage Appliance. To solve this problem, it is necessary to set the system name and location accordingly in the Storage Appliance nodes' BUI:

1. Log in to the ZFS Storage Appliance BUI
2. Go to the "Configuration" tab, and select the "Services" subtab
3. Under the "System Settings" section, click on "System Identity"
4. Set the system name and location accordingly


Weblogic

CONNECTION_REFUSED messages on load balancing in Weblogic with OHS or Apache

In recent months I have had to work on some issues related to load balancing. It is very important to understand how the layers interact with each other and where specific settings must be made. Some people get upset with the fact that OHS/Apache does load balancing even to servers that are shut down, so transactions may be lost. This document provides very good tips about how many critical production issues can be resolved just by setting appropriate values for some parameters. Personally, I think that the DynamicServerList parameter (which is in fact the first one mentioned in the document linked above) is particularly important to understand. As can be seen in this documentation from Oracle:

"In a clustered environment, a plug-in may dispatch requests to an unavailable WebLogic Server instance because the DynamicServerList is not current in all plug-in processes. DynamicServerList=ON works with a single Apache server (httpd daemon process), but for more than one server, such as StartServers=5, the dynamic server list will not be updated across all httpd instances until they have all tried to contact a WebLogic Server instance. This is because they are separate processes. This delay in updating the dynamic server list could allow an Apache httpd process to contact a server that another httpd process has marked as dead. Only after such an attempt will the server list be updated within the proxy. One possible solution, if this is undesirable, is to set DynamicServerList to OFF. In a non-clustered environment, a plug-in may lose the stickiness of a session created after restarting WebLogic Server instances, because some plug-in processes do not have the new JVMID of those restarted servers and treat them as unknown JVMIDs. To avoid these issues, upgrade to Apache 2.0.x and configure Apache to use the multi-threaded and single-process model, mpm_worker_module."

Also, this Oracle documentation provides important information about "Failover, Cookies, and HTTP Sessions" and "Tuning to Reduce Connection_Refused Errors". As can be seen in this Apache document, the MaxRequestsPerChild directive sets the limit on the number of requests that an individual child server will handle during its life. Note that mod_proxy and related modules implement a proxy/gateway for Apache HTTP Server, supporting a number of popular protocols as well as several different load balancing algorithms; third-party modules can add support for additional protocols and load balancing algorithms.

On Oracle Forums I also found a very interesting thread:

"The error which you are getting is a common one, which can be fixed by increasing the "AcceptBackLog" value by 25% until the error disappears from the WebLogic console (Path: Servers => => Configuration tab => Tuning sub-tab) and setting the value to ON for "KeepAlive" in the httpd.conf, which should take care of your issue."

Topic: Tuning Connection Backlog Buffering
http://download.oracle.com/docs/cd/E13222_01/wls/docs81/perform/WLSTuning.html#1136287

Search for "KeepAliveEnabled":
http://download.oracle.com/docs/cd/E12840_01/wls/docs103/plugins/plugin_params.html#wp1143055

Also, here is a link which may be helpful to understand some common issues which occur when using a plug-in, along with their solutions:
http://weblogic-wonders.com/weblogic/2010/03/07/plug-in-issues-some-tips/

May transactions be affected because of this? Certainly yes, but it depends on how your application is developed. A good practice would be to create a bunch of transactions and track them to check whether any are missed. This Transaction and redelivery in JMS article may be helpful.
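Putting the Apache-side suggestions above into file form, a sketch of the relevant httpd.conf fragment might look as follows; the values shown are illustrative starting points, not recommendations for any particular workload:

```apache
# Illustrative httpd.conf fragment
# Keep client connections alive between requests, as suggested in the
# forum thread above:
KeepAlive On

# MaxRequestsPerChild limits how many requests a child process serves
# before it is recycled; 0 means no limit.
MaxRequestsPerChild 0
```

The AcceptBackLog change from the same thread is made on the WebLogic side (Servers => Configuration => Tuning in the admin console), not in httpd.conf.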


Reports

REP-0178 - "Reports Server cannot establish connection" Error Message

During this last week I saw an interesting thread on Oracle Forums about this error message, and wanted to share the findings that I posted in the forum thread:

REP-0178: Reports Server [server_name] cannot establish connection

This problem may occur when the wrong rwclient is picked up. Perhaps the environment is not set appropriately before rwclient is called, or the rwclient.bat/rwclient.sh in /bin is found and used, but that one is only a template that, upon install, was used to create the valid rwclient.bat/rwclient.sh in /config/reports/bin (when the instance was actually configured). So try using the appropriate rwclient.bat/rwclient.sh, as it calls rwclient.exe/rwclient after setting the environment. Either put the /config/reports/bin directory before /bin in the PATH, or specify the full path to rwclient.bat/rwclient.sh.

Another possibility is that the services were started as root, and therefore some log files were created by the root user. In that case the regular Oracle user (the owner of the installation) no longer has write access to these log files:

$INSTANCE_HOME/diagnostics/logs/OHS/ohs1/access_log
$INSTANCE_HOME/diagnostics/logs/ReportsToolsComponent/ReportsTools/runtime_diagnostic.log
$INSTANCE_HOME/diagnostics/logs/ReportsToolsComponent/ReportsTools/zrclient_diagnostic.log

If that is the case, change the owner of those log files back to the Oracle user, then restart the Reports server and run the rwclient.sh command again to generate the reports.


Exalogic

Exalogic Elastic Cloud Software Version 2.0 Released

The Oracle Engineered Systems Community is pleased to announce the availability of Exalogic Elastic Cloud Software (EECS) version 2.0, offering the following new features and enhancements:

- A Layer 7 application traffic management software component called Oracle Traffic Director, which features extremely high performance HTTP load balancing (reverse proxy), HTTPS termination via Intel AES cryptography support, load balancing, rate throttling, connection limiting, logging and other advanced features
- Support for secure application isolation using InfiniBand Partitions, a technology that allows the implementation of virtual firewalls on Exalogic in combination with Oracle Traffic Director
- Improved Exabus implementation, which greatly improves system performance and provides new optimized integration with Oracle Coherence and Oracle Tuxedo, as well as performance enhancements for WebLogic Server and all other Oracle Linux and Solaris applications

As of February 16, 2012, all new Exalogic X2-2 configurations are being shipped with the EECS 2.0 firmware, device drivers, operating system images and utilities loaded on shared storage. Customers running existing Exalogic X2-2 systems at any previous EECS patch level will also be able to update their systems to EECS 2.0 via an upgrade kit expected to be released shortly.


Weblogic

Error "java.net.BindException: Address already in use: JVM_Bind" when running Node Manager in Windows

After a brand new install of WebLogic Server, when accessing the node manager the following error is thrown:

GRAVE: Fatal error in node manager server
java.net.BindException: Address already in use: JVM_Bind
at java.net.PlainSocketImpl.socketBind(Native Method)
at java.net.PlainSocketImpl.bind(PlainSocketImpl.java:365)
at java.net.ServerSocket.bind(ServerSocket.java:319)
at javax.net.ssl.impl.SSLServerSocketImpl.bind(Unknown Source)
at java.net.ServerSocket.(ServerSocket.java:185)
at java.net.ServerSocket.(ServerSocket.java:141)
at javax.net.ssl.SSLServerSocket.(SSLServerSocket.java:84)
at javax.net.ssl.impl.SSLServerSocketImpl.(Unknown Source)
at javax.net.ssl.impl.SSLServerSocketFactoryImpl.createServerSocket(Unknown Source)
at weblogic.nodemanager.server.SSLListener.init(SSLListener.java:76)
at weblogic.nodemanager.server.NMServer.start(NMServer.java:206)
at weblogic.nodemanager.server.NMServer.main(NMServer.java:392)
at weblogic.NodeManager.main(NodeManager.java:31)

When WebLogic Server is installed on Windows, the Node Manager is set up as a Windows service which is started automatically and runs on port 5556 (the default). In other words, the Node Manager is already running. It can be shut down from the Windows Services console, and to prevent it from starting automatically along with Windows, its startup type can be changed.
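The underlying condition is easy to reproduce in plain Java, independently of WebLogic: binding a second ServerSocket to a port that is already bound throws the same java.net.BindException. A minimal sketch:

```java
import java.io.IOException;
import java.net.BindException;
import java.net.InetAddress;
import java.net.ServerSocket;

public class BindDemo {
    public static void main(String[] args) throws IOException {
        InetAddress loopback = InetAddress.getByName("127.0.0.1");
        // Bind to an ephemeral port first (port 0 lets the OS pick a free one)
        try (ServerSocket first = new ServerSocket(0, 50, loopback)) {
            int port = first.getLocalPort();
            try (ServerSocket second = new ServerSocket(port, 50, loopback)) {
                System.out.println("unexpected: second bind succeeded");
            } catch (BindException e) {
                // Same failure mode the Node Manager hits when port 5556 is taken
                System.out.println("BindException: port " + port + " already in use");
            }
        }
    }
}
```

This is why the fix is simply to stop the already-running service (or move one of the two listeners to a different port).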


Weblogic

Using WebLogic Server to Simply Store Static Content

A couple of weeks ago I saw a forum thread which I found very interesting. Someone was asking whether it is possible to use WebLogic as a web server, in a similar way to placing an index.html file in the 'www' directory of Apache. So I answered.

You can indeed use WebLogic as a web server, but unlike Apache, where you place an html file in the www directory and can see it immediately, in WebLogic you have to deploy it to a domain.

1. The html files (or any other static content, like pdf files) must be in a directory of your choice (e.g. C:\testhtml).
2. In that directory create a WEB-INF directory (following the example of step 1, it would be C:\testhtml\WEB-INF).
3. In the WEB-INF directory create a file called web.xml with the following content:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN" "http://java.sun.com/j2ee/dtds/web-app_2_3.dtd">
<web-app>
</web-app>

4. Log in to the WebLogic Console to deploy the application (e.g. http://[your_server]:7001/console ).
5. Click on Deployments.
6. Click Install and set the path to the directory of step 1 (if the directory only contains the html file and the WEB-INF subdirectory you will see no files to select, but the Next button will be enabled anyway).
7. Leave the default "Install this deployment as an application" and click Next.
8. Select the servers you wish to deploy this to.
9. Accept the defaults and click Finish.
10. Activate Changes if the message appears.
11. You should now see the application started in the Deployments screen.
12. You can now access your static content on the WebLogic Server port via the following URL:
http://[your_server]:7001/testhtml/[your_static_file]


Coherence

Bay Area Coherence Special Interest Group Next Meeting July 21, 2011

Date: Thursday, July 21, 2011
Time: 4:30pm - 8:15pm ET (note that parking at 475 Sansome closes at 8:30pm)
Where: Oracle Office, 475 Sansome Street, San Francisco, CA (Google Map)

We will be providing snacks and beverages. Register! Registration is required for building security.

Presentation Line-Up:
5:10pm - Batch Processing Using Coherence in Oracle Group Policy Administration - Paul Cleary, Oracle

Oracle Insurance Policy Administration (OIPA) is a flexible, rules-based policy administration solution that provides full record keeping for all policy lifecycle transactions. One component of OIPA is Cycle processing, which is the batch processing of pending insurance transactions. This presentation introduces OIPA and Cycle processing, describing the unique challenges of processing a high volume of transactions within strict time windows. It then reviews how OIPA uses Oracle Coherence and the Processing Pattern to meet these challenges, describing implementation specifics that highlight the simplicity and robustness of the Processing Pattern.

6:10pm - Secure, Optimize, and Load Balance Coherence with F5 - Chris Akker, F5

F5 Networks, Inc., the global leader in Application Delivery Networking, helps the world's largest enterprises and service providers realize the full value of virtualization, cloud computing, and on-demand IT. Recently, F5 and Oracle partnered to deliver a novel solution that integrates Oracle Coherence 3.7 with F5 BIG-IP Local Traffic Manager (LTM). This session will introduce F5 and how you can leverage BIG-IP LTM to secure, optimize, and load balance application traffic generated from Coherence*Extend clients across any number of servers in a cluster, and to hardware-accelerate CPU-intensive SSL encryption.

7:10pm - Using Oracle Coherence to Enable Database Partitioning and DC Level Fault Tolerance - Alexei Ragozin, Independent Consultant, and Brian Oliver, Oracle

Partitioning is a very powerful technique for scaling database-centric applications. One tricky part of a partitioned architecture is routing requests to the right database. The routing layer (routing table) should know the right database instance for each attribute which may be used for routing (e.g. account id, login, email, etc.): it should be fast, it should be fault tolerant, and it should scale. All of the above makes Oracle Coherence a natural choice for implementing such routing tables in partitioned architectures. This presentation will cover synchronization of the grid with multiple databases, conflict resolution, cross-cluster replication and other aspects of implementing a robust partitioned architecture.

Additional Info:

- Download Past Presentations: The presentations from the previous meetings of the BACSIG are available for download here. Click on the presentation titles to download the PDF files.- Join the Coherence online community on our Oracle Coherence Users Group on LinkedIn.- Contact BACSIG with any comments, questions, presentation proposals and content suggestions.


Enterprise Manager

"BPEL PM and OSB Operational Management with Oracle Enterprise Manager 10g Grid Control" Book - My Humble Review

After reviewing this book, I am really amazed by it. I really recommend it, especially if you work with Oracle tools like Enterprise Manager (of course), BPEL (and BPEL Process Manager), SOA Suite and/or OSB, if you are a SOA Architect, and/or if your work is focused on production environments.

As the title says, this book provides advanced techniques for the operational management of these products (even across multiple environments), including valuable and useful information for monitoring and automation tasks. The book explains very clearly, and with screenshots (which makes it even easier to read, understand and follow), how to perform several tasks that are necessary to keep production environments performing correctly, together with the subtasks that must be executed for them, with very good examples too.

The test sections in chapters 3, 10 and 13 (SOAP tests for partner links and BPEL processes, service tests on web applications, and SOAP tests of OSB proxy and business service endpoints) look especially interesting to me, and I really liked to see that there is special emphasis on the use of WebLogic Server as well.

For further information and to order the book, please go to the Packt Publishing web site.


JDeveloper/ADF

Working with Dates in the Model Layer

In ADF applications, model data is surfaced to the view and controller layers through data control objects implemented for each type of data provider. The Oracle ADF data controls provide a consistent mechanism for clients and web application controllers to access data and actions defined by these diverse data-provider technologies, including Oracle ADF Business Components, JavaBeans, EJB session beans, Web Services, etc.

Whether you are using the BC4J components or creating your own objects, the date object that the model layer must provide to the controllers is oracle.jbo.domain.Date. If you are using JavaBeans or EJBs you should convert dates from the Java date format to oracle.jbo.domain.Date where necessary, for example:

import java.text.DateFormat;
import java.text.SimpleDateFormat;
...
DateFormat formatter = new SimpleDateFormat("dd/MM/yyyy");
java.util.Date date = formatter.parse("25/10/2010");
java.sql.Date sqlDate = new java.sql.Date(date.getTime());
oracle.jbo.domain.Date jboDate = new oracle.jbo.domain.Date(sqlDate);

If you try to send a different date object from the model layer, you can get a conversion error.

To change the date format in BC4J components like View Objects or Entity Objects, you can execute the following steps:

1. Open the component.
2. Go to the Overview tab.
3. Go to the Attributes section at the left.
4. Double-click on the date field; the Edit Attribute window will open.
5. Go to the Control Hints section at the left.
6. Select the date format type "Simple Date".
7. Enter the desired format in the Format property.
8. Click Apply.
9. Click OK (the Edit Attribute window will close).

Note that dates in both BC4J and other objects that you can develop for the model layer (like EJBs or other Java classes) must be formatted with the Java date format, whose usage can be seen here, but ultimately the date sent to the ViewController must be an object of type oracle.jbo.domain.Date.
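The conversion chain up to java.sql.Date can be exercised without the ADF runtime on the classpath. A minimal, self-contained sketch (only the final oracle.jbo.domain.Date step, which needs the BC4J jars, is left as a comment):

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;

public class DateConversionDemo {
    public static void main(String[] args) throws ParseException {
        SimpleDateFormat formatter = new SimpleDateFormat("dd/MM/yyyy");
        java.util.Date date = formatter.parse("25/10/2010");

        // java.sql.Date carries only the date portion (no time of day)
        java.sql.Date sqlDate = new java.sql.Date(date.getTime());
        System.out.println(sqlDate); // prints 2010-10-25

        // With the BC4J libraries on the classpath, the last step would be:
        // oracle.jbo.domain.Date jboDate = new oracle.jbo.domain.Date(sqlDate);
    }
}
```

Note that java.sql.Date.toString() always renders in yyyy-MM-dd form, regardless of the pattern used to parse the original string.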


Coherence

How to Monitor Coherence-Based Applications using JRockit Mission Control

You can start JRockit Mission Control (JRMC) before the Coherence cluster. If you do that, then the Coherence nodes and clients will appear in the JVM browser section (at the left of the screen) when they are started up.

1. Go to your Coherence installation directory.
2. Start a Coherence cluster node (coherence.jar must be added to the classpath):

java -showversion -server -Xmx350m -cp lib/coherence.jar -Dcom.sun.management.jmxremote -Dtangosol.coherence.management=all -Dtangosol.coherence.management.remote=true com.tangosol.net.DefaultCacheServer

3. Start a Coherence cache client with no local storage:

java -cp lib/coherence.jar -Dtangosol.coherence.distributed.localstorage=false com.tangosol.net.CacheFactory

4. You can start more Coherence nodes in the same way as in the previous steps.
5. If you have not started JRMC yet, start it now.
6. Select the Coherence node or client that you want to monitor, right-click on it and select "Start Console" (in the same way you can perform memory leak detection and record runtime analyses).
7. If you have started a node with the management parameters, then you will be able to see the MBeans information.

Although JRMC can monitor Java applications running on JVMs other than JRockit (e.g. the Sun JVM), it is recommended to run the Coherence nodes on JRockit to make sure the JRMC monitoring will work fine and all of its features can be used.


JDeveloper/ADF

How to set bar chart colors programmatically according to values found in another column of the View Object

This example shows how to set bar fill colors according to data in another column of the same View Object (WarehouseStockLevelsView) that was used to create the bar chart. When you press the button, the colors change based on the corresponding zip code value for each bar (note that zip code is not a field used to build the graph), which means the code checks data in another column of the row and sets the bar colors according to the values in that column.

This sample works with the FOD db schema and the model project of the sample application used in the Introduction to ADF Data Visualization Components - Graphs, Gauge, Maps, Pivot Table and Gantt tutorial. To do something similar to the sample available in this blog post, you can do the following:

1. Execute the following sections of the tutorial mentioned above: Overview, Scenario, Prerequisites, and Add a Bar Chart. The "Create a Master Detail Order Page" section can be skipped, and within the "Add a Bar Chart" section it is enough to follow up to step 7 only.

2. Create a jspx and expose the UI contents on a managed bean.

3.
Source code for the jspx page used in this example (TestBarChart1.jspx):

<?xml version='1.0' encoding='windows-1252'?>
<jsp:root xmlns:jsp="http://java.sun.com/JSP/Page" version="2.1"
          xmlns:f="http://java.sun.com/jsf/core"
          xmlns:h="http://java.sun.com/jsf/html"
          xmlns:af="http://xmlns.oracle.com/adf/faces/rich"
          xmlns:dvt="http://xmlns.oracle.com/dss/adf/faces">
  <jsp:directive.page contentType="text/html;charset=windows-1252"/>
  <f:view>
    <af:document id="d1" binding="#{backingBeanScope.backing_TestBarChart1.d1}">
      <af:messages binding="#{backingBeanScope.backing_TestBarChart1.m1}" id="m1"/>
      <af:form id="f1" binding="#{backingBeanScope.backing_TestBarChart1.f1}">
        <dvt:barGraph id="barGraph1"
                      value="#{bindings.WarehouseStockLevelsView1.graphModel}"
                      subType="BAR_VERT_CLUST"
                      binding="#{backingBeanScope.backing_TestBarChart1.barGraph1}"
                      threeDEffect="true">
          <dvt:background>
            <dvt:specialEffects/>
          </dvt:background>
          <dvt:graphPlotArea/>
          <dvt:seriesSet>
            <dvt:series color="#ffffff"/>
            <dvt:series color="#ff0000"/>
            <dvt:series color="#0000ff"/>
          </dvt:seriesSet>
          <dvt:o1Axis/>
          <dvt:y1Axis/>
          <dvt:legendArea automaticPlacement="AP_NEVER"/>
          <dvt:o1MajorTick id="o1MajorTick1" tickStyle="GS_AUTOMATIC"/>
        </dvt:barGraph>
        <af:commandButton text="Change Chart Colors"
                          binding="#{backingBeanScope.backing_TestBarChart1.cb1}"
                          id="cb1"
                          action="#{backingBeanScope.backing_TestBarChart1.changeColors}"/>
      </af:form>
    </af:document>
  </f:view>
  <!--oracle-jdev-comment:auto-binding-backing-bean-name:backing_TestBarChart1-->
</jsp:root>

4.
Method used in the managed bean (TestBarChart1.java):

public void changeColors() {
    String amDef = "oracle.fod.model.FODModule";
    String config = "FODModuleLocal";
    ApplicationModule am = Configuration.createRootApplicationModule(amDef, config);
    ViewObject vo = am.findViewObject("WarehouseStockLevelsView1");
    Row row;
    while (vo.hasNext()) {
        row = vo.next();
        // if the corresponding zip code is higher than 94100 the color is set to black,
        // otherwise it is set to blue
        if (Integer.parseInt(row.getAttribute(7).toString()) > 94100)
            barGraph1.getSeriesSet().getSeries(vo.getCurrentRowIndex(), true).setColor(Color.BLACK);
        else
            barGraph1.getSeriesSet().getSeries(vo.getCurrentRowIndex(), true).setColor(Color.BLUE);
    }
}

When running the application you should get graphics like the following before and after pressing the button:

Figure 1 - Bar chart before pressing the button
Figure 2 - Bar chart after pressing the button

The sample (created with JDeveloper 11.1.1.2.0) can be downloaded here.
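The row-to-color decision itself can be exercised separately from the ADF binding layer. A minimal sketch of the threshold rule (the helper name is mine, not part of the sample):

```java
import java.awt.Color;

public class BarColorDemo {
    // Mirrors the rule in changeColors(): zip codes above 94100 get black bars
    static Color colorForZip(int zipCode) {
        return zipCode > 94100 ? Color.BLACK : Color.BLUE;
    }

    public static void main(String[] args) {
        System.out.println(colorForZip(94200).equals(Color.BLACK)); // true
        System.out.println(colorForZip(94000).equals(Color.BLUE));  // true
    }
}
```

Keeping the decision in a small function like this makes it easy to change the threshold without touching the View Object iteration code.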


APEX

"Oracle Application Express 3.2 - The Essentials and More" - Why Was This Book Necessary?

APEX requires certain knowledge and skills before you are able to use it, and there is not much documentation. Some Oracle DB components that are not used by many people, such as the PL/SQL gateway, must be known. Another reason is that although APEX is a RAD tool which runs inside an Oracle database, allowing you to develop applications using just a web browser, many prior skills are required, including:

- Strong knowledge of Oracle Database administration
- HTML, CSS and JavaScript
- Programming
- Networking and security

If you don't have all these skills, don't worry. This book puts the required skills and knowledge together and is a great guide to building web applications using APEX, providing a lot of help on all these topics. The book also contains a lot of examples. Once you have all this knowledge, APEX is not difficult to use (and much easier with this book at hand) and you can create web applications really fast.

It's impressive that you may not need to install an application server on your machine, just an Oracle Database, to make your web pages work and interact with the database, as the Oracle XML DB HTTP Server with the embedded PL/SQL gateway installs with Oracle Database 11g. It provides the database with a web server and the necessary infrastructure to create dynamic applications. You have the option to use Oracle HTTP Server as well (as a matter of fact, it looks like most customers use this choice).

Important topics for building web applications like page processing, regions, session management, security, validations, and (of course) lots of web components are reviewed in depth. Even the web services support of APEX is mentioned in the book. Note that APEX 4.0 was released just a few weeks ago. For further information and download, please go to http://apex.oracle.com

For further information and to order the book, please visit the Packt Publishing web site.


JRockit

"Oracle JRockit: The Definitive Guide" Book - My Humble Review

As its name says, this book is, in fact, The Definitive Guide. It's hard to be more exact than that. I strongly recommend it. It provides a very good technical overview of how JRockit works, and it's a great user guide too.

Just-in-Time (JIT) compilation is explained in depth, including all its benefits, the way it handles bytecode, and the good reasons why JRockit does not use interpretation for code translation. The adaptive code generation and memory management of JRockit are very well explained too, including the usage of command-line flags like -Xverbose, -XnoOpt, -XX:DisableOptsAfter, -XX:JITThreads, -XX:OptThreads, -XX:+UnlockDiagnosticVMOptions, -XX:AllowSystemGC, and others. The use of nurseries, memory leak detection and troubleshooting, compaction, GC strategies, young and old collections, and other important topics for fine tuning, improving performance and optimizing JRockit are described as well.

The book also contains complete chapters on JRockit Mission Control, the JRockit Runtime Analyzer (JRA), the JRockit Flight Recorder (JFR), which replaces JRA as of the R28 version, and JRCMD. These chapters provide very clear information about why, when and how to use these tools.

The last chapter is about "JRockit Virtual Edition". It was especially useful for me, as some Oracle products are sometimes not officially supported on virtualized environments.

For further information and to order the book, please go to the Packt Publishing web site.


New Packt Books: APEX & JRockit

I have received these 2 ebooks from Packt Publishing and I am currently reviewing them. Both of them look great so far.

Oracle Application Express 3.2 - The Essentials and More

First of all, I have to mention that I am new to APEX. I was interested in this product, which is a development tool for web applications on the Oracle Database. As I support JDeveloper and ADF, which are products that work very closely with the Oracle Database and form a rapid development tool as well, it is always interesting and useful to know complementary tools. APEX looks very useful and the book includes many working examples. A more complete review of this book is coming soon. Further information about this book can be seen at Packt.

Oracle JRockit: The Definitive Guide

Many of our Oracle Coherence customers run their caches and clusters on JRockit. This JVM has helped us solve lots of Service Requests. It is a really reliable, fast and stable JVM. It works great in both development and production environments with big amounts of data, concurrency, multi-threading and many other factors that can make a JVM crash. This book has been a very good complement to the JRockit training that I am attending, which is being delivered by Mattis Castegren. I must also mention JRockit Mission Control (JRMC), which is a great tool for management and monitoring. I really recommend it. As a matter of fact, some months ago I created a document entitled "How to Monitor Coherence-Based Applications using JRockit Mission Control" (Doc Id 961617.1) on My Oracle Support. Also, the JRockit Runtime Analyzer (JRA) and its successor in newer versions, the JRockit Flight Recorder (JFR), are reviewed in depth. This book contains very clear and complete information about all this and more. I will post an entry with a more complete review soon (and will probably post an entry about Coherence monitoring with JRMC soon too). Further information about this book can be seen at Packt.


JDeveloper/ADF

Enabling and Disabling ADF Components

The purpose of this post is to show different alternatives for enabling and disabling ADF components. There are 3 separate sample workspaces (11.1.1.2.0), one for each of the ways to enable and disable ADF components that will be shown.

1. Using the isDisabled() and setDisabled() methods in the managed bean.

In this example, the af:commandButton (cb1) calls a method in the backing bean which enables/disables the af:goLink:

<af:commandButton text="Press Me!" binding="#{backing_test1.cb1}"
                  id="cb1" action="#{backing_test1.toggleComponent}"/>
<br/>
<af:goLink text="goLink 1" binding="#{backing_test1.gl1}" id="gl1"
           destination="http://www.oracle.com"/>

This is the method used in the backing bean, which simply uses the isDisabled() and setDisabled() methods of the goLink:

public void toggleComponent() {
    if (gl1.isDisabled()) {
        gl1.setDisabled(false);
        gl1.setText("This goLink is now enabled");
    } else {
        gl1.setDisabled(true);
        gl1.setText("This goLink is now disabled");
    }
}

Further information about the isDisabled() and setDisabled() methods (which are also applicable to other ADF components) can be seen in the Java API Reference for Oracle ADF Faces. Download sample workspace here.

2. Using expression language to check whether the component is disabled.

As in the previous example, the af:commandButton (cb1) calls a method in the backing bean which enables/disables the af:inputDate:

<af:commandButton text="Press Me!" binding="#{backing_test1.cb1}"
                  id="cb1"
                  actionListener="#{backing_test1.toggleComponent}"/>
<br/>
<af:inputDate label="Date:" binding="#{backing_test1.id1}" id="id1"/>

This is the method used in the backing bean, which uses Expression Language to check whether the inputDate is disabled:

public void toggleComponent(ActionEvent event) {
    FacesContext fctx = FacesContext.getCurrentInstance();
    ELContext elctx = fctx.getELContext();
    Application jsfApp = fctx.getApplication();
    // create a ValueExpression that points to the ADF binding layer
    ExpressionFactory exprFactory = jsfApp.getExpressionFactory();
    ValueExpression valueExpr = exprFactory.createValueExpression(
                                 elctx,
                                 "#{!backing_test1.id1.disabled}",
                                 Object.class
                                );
    id1.setDisabled(Boolean.parseBoolean(valueExpr.getValue(elctx).toString()));
}

Further information about Expression Language techniques can be seen in this Advanced Expression Language Techniques OTN article. Download sample workspace here.

3. Using expression language in the Disabled property of the ADF component on the JSPX.

In this example the managed bean method to disable the second button is also called from the first button, but the second button has its disabled property set on the JSPX to take the EL value from the managed bean attribute:

<af:commandButton text="Click me" id="cb1"
                  actionListener="#{ToggleTestBk.toggleComponent}"/>
<br/>
<af:commandButton text="commandButton 2" id="cb2"
                  disabled="#{ToggleTestBk.isButtonEnabled}"
                  partialSubmit="true"/>

The managed bean method simply uses a private boolean attribute, which indicates whether the button is enabled, and changes its value every time it is invoked:

private boolean isButtonEnabled;
...
public void toggleComponent(ActionEvent event) {
    try {
        this.setIsButtonEnabled(!isButtonEnabled);
    } catch (Exception ex) {
        System.out.println("Exception in toggleButton: " + ex);
    }
}

In the downloadable sample workspace provided for this example, the scope configured for the managed bean is application, but the application works with session scope as well. Further information about the 4 possible values for the managed bean scope element can be seen in this JSF Managed Bean Facility OTN article. Download sample workspace here.

In the first workspace a goLink is enabled and disabled, in the second one an inputDate is used, and in the third one a commandButton. There are many other ADF components that have the Disabled property. Three approaches, one similar behaviour. Obviously, you can figure out many other ways to get this enable/disable functionality on ADF components. It's up to you to decide which alternative is best for you.


Coherence

"Oracle Coherence 3.5" Book - My Humble Review

After reviewing the book in more detail, I say again that it is certainly a great guide. Lots of important concepts that can sometimes be confusing are reviewed in depth, including all types of caching schemes and backing maps, and the cache topologies with their corresponding performance characteristics and very useful "When to use it?" sections.

Some functionalities that are very desirable or heavily used are reviewed with examples and implementation best practices, including:

- Data affinity
- Querying
- Pagination
- Indexes
- Aggregations
- Event processing, listening and triggering
- Data persistence
- Security

Regarding the networking and architecture topics, Coherence*Extend is exhaustively reviewed, including C++ and .NET clients, with very good tips and examples, even including source code. Personally, I am also glad to see that address providers (the <address-provider> tag), a new feature in Coherence 3.5 that provides a way to programmatically supply well-known addresses in order to connect to the cluster, are mentioned in the book, because they enable new functionality to satisfy some special configuration requirements, for example:

- Provide a way to switch extend nodes in cases of failure
- Implement custom load balancing algorithms and/or dynamic discovery of TCP/IP connection acceptors
- Dynamically assign TCP address and port settings when binding to a server socket

Another very interesting and useful section is the "Coherent Bank Sample Application", which is a great tutorial, useful for understanding how Coherence interacts with third-party products, establishing a clear integration with them, including the use of non-Oracle products like MS Visual Studio.

For further information and to order the book, please go to the Packt Publishing web site.


Coherence

Coherence Special Interest Group: First Meeting in Toronto and Upcoming Events in New York and California

The first meeting of the Toronto Coherence Special Interest Group (TOCSIG).
Date: Friday, April 23, 2010
Time: 8:30am-12:00pm
Where: Oracle Mississauga Office, Customer Visitation Center, 110 Matheson Blvd. West, Suite 100, Mississauga, ON L5R3P4
Cameron Purdy, Vice President of Development (Oracle), Patrick Peralta, Senior Software Engineer (Oracle), and Noah Arliss, Software Development Manager (Oracle) will be presenting. Further information about this event can be seen here.

The New York Coherence SIG is hosting its seventh meeting.
Date: Thursday, Apr 15, 2010
Time: 5:30pm-5:45pm ET social and 5:45pm-8:00pm ET presentations
Where: Oracle Office, Room 30076, 520 Madison Avenue, 30th Floor, NY
Patrick Peralta, Dr. Gene Gleyzer, and Craig Blitz from Oracle will be presenting. Further information about this event can be seen here.

The Bay Area Coherence SIG is hosting its fifth meeting.
Date: Thursday, Apr 29, 2010
Time: 5:30pm-5:45pm PT social and 5:45pm-8:00pm PT presentations
Where: Oracle Conference Center, 350 Oracle Parkway, Room 203, Redwood Shores, CA
Tom Lubinski from SL Corp., Randy Stafford from the Oracle A-Team, and Taylor Gautier from Grid Dynamics will be presenting. Further information about this event can be seen here.

Great news, isn't it?


Coherence

java.util.ConcurrentModificationException when serializing non thread-safe maps

We have got some questions related to exceptions thrown during map serialization, like the following one (in this example, for a LRUMap):

java.util.ConcurrentModificationException
	at org.apache.commons.collections.SequencedHashMap$OrderedIterator.next(Unknown Source)
	at org.apache.commons.collections.LRUMap.writeExternal(Unknown Source)
	at java.io.ObjectOutputStream.writeExternalData(ObjectOutputStream.java(Compiled Code))
	at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java(Compiled Code))
	at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java(Compiled Code))
	at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java(Inlined Compiled Code))
	at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java(Compiled Code))
	at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java(Compiled Code))
	at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java(Compiled Code))
	at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java(Inlined Compiled Code))
	at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java(Compiled Code))
	at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java(Compiled Code))
	at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java(Compiled Code))
	at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java(Inlined Compiled Code))
	at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java(Compiled Code))
	at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java(Compiled Code))
	at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java(Compiled Code))
	at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java(Compiled Code))
	at com.tangosol.util.ExternalizableHelper.writeSerializable(ExternalizableHelper.java(Inlined Compiled Code))
	at com.tangosol.util.ExternalizableHelper.writeObjectInternal(ExternalizableHelper.java(Compiled Code))
	at com.tangosol.util.ExternalizableHelper.serializeInternal(ExternalizableHelper.java(Compiled Code))
	at com.tangosol.util.ExternalizableHelper.toBinary(ExternalizableHelper.java(Inlined Compiled Code))
	at com.tangosol.util.ExternalizableHelper.toBinary(ExternalizableHelper.java(Inlined Compiled Code))
	at com.tangosol.coherence.servlet.TraditionalHttpSessionModel$OptimizedHolder.serializeValue(TraditionalHttpSessionModel.java(Inlined Compiled Code))
	at com.tangosol.coherence.servlet.TraditionalHttpSessionModel$OptimizedHolder.getBinary(TraditionalHttpSessionModel.java(Compiled Code))

This is caused because LRUMap is not thread-safe, so if another thread modifies the content of that same map while serialization is in progress, a ConcurrentModificationException will be thrown. Other structures like java.util.HashMap are not thread-safe either. To avoid this kind of problem, it is recommended to use a thread-safe, synchronized map such as java.util.Hashtable or com.tangosol.util.SafeHashMap, or to wrap the map using the Collections.synchronizedMap(Map) method from java.util.Collections (remembering that iteration over a synchronized wrapper must still be synchronized manually on the map itself).
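As a minimal sketch of the recommended fix (plain JDK only, no Coherence classes required), a non-thread-safe map can be wrapped with Collections.synchronizedMap; note that iteration, which serialization performs internally, still requires holding the map's monitor:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class SynchronizedMapExample {
    public static void main(String[] args) {
        // Wrap a non-thread-safe HashMap in a synchronized view.
        Map<String, String> cache =
                Collections.synchronizedMap(new HashMap<String, String>());
        cache.put("1", "Red");
        cache.put("2", "Blue");

        // Individual calls (put/get) are synchronized automatically, but
        // iteration -- which Object serialization performs internally --
        // must hold the map's monitor, otherwise a concurrent writer can
        // still trigger ConcurrentModificationException.
        synchronized (cache) {
            for (Map.Entry<String, String> e : cache.entrySet()) {
                System.out.println(e.getKey() + "=" + e.getValue());
            }
        }
    }
}
```

The same pattern applies to an LRUMap instance: wrap it once, and have both the writer threads and the serializing thread use only the synchronized wrapper.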


Coherence

"Oracle Coherence 3.5" - Book and Author Podcast

The first book written about Oracle Coherence can be pre-ordered now at a significant discount and will be available in March 2010: "Oracle Coherence 3.5: Create Internet-scale applications using Oracle's high-performance data grid".

Authored by leading Oracle Coherence authorities, this essential book will teach you how to use Oracle Coherence to build high-performance applications that scale to hundreds of machines and have no single points of failure. You will learn when and how to use Coherence features such as distributed caching, parallel processing, and real-time events within your application, and understand how Coherence fits into the overall application architecture. See the Packt Publishing web site to pre-order the book and ebook at a discounted rate. Please encourage your Coherence customers to purchase this indispensable guide to Oracle Coherence.

Listen to the Author Podcast: a conversation with Aleksandar Seovic, author of "Oracle Coherence 3.5: Create Internet-scale applications using Oracle's high-performance data grid" (MP3). Scalability, performance, and reliability have to be designed into an application from the very beginning. Author Aleksandar Seovic talks with Cameron Purdy about how to achieve these things using Oracle Coherence, and about his book "Oracle Coherence 3.5" and how it will help you leverage Oracle's leading data grid solution to build massively scalable, high-performance applications.

About the Author: Aleksandar Seovic
Aleksandar Seovic is a founder and Managing Director at S4HC, Inc., where he has worked in the architect role on both .NET and Java projects and has led the development effort on a number of engagements for Fortune 500 clients, mostly in the pharmaceutical and financial services industries. Aleksandar led the implementation of Oracle Coherence for .NET, a client library that allows applications written in any .NET language to access data and services provided by the Oracle Coherence data grid, and was one of the key people involved in the design and implementation of the Portable Object Format (POF), a platform-independent object serialization format that allows seamless interoperability of Coherence-based Java, .NET, and C++ applications. Aleksandar frequently speaks about and evangelizes Coherence at industry conferences, Java and .NET user group events, and Coherence SIGs. He can be reached at aleks@s4hc.com.

See the Solutions for Human Capital (S4HC) web site for more information about the services this valued partner provides.


Coherence

The MachineName and UnicastAddress Attributes of the Coherence Nodes MBeans

Some time ago, I was asked why the MachineName and the UnicastAddress attributes may have similar values or, in other cases, completely different ones. In most cases, if no special configuration is applied to a Coherence node, its MachineName MBean value will be exactly the same name configured in the OS of the physical machine, and the UnicastAddress value will have a format like machine_name/ip_address. Conceptually, a unicast address is an address assigned to a single network interface, used for point-to-point communication between hosts.

The following definition of the machine-name parameter can be seen in the Oracle Coherence documentation: "The machine-name element contains the name of the physical server that the member is hosted on. This is often the same name as the server identifies itself as (e.g. its HOSTNAME, or its name as it appears in a DNS entry). If provided, the machine-name is used as the basis for creating a machine-id, which in turn is used to guarantee that data are backed up on different physical machines to prevent single points of failure (SPOFs). The name is also useful for displaying management information (e.g. JMX) and interpreting log entries."

Hence, the MachineName MBean value can be changed by configuring the machine-name element. However, if it is not set through the Coherence configuration, that value can be affected by the hosts file or, if the Coherence node or cluster is running on an Application Server, by the Application Server's configuration. Four scenarios will be described.

1. If there are no special configurations in the machine's hosts file and the machine-name parameter is not set in the Coherence configuration, then the MachineName value in JConsole (or another JMX-compliant monitoring tool) should simply be the same one configured in the OS of the physical machine (in this case, it was "csoto-cl"). See figures 1 and 2:
Figure 1 - Default hosts file
Figure 2 - JConsole where the MachineName and UnicastAddress attributes have similar values

2. If the hosts file has a hostname configured for the IP of the machine (and the machine-name parameter is not set in the Coherence configuration), then that name will be the one shown in JConsole. See figures 3 and 4:
Figure 3 - The hosts file with a name configured for the IP of the machine
Figure 4 - JConsole showing the MachineName configured in the hosts file

3. Now, if the machine-name parameter is set in the Coherence configuration file, then that name will be the one displayed, regardless of whether the hosts file contains a different value. As can be seen in the figures below, the value set in tangosol-coherence-override.xml (in this example, we will use "mymachine") is displayed instead of the one previously set in the hosts file (which was "testmachine"). See figures 5 and 6:
Figure 5 - The tangosol-coherence-override.xml file with the machine-name parameter configured
Figure 6 - JConsole showing the MachineName configured in the tangosol-coherence-override.xml file

4. The machine-name parameter can also be configured using the Command Line Setting Override Feature, and that value even overrides the machine-name value specified in the tangosol-coherence-override.xml file. In this case the option -Dtangosol.coherence.machine=newname was added to the startup script.
Figure 7 - JConsole showing the MachineName set with the Command Line Setting Override Feature

Note that, despite all the performed changes, the UnicastAddress value kept the machine name and the machine_name/ip_address format configured in the OS in all cases, as can be seen in the above JConsole screenshots. For further information about how to manage Coherence using JMX, please take a look at http://download.oracle.com/docs/cd/E14526_01/coh.350/e14509/managejmx.htm
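For scenario 3, a minimal sketch of the override file follows (assuming the Coherence 3.5 operational override schema, where machine-name lives under cluster-config/member-identity; check your tangosol-coherence.xml for the exact structure of your version):

```xml
<?xml version="1.0"?>
<coherence>
  <cluster-config>
    <member-identity>
      <!-- Overrides the value derived from the OS/hosts file; this value
           is itself overridden by -Dtangosol.coherence.machine=... -->
      <machine-name>mymachine</machine-name>
    </member-identity>
  </cluster-config>
</coherence>
```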


Coherence

Next Bay Area Coherence SIG - January 21, 2010

Bay Area Coherence Special Interest Group
NEXT MEETING: Thursday - January 21, 2010

Announcing the next meeting of the Bay Area Coherence Special Interest Group (BACSIG). Whether you're an experienced Coherence user, or new to Data Grid technology, the BACSIG is the community for realizing Coherence-related projects and best practices. Please email the BACSIG directly to register for this event. Your full name and company name are required for building security.

We are very excited that Mariano Hernandez from Macys.com, Shaun Smith & Doug Clarke from the Oracle TopLink Grid team, and Christer Fahlgren from the Coherence Incubator team will be presenting. See the presentation details below. We hope to see you at our next meeting.

Date: Thursday, Jan 21, 2010
Time: 5:30pm-5:45pm PT social and 5:45pm-8:00pm PT presentations
Where: Oracle Conference Center, 350 Oracle Parkway, Room 203, Redwood Shores, CA, Google Map

We will be providing snacks and beverages. Register! Registration is required for building security.

Presentations:

"Coherence in the Retail Space: A Case Study of the Adoption of Coherence at Macys.com" - Mariano Hernandez (speaker bio), Senior eCommerce Developer (Macys.com)
In this talk the presenter will discuss the original limitations of the Macys.com infrastructure which led to the decision to implement Coherence as a side cache. The discussion then moves into the design and implementation of Coherence at Macys.com. The topics covered include infrastructure topology, cached object modeling, Spring-enabling Coherence, central management of cluster configuration, and an implementation of the Command Cache pattern. The discussion will end with findings and a plan for future development.

"TopLink Grid: Scaling JPA Applications with Coherence" - Shaun Smith (speaker bio), Product Manager (Oracle) & Doug Clarke (speaker bio), Principal Product Manager (Oracle)
JPA, the Java Persistence API, is the Java standard for relational database access. But as is the case for many technologies, scaling JPA applications into large clusters is a challenge. In this session we'll introduce Oracle TopLink's new "TopLink Grid" feature, which supports scaling JPA applications through Coherence integration. We'll also introduce the new TopLink Grid enabled "JPA on the Grid" application architecture, in which the underlying relational database is replaced by Coherence and all standard database operations, including JPQL query execution, are handled by Coherence. In this session we'll demo how easy it is to leverage Coherence from JPA using TopLink Grid and how to build standards-compliant Java EE applications that scale.

"Introduction to the Coherence Incubator" - Christer Fahlgren (speaker bio), Group Product Manager (Oracle)
The Coherence Incubator hosts innovative example implementations of commonly used design patterns, system integration solutions, distributed computing concepts and other artifacts designed to enable rapid delivery of solutions to potentially complex business challenges built using or based on Oracle Coherence. This talk provides an overview of Coherence Incubator functionality, including the Processing Pattern and the Messaging Pattern, and will introduce what is coming in the next release.

We look forward to seeing you all at the 4th BACSIG meeting on January 21st. Contact BACSIG with any comments, questions, presentation proposals and content suggestions. Join the BACSIG online community in our Bay Area Coherence SIG Group on Oracle Mix.


Coherence

Winter 2010 Edition of the New York Coherence SIG

New York Coherence Special Interest Group
NEXT MEETING: Thursday - January 14, 2010

Announcing the next meeting of the New York Coherence Special Interest Group (NYCSIG). Whether you're an experienced Coherence user, or new to Data Grid technology, the NYCSIG is the community for realizing Coherence-related projects and best practices. Please email the NYCSIG directly to register for this event. Your full name and company name are required for building security.

We are very excited that Taylor Gautier from Grid Dynamics, Shaun Smith & Doug Clarke from the Oracle TopLink Grid team, and Noah Arliss from the Coherence Incubator team will be presenting. See the presentation details below. We hope to see you at our next meeting.

Date: Thursday, Jan 14, 2010
Time: 5:30pm-5:45pm ET social and 5:45pm-8:00pm ET presentations
Where: Oracle Office, Room 30076, 520 Madison Avenue, 30th Floor, NY, NY Google Map

We will be providing snacks and beverages. Register! Registration is required for building security.

Presentations:

"Extreme Transaction Processing - Leveraging Oracle Coherence as a SEDA-based Scalable Event Transaction Processor" - Taylor Gautier (speaker bio), Principal Client Architect (Grid Dynamics)
Modern telco billing processing systems face huge challenges in the search to move from mostly offline, or batch, processing of billing events to handling huge volumes of events in near-real time. The challenges for an architect of such a system are: how to handle the high-throughput processing of near-real-time events with low latency while maintaining strict transactional semantics, and how to provide high availability and scalability of the service. We present our extreme transaction processing solution leveraging Oracle Coherence as an in-memory data grid (IMDG) which provides reliable messaging, data storage, and asynchronous write-back to an RDBMS. The system follows the staged event-driven architecture (SEDA) design pattern, providing for a flexible and manageable design that is both highly available and scalable. We discuss lessons learned during performance tuning and profiling, plus best practices that are applicable for any Coherence user.

"TopLink Grid: Scaling JPA Applications with Coherence" - Shaun Smith (speaker bio), Product Manager (Oracle) & Doug Clarke (speaker bio), Principal Product Manager (Oracle)
JPA, the Java Persistence API, is the Java standard for relational database access. But as is the case for many technologies, scaling JPA applications into large clusters is a challenge. In this session we'll introduce Oracle TopLink's new "TopLink Grid" feature, which supports scaling JPA applications through Coherence integration. We'll also introduce the new TopLink Grid enabled "JPA on the Grid" application architecture, in which the underlying relational database is replaced by Coherence and all standard database operations, including JPQL query execution, are handled by Coherence. In this session we'll demo how easy it is to leverage Coherence from JPA using TopLink Grid and how to build standards-compliant Java EE applications that scale.

"Coherence Incubator Update & the Processing Pattern Revisited" - Noah Arliss (speaker bio), Senior Software Engineer (Oracle)
This presentation will discuss some of the feature themes for the next set of Coherence Incubator releases planned for the end of January. We've done a lot of work since the first release of the Processing Pattern. In this talk we'll look at some of the changes and new features to enable "compute-grid" style development with Coherence.

We look forward to seeing you all at the 6th NYCSIG meeting on January 14th. Contact NYCSIG with any comments, questions, presentation proposals and content suggestions. Join the NYCSIG online community in our New York Coherence SIG Group on Oracle Mix.


JDeveloper/ADF

How to programmatically populate a selectOneChoice component using data stored on a Coherence cache

The purpose of this is to show how both the web application and database performance can be improved by using Oracle Coherence, as the jspx page does not need to query the database. This document contains a simple example (how-to steps and a downloadable sample workspace) of how to populate ADF components based on a Coherence cache. The JDeveloper 10.1.3.5 workspace can be downloaded from here and does not need a database connection to run, but coherence.jar must be imported as explained in step 4.

1. Create a new web application with JSF and ADF BC.

2. Once this new application is created, go to the ViewController project and create a new jspx page. Expose the UI components in a managed bean.

3. Place an af:selectOneChoice component on the jspx page.

4. Import coherence.jar into the ViewController project. Right-click on its icon, select "Project Properties", select "Libraries" in the left menu, click on the "Add Jar/Directory ..." button, browse to your coherence.jar file and click "Select".

5. A way to make sure the data contained in the Coherence cache is populated when the page loads is to use the backing bean class constructor to call the populating method (the OnPageLoad-PagePhaseListener functionality is useful in pages of applications that use a database connection to run).

6.
Source code for the jspx page and the managed bean used in this example (below):

TestPage.jspx

<?xml version='1.0' encoding='windows-1252'?>
<jsp:root xmlns:jsp="http://java.sun.com/JSP/Page" version="2.0"
          xmlns:h="http://java.sun.com/jsf/html"
          xmlns:f="http://java.sun.com/jsf/core"
          xmlns:af="http://xmlns.oracle.com/adf/faces"
          xmlns:afh="http://xmlns.oracle.com/adf/faces/html">
  <jsp:output omit-xml-declaration="true" doctype-root-element="HTML"
              doctype-system="http://www.w3.org/TR/html4/loose.dtd"
              doctype-public="-//W3C//DTD HTML 4.01 Transitional//EN"/>
  <jsp:directive.page contentType="text/html;charset=windows-1252"/>
  <f:view>
    <afh:html>
      <afh:head title="TestPage">
        <meta http-equiv="Content-Type"
              content="text/html; charset=windows-1252"/>
      </afh:head>
      <afh:body>
        <h:form>
          <af:selectOneChoice label="Choices:"
                              binding="#{backing_TestPage.selectOneChoice1}"
                              id="selectOneChoice1">
            <f:selectItems value="#{backing_TestPage.listOfItems}"
                           id="selectItems1"/>
          </af:selectOneChoice>
        </h:form>
      </afh:body>
    </afh:html>
  </f:view>
  <!--oracle-jdev-comment:auto-binding-backing-bean-name:backing_TestPage-->
</jsp:root>

TestBean.java

package view.backing;

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.util.Filter;
import com.tangosol.util.extractor.IdentityExtractor;
import com.tangosol.util.filter.LikeFilter;

import java.util.ArrayList;
import java.util.HashSet;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.Set;

import javax.faces.model.SelectItem;

import oracle.adf.view.faces.component.core.input.CoreSelectOneChoice;

public class TestBean {

    private CoreSelectOneChoice selectOneChoice1;
    private List listOfItems = new ArrayList();

    // Constructor
    public TestBean() {
        populateComponent();
    }

    public void populateComponent() {
        SelectItem si = new SelectItem();
        CacheFactory.ensureCluster();
        NamedCache c = CacheFactory.getCache("mycache");
        Filter query = new LikeFilter(IdentityExtractor.INSTANCE, "%", '\\', true);
        // Perform query, return keys of entries that match.
        // In this case the query will return all the entries in the cache.
        Set keys = c.keySet(query);
        Set hashSet = new HashSet();
        for (Iterator i = keys.iterator(); i.hasNext();) {
            hashSet.add(i.next());
        }
        Map map = c.getAll(hashSet);
        for (Iterator ie = map.entrySet().iterator(); ie.hasNext();) {
            Map.Entry e = (Map.Entry) ie.next();
            System.out.println("key: " + e.getKey() + ", value: " + e.getValue());
            si = new SelectItem();
            si.setLabel(e.getValue().toString());
            si.setValue(e.getValue().toString());
            listOfItems.add(si);
        }
        CacheFactory.shutdown();
    }

    public void setSelectOneChoice1(CoreSelectOneChoice selectOneChoice1) {
        this.selectOneChoice1 = selectOneChoice1;
    }

    public CoreSelectOneChoice getSelectOneChoice1() {
        return selectOneChoice1;
    }

    public void setListOfItems(List listOfItems) {
        this.listOfItems = listOfItems;
    }

    public List getListOfItems() {
        return listOfItems;
    }
}

7. Before running the page, make sure the Coherence cache contains data. In this example, to make it easier, we do it directly through the Coherence console. To do this, run cache-server.cmd (Windows) or cache-server.sh (Linux/Unix) to make sure there is at least one storage-enabled node, and then coherence.cmd or coherence.sh to put data in the cache. In this example we will use a Coherence cache named "mycache" and put 3 entries in it: "Red", "Blue" and "Yellow", as can be seen in the image below.

8.
Before running the page, set it to run as a client node (storage disabled) by adding the following Java option: -Dtangosol.coherence.distributed.localstorage=false

9. Run the jspx page.

The JDeveloper 10.1.3.5 workspace can be downloaded from here. It does not need a database connection to run, but coherence.jar must be imported as explained in step 4. To learn how to populate a Coherence cache based on a database query, please see http://coherence.oracle.com/display/COH33UG/Bulk+Loading+and+Processing+with+Coherence
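The console interaction in step 7 can be sketched as follows (a rough sketch of the Coherence command-line console; the exact prompt text varies by version, and keys/values entered this way are plain strings):

```
cache mycache
put 1 Red
put 2 Blue
put 3 Yellow
list
```

The cache command attaches the console to the named cache, put adds each entry, and list prints the cache contents so you can verify the three entries before running the page.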
