Thursday Jun 27, 2013

An Overview of Batch Processing in Java EE 7

Up on otn/java is a new article by Oracle senior software engineer Mahesh Kannan, titled “An Overview of Batch Processing in Java EE 7.0,” which explains the new batch processing capabilities provided by JSR 352 in Java EE 7. Kannan explains that “Batch processing is used in many industries for tasks ranging from payroll processing; statement generation; end-of-day jobs such as interest calculation and ETL (extract, transform, and load) in a data warehouse; and many more. Typically, batch processing is bulk-oriented, non-interactive, and long running—and might be data- or computation-intensive. Batch jobs can be run on schedule or initiated on demand. Also, since batch jobs are typically long-running jobs, check-pointing and restarting are common features found in batch jobs.”

JSR 352 defines the programming model for batch applications plus a runtime to run and manage batch jobs. The article covers feature highlights, selected APIs, and the structure of the Job Specification Language (JSL), and explains some of the key functions of JSR 352 using a simple payroll processing application. The article also describes how developers can run batch applications using GlassFish Server Open Source Edition 4.0.
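In JSL, a job is declared in an XML document, with chunk-style steps naming their reader, processor, and writer artifacts. The fragment below is a minimal illustrative sketch; the job and artifact names are invented here, not taken from Kannan's article:

```xml
<!-- META-INF/batch-jobs/payroll-job.xml (names are illustrative) -->
<job id="payroll-job" xmlns="http://xmlns.jcp.org/xml/ns/javaee" version="1.0">
    <step id="process-records">
        <chunk item-count="10">
            <reader ref="payrollRecordReader"/>
            <processor ref="payrollRecordProcessor"/>
            <writer ref="payrollRecordWriter"/>
        </chunk>
    </step>
</job>
```

The item-count attribute sets the chunk size: the runtime reads and processes ten items, writes them as one unit, and takes a checkpoint at each such commit.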

Kannan summarizes the article as follows:

“In this article, we saw how to write, package, and run simple batch applications that use chunk-style steps. We also saw how the checkpoint feature of the batch runtime allows for the easy restart of failed batch jobs. Yet, we have barely scratched the surface of JSR 352. With the full set of Java EE components and features at your disposal, including servlets, EJB beans, CDI beans, EJB automatic timers, and so on, feature-rich batch applications can be written fairly easily.”
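The chunk-and-checkpoint behavior Kannan describes can be sketched in plain Java. This is a toy simulation of the idea, not the javax.batch API: items are committed in chunks, the reader's position is checkpointed at each commit, and a restart resumes from the last checkpoint rather than from the beginning.

```java
import java.util.ArrayList;
import java.util.List;

// Toy simulation of JSR 352's chunk/checkpoint behavior (not the real API).
public class ChunkCheckpointDemo {
    static final int CHUNK_SIZE = 3;

    // Runs the "job" from the given checkpoint and returns the new one.
    // failAtItem simulates a crash while processing that item index.
    static int runFrom(int checkpoint, List<Integer> input, List<Integer> output,
                       int failAtItem) {
        int position = checkpoint;
        while (position < input.size()) {
            List<Integer> chunk = new ArrayList<>();
            int end = Math.min(position + CHUNK_SIZE, input.size());
            for (int i = position; i < end; i++) {
                if (i == failAtItem) {
                    return position; // simulated failure: the open chunk rolls back
                }
                chunk.add(input.get(i) * 2); // "process" the item
            }
            output.addAll(chunk);  // "write" the chunk (commit)
            position = end;        // the checkpoint advances only on commit
        }
        return position;
    }

    public static void main(String[] args) {
        List<Integer> input = List.of(1, 2, 3, 4, 5, 6, 7);
        List<Integer> output = new ArrayList<>();

        // First run fails while processing item index 4 (inside the second chunk).
        int checkpoint = runFrom(0, input, output, 4);
        System.out.println("checkpoint after failure: " + checkpoint); // 3
        System.out.println("written so far: " + output);               // [2, 4, 6]

        // Restart from the checkpoint; no committed item is processed twice.
        runFrom(checkpoint, input, output, -1);
        System.out.println("final output: " + output); // [2, 4, 6, 8, 10, 12, 14]
    }
}
```

The failed chunk is discarded whole, so after the restart the output contains each processed item exactly once.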

Check out the article here.

Tuesday Apr 24, 2012

Spring to Java EE Migration – Part 4, the Finale

In a new article, now up on otn/java, titled “Spring to Java EE Migration, Part 4,” David Heffelfinger presents the final part of his series in which he demonstrates the ease of migration from the Spring Framework to Java EE. Here he compares equivalent functionality in Java EE and Spring in areas such as MVC design pattern implementation, data access, transaction management, and dependency injection.

He concludes the series with these remarks:

“In this series of articles, we developed a Java EE version of Spring’s Pet Clinic application. We saw how the advanced tooling provided by NetBeans enables us to quickly develop a Java EE application…. Once we were done building the Java EE version of the application, we compared it with the Spring version, noting that the original version has several dependencies whereas the Java EE version has none, because it takes advantage of all the services provided by the Java EE application server.

Finally, we compared how to implement similar functionality such as MVC and DAO implementation, transaction management, and dependency injection with Spring and Java EE. In every case with Spring, some XML configuration needs to be done besides adding annotations to the code. Java EE relies on convention, and in most cases, no XML configuration is needed in order to implement these services.

Although newer versions of Spring rely a lot less on explicit XML configuration than earlier versions, there are always a few little lines here and there that we need to add to an XML configuration file in order to get most of the Spring annotations to work, violating the DRY (don’t repeat yourself) principle...

Additionally, Spring applications tend to have several dependencies, because they are meant to run in a “lightweight” Servlet container such as Tomcat or Jetty and these containers don’t provide all the required functionality. In contrast, Java EE applications are meant to be deployed in a full-blown Java EE 6 application server such as Oracle GlassFish Server...

For these reasons, I always recommend Java EE over Spring for enterprise application development.”
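Heffelfinger's convention-over-configuration point can be illustrated with a toy container. The @Inject annotation and container below are local stand-ins written for this sketch (the real CDI types live in the application server); the key observation is that all the wiring information sits in annotations on the code itself, so no external XML file is needed:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;
import java.util.HashMap;
import java.util.Map;

// Toy sketch of annotation-driven dependency injection in the CDI style.
// @Inject here is a stand-in defined locally, not javax.inject's annotation.
public class TinyContainer {
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    @interface Inject {}

    static class ClinicRepository {
        String findPet() { return "Leo the cat"; }
    }

    static class ClinicService {
        @Inject ClinicRepository repository; // wiring declared in the code

        String describeFirstPet() { return "First pet: " + repository.findPet(); }
    }

    private final Map<Class<?>, Object> beans = new HashMap<>();

    // Instantiates the bean, then satisfies every @Inject field recursively.
    <T> T getBean(Class<T> type) {
        try {
            Object existing = beans.get(type);
            if (existing != null) return type.cast(existing);
            T bean = type.getDeclaredConstructor().newInstance();
            beans.put(type, bean);
            for (Field f : type.getDeclaredFields()) {
                if (f.isAnnotationPresent(Inject.class)) {
                    f.setAccessible(true);
                    f.set(bean, getBean(f.getType()));
                }
            }
            return bean;
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        ClinicService service = new TinyContainer().getBean(ClinicService.class);
        System.out.println(service.describeFirstPet()); // First pet: Leo the cat
    }
}
```

A real CDI container does broadly this at deployment time, scanning the archive for beans instead of being asked for them one by one.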

Have a look at the article here.

Wednesday Nov 30, 2011

The JavaServer Faces 2.2 viewAction Component

Life just got easier for users of JavaServer Faces. In a new article, now up on otn/java, titled “New JavaServer Faces 2.2 Feature: The viewAction Component,” Tom McGinn, Oracle’s Principal Curriculum Developer for Oracle Server Technologies, explores the advantages offered by the JavaServer Faces 2.2 view action feature, which, according to McGinn, “simplifies the process for performing conditional checks on initial and postback requests, enables control over which phase of the lifecycle an action is performed in, and enables both implicit and declarative navigation.”

As McGinn observes: “A view action operates like a button command (UICommand) component. By default, it is executed during the Invoke Application phase in response to an initial request. However, as you'll see, view actions can be invoked during any phase of the lifecycle and, optionally, during postback, making view actions well suited to performing preview checks.”

McGinn explains that the JavaServer Faces 2.2 view action feature offers several advantages over the previous method of performing evaluations before a page is rendered:

   * View actions can be triggered early on, before a full component tree is built, resulting in a lighter-weight call.

   * View action timing can be controlled.

   * View actions can be used in the same context as the GET request.

   * View actions support both implicit and explicit navigation.

   * View actions support both non-faces (initial) and faces (postback) requests.
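In markup, a view action is declared in the page's f:metadata section. The sketch below is illustrative only; the bean, property, and action names are hypothetical:

```xml
<f:metadata>
    <!-- Bind a GET parameter, then run a check before the page is rendered. -->
    <f:viewParam name="orderId" value="#{orderBean.orderId}"/>
    <f:viewAction action="#{orderBean.checkPermissions}"
                  phase="APPLY_REQUEST_VALUES"
                  onPostback="false"/>
</f:metadata>
```

By default a view action runs during the Invoke Application phase on initial requests only; the phase attribute overrides the lifecycle phase, and onPostback="true" would opt in to postback execution (false, shown here for clarity, is the default).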

Read the complete article here.

Monday Oct 17, 2011

Greg Bollella and Eric Jensen on the Future of Cyber-Physical Systems with Embedded Java and Berkeley DB

At JavaOne 2011, Greg Bollella, Chief Architect for Embedded Java and Eric Jensen, Oracle Principal Product Manager and a former embedded developer, gave a session (25143) titled “Telemetry and Synchronization with Embedded Java and Berkeley DB”. Bollella has been a leader in the Embedded Java and real-time Java space since Java was first applied there.

The presentation offered a vision of the potential future of Cyber-Physical Systems (CPS), defined as “a system featuring a tight combination of, and coordination between, the system’s computational and physical elements.” The vision was powerful enough that, even if expectations turn out to be exaggerated, CPS-driven technological change seems likely, in a decade or so, to alter our lives in pervasive and unforeseeable ways. Bollella went so far as to say that CPS applications have the potential to dwarf the 20th Century IT Revolution.

He drew a contrast between where CPS applications are in use today and where they will be in use tomorrow.

Today: High confidence medical devices and systems; assisted living; process control (metal smelting, chemical plants, refineries); traffic control and safety; advanced automotive systems; energy conservation; environmental control (electric power, water resources, and communications systems); distributed robotics (telepresence, telemedicine); defense systems; manufacturing; smart structures; home automation; building automation; transportation (rail, air, water, road); retail systems (point of sale and monitoring); entertainment industry; mining; industrial control (power generation).

Tomorrow: Distributed micro-power generation; highly advanced autonomous driver assistance features; networked autonomous automobiles; networked building automation systems; cognitive radio (distributed consensus about bandwidth availability); large-scale RFID-based servicing systems which could acquire the nature of distributed real-time control systems; autonomous air traffic control; advanced industrial and home networked robotics; intelligent traffic control systems; intelligent autonomous power (gas/electricity) distribution systems; networked personal medical monitoring devices.

A lot to take in – the technology all around us growing in intelligence! In 2009, 3.9 billion embedded processors were shipped – a number expected to double to roughly 8 billion by 2015. Some predict that by 2025 the number will be well into the trillions. And an estimated five times more embedded software is written today than all other software. If reality is anywhere close to the projections and estimates, we are in for an interesting ride on some intelligent transport.


Bollella went on to discuss telemetry, a term frequently used by NASA and defined as technology that “allows remote measurement and reporting of information”. Central to telemetry is the idea that the information does not persist on the device after measurement. Uses of telemetry in the automotive realm include streaming operational data from the vehicle to the manufacturer’s IT system for analysis, services for the vehicle operator, failure prediction, and feedback to design teams on wear and failure rates. For industrial automation, telemetry is used for failure prediction and for process monitoring and reporting.


Bollella explained that his use of the term synchronization is specific to database technology: two synchronized databases contain the same set of data and relationships, and any change in one database appears (after some indeterminate delay) in the other. Unlike telemetry, the information on the device persists there for as long as it does on the backend.

The use cases for synchronization are widespread and include:

•    Healthcare: Telemedicine, Home health systems, Mobile health practitioners
•    Industrial: Manufacturing, Mining
•    Energy: Smart Grid, Energy Management
•    Entertainment: TVs, set top boxes, automotive rear-seat entertainment
•    Distribution/Shipping: Everything from local deliveries to transoceanic cargo shipments
•    Government: Border Control, Resource Management, Customs, Immigration, Land Management, Forest Service, etc
•    Law Enforcement/Military: Police officers and soldiers in the field, also aboard Naval vessels
•    Retail: Real time inventory linked to point-of-sale transactions

Bollella acknowledged that serious development challenges remain. The current state of CPS connectivity is poor, with the vast majority of systems standing alone. Given the highly connected world of social networking, mobile devices, and the web, this might be surprising. But it is important to consider that these two technological areas have evolved in environments with different demands: CPS is focused on real-time behavior, predictability, safety, security, and fault tolerance; the Web is a different matter.

CPS requires real-time behavior with predictable control loops, yet devices often lack standard communication protocols and Ethernet or “IP-over” functionality. Harsh environments, especially aboard spacecraft, can interfere with wired Ethernet, and data formats and communication protocols are often incompatible with IT standards.

Perhaps most important, there has been little perceived need for CPS device connectivity. But this is changing rapidly, and obstacles are being overcome as connectivity becomes one of the major trends in embedded development. Bollella admitted that there are many unknowns going forward, but the challenges are not insurmountable.

Oracle’s Eric Jensen took over and gave some details about Oracle Berkeley DB and Oracle Database Mobile Server, which he characterized as the best way to synchronize mobile or embedded applications that use SQLite or Berkeley DB with an Oracle backend. The embedded Java platform, when coupled with Berkeley DB and Database Mobile Server, can manage networks of embedded devices using existing enterprise frameworks in a way that could prove to be quite revolutionary.

It will be interesting to look back in 10 years and see how much Cyber-Physical Systems have, or have not, changed the world.

Thursday Aug 18, 2011

Templating with JSF 2.0 Facelets

A new article on otn/java, “Templating with JSF 2.0 Facelets,” by Deepak Vohra, offers a concise explanation of how to use Facelets, which, in JavaServer Faces (JSF) 2.0, has replaced JavaServer Pages (JSP) as the default view declaration language (VDL). With Facelets, developers no longer need to configure a view handler as they once did in JSF 1.2.

From the article itself:

“Facelets is a templating framework similar to Tiles. The advantage of Facelets over Tiles is that JSF UIComponents are pre-integrated with Facelets, and Facelets does not require a Facelets configuration file, unlike Tiles, which requires a Tiles configuration file.

JSF Validators and Converters may be added to Facelets. Facelets provides a complete expression language (EL) and JavaServer Pages Standard Tag Library (JSTL) support. Templating, re-use, and ease of development are some of the advantages of using Facelets in a Web application.

In this article, we develop a Facelets Web application in Oracle Enterprise Pack for Eclipse 11g and deploy the application to Oracle WebLogic Server 11g. In the Facelets application, an input text UIComponent will be added to an input Facelets page. With JSF navigation, the input Facelets page is navigated to another Facelets page, which displays the JSF data table generated from the SQL query specified in the input Facelets page. We will use Oracle Database 11g Express Edition for the data source. Templating is demonstrated by including graphics for the header and the footer in the input and the output; the graphics have to be specified only once in the template.”
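The templating mechanism the article demonstrates looks roughly like the sketch below; the file, bean, and image names are hypothetical, not taken from the article:

```xml
<!-- template.xhtml: the template declares the shared header and footer once. -->
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:ui="http://java.sun.com/jsf/facelets"
      xmlns:h="http://java.sun.com/jsf/html">
<h:body>
    <h:graphicImage url="/images/header.gif"/>
    <ui:insert name="content">Default content</ui:insert>
    <h:graphicImage url="/images/footer.gif"/>
</h:body>
</html>

<!-- input.xhtml: a client page references the template and fills in
     only the "content" region; it inherits the graphics automatically. -->
<ui:composition xmlns="http://www.w3.org/1999/xhtml"
                xmlns:ui="http://java.sun.com/jsf/facelets"
                xmlns:h="http://java.sun.com/jsf/html"
                template="/template.xhtml">
    <ui:define name="content">
        <h:form>
            <h:inputText value="#{catalog.sqlQuery}"/>
        </h:form>
    </ui:define>
</ui:composition>
```

Because every client page supplies only its ui:define regions, a change to the header or footer is made in one place, the template.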

Read the complete article here.

