
Topics and trends related to the Java ecosystem with occasional random rants.

Recent Posts

Changing Certificate Aliases Containing Non-ASCII Characters

As digital certificates are wont to do, mine was set to expire.  The content for this article stems from a recent renewal experience...

As it ought to be, acquiring a code-signing certificate from a CA (Certificate Authority) is a non-trivial exercise.  For individuals like myself, part of the process includes proving my identity.  This entails producing multiple forms of government-sanctioned ID in front of a notary.  The signed and notarized document is then sent to the CA, where they go through additional steps to verify both my identity and the notary's.  Only after the process is complete will a certificate be generated and released.

For the past few years, I've used Comodo, one of many reputable CAs, for my certificate needs.  To use a certificate effectively, you'll need not only the code-signing certificate and private key, but also the certificate chain and certificate alias.  We leave cursory pointers to most of these components for your edification.  The intent of this article is to focus on the certificate alias.

A certificate alias provides a means to identify certificates in a human-readable fashion.  With respect to the Java ecosystem, it enables developers to take advantage of certificate-related utilities like keytool(1) and jarsigner(1), which in many instances require furnishing the alias name as part of their command-line invocation.  At issue here was the fact that Comodo's recent renewal made a slight change to the alias, which had widespread ramifications.

As it arrived from Comodo, my original code-signing certificate had a Comodo-generated alias associated with it that looked like this:

jim connors's comodo ca limited id

Not the most elegant of names, but it works just fine.  And besides, you could change it (e.g. with keytool) if you'd like.  Fast forward to the summer of 2018: the most recent renewal included what appeared to be an insignificant modification to the original alias.
It now reads:

jim connors?s comodo ca limited id

The culprit here is the ? character.  It's not actually a question mark, but rather an alternative 16-bit character representation of the apostrophe (') character.  Because this is a non-ASCII character, the Windows CMD.EXE shell displays it as this question mark thingy.  In reality, the character in question has a hex value of 0x2019.  This latest alias renders it difficult, if not impossible, to use within a Windows CMD shell.

So why not just change the alias then?  Great question.  As it turns out, keytool has an option to do just that.  Unfortunately, you need to specify the existing alias on keytool's command line in order to do so, and as it's difficult to represent this non-ASCII string in a CMD shell, what can you do?  It's quite possible that non-Windows environments (Linux, macOS ...) may be more amenable to non-ASCII characters, but that fix would be way too easy.

My solution: create a dopey application that can aid in modifying certificate aliases containing these weird characters.  The Java project can be found here: https://github.com/jtconnors/ChangeCertAlias

The project includes a sample keystore (my.jks) with a certificate.  Furthermore, the certificate is assigned an alias with a non-ASCII character (jim?s self-signed cert).  In the sample run that follows, the alias in question is selected and modified, eliminating the non-ASCII character from the original alias:

>java -jar ChangeCertAlias.jar
Keystore file: my.jks
Enter keystore password: changeit
Aliases in my.jks
        1: jim?s self-signed cert
Select an alias number: 1
Enter the new certificate alias: jims self-signed cert
Name of new keystore file: new.jks
Enter key password: changeit
New keystore: new.jks created with updated alias: "jims self-signed cert"

The default keystore and key passwords are "changeit" for this example.  Using this program on a real keystore will require that you know those passwords, which should almost certainly be different.
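The rename itself needs nothing beyond the standard KeyStore API.  The sketch below is not the ChangeCertAlias source; it is a minimal illustration of the same idea, using an in-memory JCEKS store with a dummy secret key standing in for my.jks and its certificate entry:

```java
import java.security.Key;
import java.security.KeyStore;
import javax.crypto.spec.SecretKeySpec;

// Minimal sketch of a keystore alias rename via the KeyStore API.
// The in-memory JCEKS store and dummy AES key are stand-ins for a
// real keystore such as my.jks.
public class RenameAlias {
    public static void main(String[] args) throws Exception {
        char[] pw = "changeit".toCharArray();
        String oldAlias = "jim\u2019s self-signed cert";  // note the 0x2019 apostrophe
        String newAlias = "jims self-signed cert";

        KeyStore ks = KeyStore.getInstance("JCEKS");
        ks.load(null, null);                              // fresh, in-memory keystore
        ks.setKeyEntry(oldAlias, new SecretKeySpec(new byte[16], "AES"), pw, null);

        // The rename: copy the entry under the new alias, drop the old one.
        Key key = ks.getKey(oldAlias, pw);
        ks.setKeyEntry(newAlias, key, pw, null);
        ks.deleteEntry(oldAlias);

        System.out.println("contains old alias: " + ks.containsAlias(oldAlias));
        System.out.println("contains new alias: " + ks.containsAlias(newAlias));
    }
}
```

Because the troublesome alias never has to be typed on a command line, the shell's inability to represent 0x2019 stops being a problem.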


The Changing Landscape of Laptop Solid State Storage

In the hopes that those encountering the challenge of adding additional SSD storage to modern laptops may benefit from this learning experience...

Among the more popular standard-issue corporate laptops, the Dell Latitude E-Series product line has proven to contain a good mix of the qualities many look for: a reasonably small footprint, powerful CPU options, full HD graphics and room to expand both RAM and disk.  Of note, I've used a Dell E7440 laptop for three years and liked it enough that once the corporate lease was up, I looked to buy a used one for personal consumption.  As the E7440 is a bit long in the tooth now, I instead opted for the slightly newer, faster version -- the E7450.  At the time of this publishing, these can be had on eBay for about $350 US.

What at first seemed to be a simple upgrade from the previous generation turned out to have one rather important difference: there is no supported secondary storage option on the Dell E7450.  As the image above demonstrates, the Dell E7440 on the left contains an expansion slot suitable for mSATA Solid State Drives, whereas the connector on the right for the Dell E7450 instead includes a more generic (and more modern) mini PCIe slot.  Cards for the two, although somewhat similar at first glance, are not the same, do not have the same connectors and are incompatible.  Here's what an mSATA card looks like next to a 42mm mini PCIe card:

The Solution (for the impatient)

Not all mini PCIe cards are alike (more on this later).  If you really don't feel like reading the rest of this article, one card has finally made it to market that does work in the Dell E7450 mini PCIe slot.  That card is the Toshiba RC100 M.2 2242 240GB PCIe SSD, pictured directly below.

Note: The SSD may not automagically appear once installed in, say, a Windows 10 environment.  It may need to be partitioned and formatted first.  And of course, whether it's officially supported by Dell is another question entirely.
More on Mini PCIe

It's arguable Dell was ahead of the game with respect to SSD connectivity for the E7450.  The PCIe standard, and in particular the M.2 connector, is rapidly becoming more and more common, slowly but surely rendering mSATA obsolete.  In addition to being available in many form factors, M.2 SSDs come with slightly different edge connectors or keys (which ultimately determine read/write speed) and can follow the SATA or NVMe (Non-Volatile Memory express) protocols.  Let's examine these briefly now.

M.2 Form Factors

M.2 cards come in many different widths (12, 16, 22 and 30 mm) and lengths (16, 26, 30, 38, 42, 60, 80 and 110 mm).  Some of these combinations occur far more regularly than others.  The most common M.2 SSDs use 22 mm wide cards with lengths of 30, 42, 60, 80 or 110 mm.  A large majority of the M.2 SSDs currently available prefer the longer form factors over the shorter ones, especially when considering larger capacity drives.  Oftentimes the M.2 card will specify the dimensions as part of the model number.  For example, the working module for the Dell E7450 (mentioned above) is the Toshiba RC100 M.2 2242 240GB PCIe SSD.  The highlighted 2242 number indicates this card is 22 mm wide and 42 mm long.  This form factor is not nearly as common as the longer types, and capacity is currently limited to no more than 256GB.  No doubt as densities improve, this will change.  The bottom line here is that the Dell E7450 PCIe slot is made for 22 mm wide, 42 mm long M.2 cards.  The market for these types of SSDs is quite limited today.

M.2 Protocol for SSDs

M.2 SSDs are available supporting either the SATA or the PCIe NVMe protocol.  It is important to understand which of these protocols your laptop motherboard supports.  Unfortunately, I was unaware of these differences, and in my ignorance initially purchased a Transcend 256GB SATA III MTS400 42 mm SSD.
In the limited world of 42 mm M.2 SSDs, the SATA variety is by far and away the most prevalent, and in fact until recently it was the only option.  Unfortunately, the Dell E7450 does not support SATA on its PCIe interface.  The picture that follows includes both M.2 SSDs.  They look much the same, but the Transcend device on the left won't work; the Toshiba device on the right does.

M.2 Connectors

We won't get into much detail here; suffice it to say the M.2 edge connector can have different keying and notch configurations signifying, among other things, interfaces and transfer speeds.  The cards in the image above are of the "B & M key" edge connector variety.

Conclusion

Here's the Dell Latitude E7450 with the Toshiba SSD installed.  And here's how it appears in the Windows 10 Device Manager.


Build JDK 10 for your Raspberry Pi Right on your Device

Starting with the release of JDK 9, Oracle's list of supported hardware architecture / operating system platforms for its Java SE implementation has been trimmed.  No longer are 32-bit versions being provided, nor are binaries for the Arm architecture, including those for the wildly popular Raspberry Pi.  However, work supporting the 32-bit armhf architecture is incorporated in the OpenJDK source, including that for OpenJDK 10.  So for all the Raspberry Pi / Java aficionados out there, you effectively have two options if you want the latest Java release for your device:

1. Wait for someone to build and provide an installable package for your desired update.
2. Build it yourself.

We'll discuss option (2) today and, with the resources available on the latest models of the Raspberry Pi, show that it is feasible to build Java from source right on your device.  For the impatient (which definitely includes this author), here are the instructions to get you going.

The aforementioned howto is divided into 7 relatively straightforward steps.  Here are some of the more important considerations:

- Do not use any Raspberry Pi hardware with less than 1GB RAM.  OpenJDK build machines have a bare minimum build environment, and the less capable Raspberry Pi devices will be inadequate.  The example build system here uses the Raspberry Pi 3+.  As of this article's posting, it represents the latest and most capable Raspberry Pi to date.
- You'll need to configure at least 1GB of additional swap space.  This is explained in the instructions.
- You should have at least 10GB of free disk space to accommodate the source code and resulting built binaries.
- As many Raspberry Pi versions are equipped with multiple cores, don't be tempted to take advantage of those cores by issuing the make command with a JOBS number greater than 1.  Increasing the JOBS count requires more RAM, a precious resource the Raspberry Pi doesn't have enough of, which will ultimately cause the build to fail.
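At a very high level, the build boils down to a handful of commands.  The sketch below is illustrative only -- the repository location, swapfile path and sizes are assumptions; the linked instructions are the authoritative source:

```shell
# Illustrative sketch only; consult the linked instructions for the
# authoritative steps.  Run on the Raspberry Pi itself.

# 1. Add 1GB of additional swap (the stock Raspbian swapfile is too small)
sudo dd if=/dev/zero of=/var/swap2 bs=1M count=1024
sudo mkswap /var/swap2
sudo swapon /var/swap2

# 2. Fetch the OpenJDK 10 source (hosted in Mercurial at the time of writing)
hg clone http://hg.openjdk.java.net/jdk-updates/jdk10u
cd jdk10u

# 3. Configure and build -- note JOBS=1, per the caveat above
bash configure
make JOBS=1 images
```

The resulting JDK image lands under the build output directory once make completes.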
How long does it take to build?

Depending upon the quality of the SD card you use, your mileage will vary.  In this instance, a 32GB SanDisk Extreme U3 SD card was used.  The total time required for the make command to complete was approximately 216 minutes, or a little more than 3 ½ hours.

And what about JavaFX?

As of Java 10's GA release, JavaFX is no longer part of the JDK distribution.  To add this platform, you can either download a build of JavaFX that has been completed by the community, or build it yourself by following these instructions.  Unfortunately, at this time you cannot build OpenJFX natively on the Raspberry Pi device.  You'll need to cross-compile the source from a supported development platform, for example a Linux x64 system.

Postscript

Since the creation of this article's original build instructions, JDK 10.0.1 has been released.  You can build this version of Java in a very similar manner, but a few small changes need to be made, including locating and pulling down the appropriate JDK 10.0.1 update source code.  The instructions contain a Postscript section which should aid in this endeavor.


OpenJDK 10 Now Includes Root CA Certificates

With the release of OpenJDK 10 on 20 March 2018, Oracle and the Java community have made good on their commitment to furnish Java releases every six months.  The JDK 11 project is well underway, and the proposed schedule calls for its release on 25 September 2018, six months after the GA (General Availability) of OpenJDK 10.  Alongside this significant change in release cadence, Oracle has pledged to make its commercial implementation of OpenJDK (Java SE, or the Oracle JDK) as indistinguishable as possible from OpenJDK.  This will take some time, but those efforts have commenced and are beginning to bear fruit.

One of the enhancements to JDK 10 includes, for the first time, a set of root CA (Certificate Authority) certificates incorporated into the OpenJDK source.  As specified by Java Enhancement Proposal 319 (JEP 319), providing root CA certificates makes "OpenJDK builds more attractive to developers" and "reduces [sic] the differences between those builds and Oracle JDK builds".

Root certificates are stored, by default, in a keystore file called cacerts.  Prior to JDK 10, the source code contained an empty cacerts file, disabling the ability to establish trust and effectively rendering many important security protocols unusable.  To work around this shortcoming, developers had to roll their own cacerts keystore by manually populating it with a set of root certificates.

Let's examine OpenJDK 10 on a Windows desktop:

>jdk-10\bin\java --version
openjdk 10 2018-03-20
OpenJDK Runtime Environment 18.3 (build 10+46)
OpenJDK 64-Bit Server VM 18.3 (build 10+46, mixed mode)

The following command utilizes the JDK keytool utility to query the cacerts keystore and count the number of certificates:

>jdk-10\bin\keytool -cacerts -list | find "Certificate" /c
Enter keystore password:  changeit
80

By default, the cacerts keystore password is changeit.  The 80 included certificates match the number specified in JEP 319.
The Certificate Authorities in question were required to sign an agreement granting Oracle the right to open-source their certificates. Expect to see more Oracle value-add finding its way into the OpenJDK source as time marches on.      
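The same check can be made programmatically.  The sketch below opens the running JDK's bundled cacerts keystore via the KeyStore API; the lib/security/cacerts path is the conventional location relative to java.home, and the exact certificate count will of course vary by JDK release:

```java
import java.io.FileInputStream;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.KeyStore;

// Programmatic equivalent of the keytool invocation above: open the
// JDK's bundled cacerts keystore and count the trusted certificates.
public class CountRootCerts {
    public static void main(String[] args) throws Exception {
        // Conventional location relative to java.home
        Path cacerts = Paths.get(System.getProperty("java.home"),
                                 "lib", "security", "cacerts");
        KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
        try (FileInputStream in = new FileInputStream(cacerts.toFile())) {
            ks.load(in, "changeit".toCharArray());  // default cacerts password
        }
        System.out.println("certificates: " + ks.size());
        System.out.println("non-empty: " + (ks.size() > 0));
    }
}
```

On an OpenJDK 10 build, the count printed should match the 80 certificates reported by keytool above.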


Bitcoin Mining: Six Months Later

About six months have passed since first setting up a personal Bitcoin mining rig.  As a follow-up to that original post, I thought it might make sense to return to see what's transpired during this brief -- and tumultuous -- period, and to discuss adjustments made to the rig.

What's Changed in the Bitcoin Universe?
My Humble Bitcoin Mining Rig (version 2.0)
Configuration Changes
Return on Investment? Really?

What's Changed in the Bitcoin Universe?

In a word, a lot (actually that's two words, but who's counting).  Here are just a few trends:

- Bitcoin price volatility shows no signs of slowing down even as its market cap grows dramatically.  After the price of Bitcoin had nearly quintupled from September to December 2017, it has since lost half its value in the subsequent two months.
- Mining is taking place on an even more massive scale.  Take a look at this, and this.
- In Russia, scientists were arrested for using top secret nuclear supercomputer facilities to mine cryptocurrencies.
- The worldwide Bitcoin hash rate has tripled in the last six months, meaning the competition for mining Bitcoins is getting more and more intense.  Here's a nice website with lots of charts visualizing the growth.
- My mining pool, slushpool, has increased its Bitcoin hash rate capacity tenfold, meaning that without any increase in hash rate capacity on my end, my take of slushpool's winnings would be 1/10th what it originally was.
- China is escalating its crackdown on cryptocurrency trading.  As it and other countries consider regulating this industry, further uncertainties surrounding Bitcoin are sure to arise.

My Humble Bitcoin Mining Rig (version 2.0)

Here's what the new version of the rig looks like.  The original rig can be viewed here.  Following is a description of the modifications.

Raspberry Pi 3: Even with the new and improved hash rate this rig provides, the Raspberry Pi 3 still has more than enough horsepower to serve as the overall controller.
The big change to the controller involves the addition of the Xtronix Power over Ethernet (PoE) Adapter for the Raspberry Pi.  With PoE, it's now possible to remotely reset the power on this rig.  In the Configuration Changes section of this article, software modifications are described which make this platform much more reliable.  Nonetheless, the ability to remotely hard reset the rig does come in handy occasionally.

GekkoScience 2-Pac (BM1384x2) USB Bitcoin Miners: The number of USB ASIC miners has been doubled from 4 to 8.  Running at 225MHz, this rig now averages about 200 Gh/s (200 billion hashes per second).  The power consumption stands at about 100 Watts, roughly equivalent to an old school incandescent light bulb.

HooToo 60W 7-port powered USB hubs: To accommodate the additional USB ASIC miners, two USB hubs have been added.  All four USB ports on the Raspberry Pi 3 are now occupied.

Arctic Breeze USB fans: An extra USB fan was added to assure all miners are cooled adequately.

Configuration Changes

After running without incident for weeks, the Raspberry Pi began occasionally locking up, rendering it unusable until a hard reset was performed.  A search yielded this thread, where it appears others were experiencing similar problems.  Related to kernel memory corruption, a two-fold solution was employed to minimize the downtime associated with this tricky problem:

1. Enable SLUB debugging in the Raspbian kernel.  Although this requires the Linux kernel to consume more CPU cycles, a large number of these intermittent memory allocation errors will be caught.  In order to configure and enable SLUB debugging, as root add the following to the /boot/cmdline.txt file:

slub_debug=FPUZ

and reboot.

2. Configure a watchdog daemon.  Many processors come equipped with a watchdog timer, which simply counts down from some set value to zero.  In order to prevent it from reaching zero, the system must periodically reset the watchdog timer.
If the timer reaches zero, it is assumed the system has hung.  The watchdog daemon detects this condition and can be configured to reboot when it occurs, lessening the need for manual intervention.  This procedure is typically hardware, operating system and version dependent.  One way to enable the watchdog reset feature for a Raspberry Pi 3 running Raspbian Jessie is to follow these instructions.

Return on Investment?  Really?

Even though the price of Bitcoin has risen to about 2½ times its September 2017 value and my mining rig has doubled in capacity, those substantial gains cannot offset the tremendous increase in worldwide mining capacity.  Taking this point-in-time analysis with a grain of salt, the average current daily yield in Bitcoins for my rig (as of 26-Feb-2018) is .00001677.  Multiplied by the current price ($10,158 US), that results in a daily reward of $0.17 US.  That's two cents a day more than the original setup, but with double the power requirements (100 Watts vs. 50 Watts), not to mention the capital costs of the additional equipment.

So to repurpose the punchline from an old joke: "we're losing money on every transaction and making it up in volume".
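The back-of-the-envelope math above is easy to check:

```java
import java.util.Locale;

// Sanity check of the daily-reward arithmetic quoted above.
public class MiningYield {
    public static void main(String[] args) {
        double btcPerDay = 0.00001677;   // rig's average daily yield (26-Feb-2018)
        double btcPriceUsd = 10158.0;    // Bitcoin price at the time
        double usdPerDay = btcPerDay * btcPriceUsd;
        System.out.printf(Locale.US, "daily reward: $%.2f US%n", usdPerDay);
    }
}
```

Seventeen cents a day, against roughly 100 Watts of continuous draw.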


Help for Signing Deployment Rule Sets

Among other benefits, the Java SE Advanced offering provides customers with access to security patches for Java releases that are no longer publicly updated.  As a result, many of these organizations have become diligent -- deservedly so -- in keeping up to date with Oracle's quarterly cadence.  If you are one of those customers who falls into this category, you may have noticed that the most recent October 2017 updates for Java 6 (6u171) and Java 7 (7u161) no longer include a Java Plugin.

Does that mean future Java 6 and 7 updates won't be able to run browser-based applications?  The answer is no: these releases can still run Java web content, but they must be launched with the latest Java 8 update configured with Deployment Rule Sets.  Briefly, Deployment Rule Sets enable you to control the version of the JRE that is used for specific applications.  In this scenario, the latest (most secure) Java 8 update is launched when a user clicks on a link to start a web application.  The Java 8 plugin will consult the Deployment Rule Set, which contains a set of rules, to determine what to do next.  If a rule exists directing your application to run a specific version of Java, it will do so.  If no rule exists, the rule set can be configured to block the application, thus assuring that only those applications you trust can run.

The purpose of this article is not to introduce you to Deployment Rule Sets; there are other excellent resources, including this entry entitled Introducing Deployment Rule Sets.  Rather, the discussion today focuses on a critical step in creating rule sets, namely the requirement that the rule set be signed.  The aforementioned article was written in 2013, when Deployment Rule Sets were first introduced.  Java web application security has been further ratcheted up since, and the rule set signing section in that article only glosses over the steps required.
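For context, a minimal ruleset.xml looks something like the following.  The intranet URL is a placeholder and the version strings are illustrative; the authoritative schema is in the Deployment Rule Set documentation:

```xml
<ruleset version="1.0+">
  <!-- Run trusted in-house applications on a secure Java 8 update -->
  <rule>
    <id location="https://intranet.example.com" />
    <action permission="run" version="SECURE-1.8" />
  </rule>
  <!-- Block everything else -->
  <rule>
    <id />
    <action permission="block">
      <message>Blocked by corporate Deployment Rule Set.</message>
    </action>
  </rule>
</ruleset>
```

Rules are evaluated top to bottom, and the empty <id /> in the final rule acts as a catch-all.  Before the Java plugin will honor any of this, the file must be packaged into a DeploymentRuleSet.jar and signed -- which is precisely the step discussed next.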
To help facilitate the signing of Deployment Rule Sets, the following GitHub project has been created: https://github.com/jtconnors/sign_drs

Along with documentation and a sample ruleset, it includes a Windows PowerShell script which automates the process.  You can check out the project's README for further info.


JDK9 keytool Transitions Default Keystore to PKCS12

When it comes to the JDK9 release, project jigsaw garners nearly all the attention, sucking the air out of the room and leaving very little oxygen for many other smaller but interesting enhancements.  One such feature addresses the universal quest to modernize overall security and involves an improvement to the keytool utility.

For approximately two decades, Java and keytool had relied on the JDK-specific JKS keystore type as the default store.  As specified by JEP 229, JDK9 transitions the default keystore type to PKCS12.  This change means that any new keystores will be created in the PKCS12 format.  It should, however, not affect existing applications that rely upon the original JKS keystore type.  Backwards compatibility will be maintained, allowing existing applications to continue operating unmodified for the foreseeable future.

PKCS12 has a number of advantages:

1. It is more extensible.
2. It supports stronger cryptographic algorithms.
3. It is widely adopted.  PKCS12 is frequently the format provided by certificate authorities when issuing certificates.

With respect to point (3) above, as mentioned in this previous article, keytool has historically been unable to directly import PKCS12-generated trusted keys and certificates, and instead must rely on external workarounds like the following:

- Use openssl to create a keystore containing the certificate chain and private key, then use keytool to import this keystore into either a new or larger keystore.
- Platforms like Oracle WebLogic contain a utils.ImportPrivateKey class (with a main method), included in weblogic.jar, which can accomplish this task.

Unfortunately, this shortcoming still exists in JDK9.  However, a request for enhancement has recently been created and can be found here: keytool should be able to import private keys: https://bugs.openjdk.java.net/browse/JDK-8189321.  Perhaps enough folks can weigh in and vote, increasing its priority.
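From application code, the PKCS12 format is used the same way as JKS, via the KeyStore API.  A small sketch, using an in-memory store with a dummy secret key purely for illustration, round-trips a PKCS12 keystore through its serialized form:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.security.KeyStore;
import javax.crypto.spec.SecretKeySpec;

// Create a PKCS12 keystore in memory, serialize it, reload it, and
// confirm the entry survives.  The dummy AES key is illustrative only.
public class Pkcs12Demo {
    public static void main(String[] args) throws Exception {
        char[] pw = "changeit".toCharArray();
        KeyStore ks = KeyStore.getInstance("PKCS12");
        ks.load(null, null);                       // fresh, empty keystore
        ks.setKeyEntry("demo-key", new SecretKeySpec(new byte[16], "AES"), pw, null);

        ByteArrayOutputStream out = new ByteArrayOutputStream();
        ks.store(out, pw);                         // serialize in PKCS12 format

        KeyStore reloaded = KeyStore.getInstance("PKCS12");
        reloaded.load(new ByteArrayInputStream(out.toByteArray()), pw);
        System.out.println("type: " + reloaded.getType());
        System.out.println("has demo-key: " + reloaded.containsAlias("demo-key"));
    }
}
```

On JDK9, simply omitting the explicit "PKCS12" and calling KeyStore.getInstance(KeyStore.getDefaultType()) yields the same format, which is the point of JEP 229.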


And Now For Something Completely Different...

To steal a phrase from Monty Python's Flying Circus, this article represents a departure from the standard fare.  Today, I'd like to discuss my foray into the Bitcoin world, and in particular, how contributing (in an infinitesimally small but meaningful way) strengthens the decentralized Bitcoin network.  In short, I have become a Bitcoin miner.  Before going any further, let me be perfectly clear: there is no gold in them thar' hills.  Unless you plan on mining on an industrial scale where electricity is plentiful and, more importantly, cheap, you will likely achieve a negative return on your investment.  But it's still fun to do.

Overview
My Humble Bitcoin Mining Rig
Creating and Configuring a Bitcoin Mining Rig
Return on Investment?

Overview

The revolutionary idea behind Bitcoin is the fact that its transactions are verified in a decentralized fashion, free from manipulation by governments and banks.  The more systems on the network that participate in the verification, the less likely it is that any one entity can take a majority stake and threaten the ecosystem.  To encourage participation, verifiers, otherwise known as miners, are periodically rewarded with Bitcoins.  Bitcoin mining draws parallels to precious metal mining in that (1) the resource is scarce and finite (only 21 million Bitcoins will ever be issued) and (2) mining is labor intensive.

Miners compete to win the Bitcoins that are periodically released, by both verifying transactions and solving complicated mathematical hashing calculations.  That description is obviously grossly oversimplified; you can begin to better understand the concept of mining here.  The organizations with the most computing power have the best odds of winning the Bitcoin lottery.  You can, of course, attempt to go it alone and compete with the rest of the world in trying to win Bitcoins, but the odds of doing so are very, very slim.
Instead, nowadays it probably makes more sense to join one of a number of large mining pools that share resources and consequently share in the overall pool's success.  In a pool, you receive a payout commensurate with your percentage contribution to that pool.

My Humble Bitcoin Mining Rig

The image that follows, along with the subsequent description of the component parts, depicts the rig.

Raspberry Pi 3: The venerable Raspberry Pi serves as the overall controller for the Bitcoin mining rig and uses the cgminer software to manage our mining devices.  The Pi has more than adequate compute power for this project and consumes a minimal amount of electricity.

GekkoScience 2-Pac (BM1384x2) USB Bitcoin miners: There was a time when hashing calculations were performed on CPUs; however, it quickly became apparent that specialized hardware would perform these tasks far more efficiently.  The first leap was to utilize a system's GPU (which is still an ideal way of mining other digital currencies).  Nowadays for Bitcoin, any mining hardware worth using is based upon specialized ASICs (Application Specific Integrated Circuits) that are far more efficient than GPUs.  The GekkoScience 2-Pac (BM1384x2) USB Bitcoin miners are state-of-the-art (September 2017) when it comes to the engineering trade-off between performance and power efficiency.  These devices can be run at different frequencies; the default (100MHz) yields a device which can perform about 15 billion hashes a second (usually written as 15 Gh/s).  By upping the clock frequency, you increase the hash rate and, consequently, the power consumption.  These 4 devices are running at 225 MHz and, combined, execute approximately 100 Gh/s.  The power consumption for this rig is approximately 50 Watts.

HooToo 60W 7-port powered USB hubs: One of the challenges with setting up this rig was finding USB hubs that could power the 2-Pac USB mining devices, as each easily consumes 10 Watts or more.
Powered USB hubs are an absolute necessity, and most powered hubs can only support the power requirements of one of these devices.  The HooToo hub, rated at 60W, can support two devices at 225MHz (plus a USB fan).  I was hoping to add at least one more miner to each USB hub, but that appears to be a bit unstable.  Not being an electrical engineer, I don't understand why.

Arctic Breeze USB fan: The USB mining devices run hot, especially when overclocked.  A fan is essential to keep things cool and extend the life of the mining devices.

Creating and Configuring a Bitcoin Mining Rig

Once the requisite hardware is gathered, there are a few general steps required to get up and running:

1. Configure the Raspberry Pi as a mining controller by downloading and building the cgminer software
2. Choose a Bitcoin wallet and set up your payout address
3. Choose a mining pool

This text file describes the steps needed to set up the rig described in this article.  In terms of choosing a Bitcoin wallet and mining pool, there are a multitude of choices.  Electrum and Slush Pool, respectively, were chosen for the rig you see here.

Return on Investment?

Is mining a money-making enterprise?  At this small scale, absolutely not.  Ignoring the non-trivial upfront capital cost of the hardware required to run this rig, you're likely to spend more on electricity than you could accrue in Bitcoin.  The price of Bitcoin is so volatile, and the amount of hashing power contributing to the network changes so much (generally increasing over time), that any metrics provided here would only be relevant for one point in time.  But what the heck.  At the current time, based upon my contribution to the Slush Pool mining pool, my rig is rewarded approximately .000035 Bitcoins a day.  As of the writing of this article, the current price of Bitcoin is about $4300 US.  Doing the arithmetic yields a reward of about $0.15 US a day.  It costs more in electricity each day (especially in New York) to power a 50 Watt rig.
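The "complicated mathematical hashing calculations" mentioned above boil down to computing double SHA-256 hashes over candidate block headers as fast as possible.  A minimal sketch of that core operation (hashing an empty input purely for illustration; real miners hash an 80-byte block header while varying a nonce):

```java
import java.security.MessageDigest;

// The core operation a Bitcoin miner performs billions of times per
// second: SHA-256 applied twice.  Hashing an empty input here keeps
// the example self-contained.
public class DoubleSha256 {
    static byte[] doubleSha256(byte[] data) throws Exception {
        MessageDigest sha = MessageDigest.getInstance("SHA-256");
        return sha.digest(sha.digest(data));
    }

    static String hex(byte[] bytes) {
        StringBuilder sb = new StringBuilder();
        for (byte b : bytes) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(hex(doubleSha256(new byte[0])));
    }
}
```

A miner "wins" when the resulting hash, interpreted as a number, falls below the network's current difficulty target -- which is why raw hash rate (Gh/s) is the figure of merit for the ASICs described above.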
 


Mimicking Java Flight Recorder Triggers Outside Java Mission Control

As highlighted in this previous article, Java Flight Recorder triggers enable you to selectively dump detailed runtime information about your Java application when user-defined conditions are met.  In order to take advantage of this powerful feature, you must create and enable trigger rules inside the Java Mission Control client.  For one or a very small number of applications, using Java Mission Control might be acceptable, however if you need to manage a large number of applications, the notion of keeping a Mission Control client open for each application instance might not be very appealing or realistic. Unfortunately, use of Flight Recorder triggers currently only works within the Mission Control client.  So the question becomes, is it possible to mimic trigger-like behavior outside of Mission Control?  This article aims to show how you can, with a simple JMX client program and some scripting.  The following README.txt file is part of the included framework and provides a reasonable description of its individual components.  It also instructs the user as to how to run the demonstration.  The framework is bundled as a NetNeans project and can be downloaded here. To cut to the chase, here's a simple example of how the framework might be used.  For this example, we'll use a sample Java application called Latencies. This program has a deliberate flaw in that as the user increases the number of threads, the latency dramatically increases too.  What we'd like to do is mimic a trigger which will result in a dump of the Java Flight Recorder information when our thread count crosses a certain threshold.  Here are the steps: 1. Unzip the jmxclient.zip file. For this example, we unzip the jmxclient.zip file in the d:\tmp directory, resulting in the creation of the d:\tmp\JMXClient directory. 2. Start NetBeans and open up the JMXClient project that was just unzipped in the d:\tmp\JMXClient directory 3. Within NetBeans "Clean and Build" the JMXClient project. 4. 
The previous step creates a D:\tmp\JMXClient\dist directory.  Change to the dist/ directory and run the Latencies.bat script from a CMD shell.  Let's take a look at the command-line options associated with running the Latencies program:

-Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.port=9999 -Dcom.sun.management.jmxremote.ssl=false - these enable an agent to remotely monitor, via JMX, the Latencies program at port 9999.

-XX:+UnlockCommercialFeatures -XX:+FlightRecorder - these two options must be added to enable the Flight Recorder capability.

-XX:FlightRecorderOptions=defaultrecording=true - this puts the Latencies program in Flight Recorder continuous recording mode; that is to say, telemetry data will continuously be recorded and stored in a circular buffer.  At any point in time, through a variety of means, the Flight Recorder data can be dumped to a file.

5. With the Latencies program running, open up another CMD shell and start the JMXClientThreadCount.bat script.  Again, let's examine the command-line arguments to this program:

D:\Oracle\jdk8\bin\java.exe -cp JMXClient.jar com.example.jmxclient.JMXClientThreadCount - com.example.jmxclient.JMXClientThreadCount is the Java main class found in JMXClient.jar.

-debug - logs debug information to the console.  Among the information displayed is the URL (hostname and port number) it uses to connect to the JMX server (the Latencies program); additionally, it logs the value of the ThreadCount MBean attribute each time the program polls that information.

-interval:2000 - instructs the program to poll the value of the ThreadCount attribute every 2 seconds (2000 milliseconds).

-threshold:20 - instructs the program to terminate when the ThreadCount value exceeds 20.

6. With both programs running, return to the Latencies window and hit <Enter> 4 or 5 times to add additional threads to the program.

7. Now take a look at the JMXClientThreadCount.bat window.
The added threads should have caused the thread count to exceed the specified threshold value of 20, resulting in a "trigger" that dumps the Flight Recorder contents to a file called Latencies.jfr.  With that file, you can now examine the inner workings of the Latencies program with the Java Mission Control utility.  Voila!  For further information, please look at the aforementioned JMXClient NetBeans project and README.txt file.
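The poll-and-compare logic at the heart of such a JMX client can be sketched in a few lines of standard JMX code.  The class below is a hypothetical, simplified illustration (it is not the actual project source): it reads the ThreadCount attribute of the java.lang:type=Threading platform MBean and reports when a threshold is crossed.  For brevity it polls its own JVM via the platform MBeanServer; a real client would instead open a JMXConnector to the remote application's service URL.

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

public class ThreadCountWatcher {

    // The "trigger" condition: has the observed value crossed the threshold?
    static boolean exceeds(long observed, long threshold) {
        return observed > threshold;
    }

    public static void main(String[] args) throws Exception {
        long threshold = 20;     // mimics the -threshold:20 argument
        long intervalMs = 200;   // the article uses 2000; shortened here

        // For a remote app, build a JMXConnector from a URL like
        // service:jmx:rmi:///jndi/rmi://host:9999/jmxrmi instead.
        MBeanServerConnection conn = ManagementFactory.getPlatformMBeanServer();
        ObjectName threading = new ObjectName("java.lang:type=Threading");

        for (int i = 0; i < 3; i++) {
            long count = ((Number) conn.getAttribute(threading, "ThreadCount")).longValue();
            System.out.println("ThreadCount = " + count);
            if (exceeds(count, threshold)) {
                // A real client would now dump the flight recording
                // (for example by invoking jcmd/JFR from a script) and exit.
                System.out.println("Threshold exceeded -- dump flight recording");
                break;
            }
            Thread.sleep(intervalMs);
        }
    }
}
```

The script wrapped around such a client then only needs to react to the program's termination to perform the actual Flight Recorder dump.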


Using Java Flight Recorder Triggers

A good amount has been written and said about Java Flight Recorder, its integration into the Oracle Java SE Java Virtual Machine (JVM), and the very low overhead associated with enabling the framework.  It not only makes the notion of collecting detailed runtime information about a Java application in production a possibility, it makes it a reality.  Many opt to place a program in Java Flight Recorder's continuous recording mode.  In this state, the Java application will collect runtime data indefinitely, where you can specify (or default to) how much data you want to retain before overwriting.  Once in this mode, you can at any time, through many different means, dump the runtime information into a self-contained flight recorder file.  From there, the Java Mission Control tool can be used to open this file to further diagnose your running application's behavior.

Rather than randomly or periodically taking Flight Recorder dumps, it is also possible to do so conditionally.  That is to say, you can, through the use of triggers, create rules that will cause a flight recorder dump to take place when a defined condition is met.  It's hopefully not difficult to envision that in this scenario triggers could help you focus on detailed runtime information when things start to go bad, as opposed to looking at a much larger data set.  The video that follows goes through a simple session, demonstrating how to trigger a Flight Recording dump (7 minutes into the video) and briefly uses Mission Control to diagnose the sample application's behavior based on reading the dumped flight recording.
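For reference, putting an application into continuous recording mode on the Oracle JDK builds discussed here looks like the following (MyApp is a placeholder for your own main class; exact jcmd arguments vary by JDK version):

```shell
# Commercial features must be unlocked before Flight Recorder can be enabled
java -XX:+UnlockCommercialFeatures -XX:+FlightRecorder \
     -XX:FlightRecorderOptions=defaultrecording=true \
     MyApp

# A dump of the default (continuous) recording can then be taken at any time,
# for example:
#   jcmd <pid> JFR.dump recording=0 filename=dump.jfr
```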


Updates to Java Serial Communications, Raspberry Pi 3

Why not kill a few birds with one stone?  First, it was high time to finally learn how to post an article with our new blogging platform (based on Oracle Content Marketing).  Second, and this is of course relative to my little world, it would be useful to provide an update to one of my more historically read topics dealing with serial communications and Java.  A recent comment to one of the Java Serial Port Communications posts mentions difficulty with using the provided RxTx shared object for the ARMv7l architecture, so I figured I'd rebuild it taking advantage of some new hardware.  The Raspberry Pi 3 represents the first 64-bit ARM offering from the Raspberry Pi Foundation, and can ultimately be utilized to build both 32 and 64-bit binaries.  Although currently most work on this device is done in 32-bit mode, there are some 64-bit environments available.  For the 32-bit ARMv7l environment we'll use the ubiquitous Raspbian Jessie distribution, and for our 64-bit AArch64 instance we'll use OpenSuSE LEAP 42.2.  To understand the history of this topic, you can take a look at the following past posts:

Serial Port Communications for Java SE Embedded
Java Serial Communications Revisited

RxTx code has not always been readily available, so to improve upon that, you can now go to https://github.com/jtconnors/rxtx-2.1-7r2 and pull down the source (including the modifications needed to build for Java versions 6 through 8).  Additionally, another repository has been created at https://github.com/jtconnors/rxtx-2.1-7r2-binaries to keep compiled binaries for the native shared objects associated with RxTx for popular embedded platforms.  If you create subsequent binaries for additional platforms, I'd be glad to add them.  Finally, while playing around with both 32 and 64-bit variants, I thought it might be useful to run the SPECjvm2008 benchmark to see how Java performance differs between the architectures.
One of the primary advantages of a 64-bit JVM is its ability to support much larger heap sizes.  The usefulness of a 64-bit JVM on a Raspberry Pi 3 is certainly debatable considering it only has 1GB of RAM; nonetheless, it would be interesting to see what overhead might be incurred using the 64-bit JVM.  Although results of individual components of the benchmark do vary, overall it looks like the 64-bit JVM does incur some (hopefully acceptable) overhead.



Automating The Creation of JDK9 Reduced Runtime Images in NetBeans

With the upcoming release of JDK9, the Java SE platform will cease to exist in monolithic fashion and will instead be built from the ground up as a series of modules.  This sea change will allow developers to modularize their own applications and furthermore enable them to create runtime images with only those modules that are required to run their application.  Much has been, and certainly will be, written about this massive evolution.  Today we'll focus on the ability to create custom runtime images and how their creation might be automated by an Integrated Development Environment like NetBeans.  First off, you can download an early access build of JDK9 and a corresponding early access JDK9 NetBeans build to get started.

For this recipe, we'll create a very simple JDK9 application called SimpleApp which just logs a message using the Java Logging API.  All Java 9 programs require the use of a module called java.base; the rationale behind choosing to invoke a logging method in this program is that it will require the module system to pull in an additional module called java.logging.  Here's what the NetBeans project and source code look like: When we run this program in NetBeans the output window shows this: To modularize this program, the first thing we'll need to identify is its module dependencies.  We can accomplish this task by taking advantage of a JDK9 utility called jdeps as follows: first, we invoke jdeps -version to confirm that we're indeed using a JDK9 version of the tool.  Next, we invoke jdeps -s on the NetBeans-generated SimpleApp.jar file to get its module dependencies.  In this instance, our program requires two modules.  As briefly mentioned earlier, all Java 9 applications by default require the java.base module.  Additionally, our simple program calls a method in the java.logging module and hence has a dependency on it too.
With this information, we can introduce a module-info.java file into our NetBeans project by right-clicking on the SimpleApp project and selecting New->Java Module Info...  Once created, the module-info.java file will appear in the SimpleApp's default package.  The module specification will initially be empty; add the two requires clauses and one exports clause as depicted below.  With the module-info.java file properly situated and populated, the next time SimpleApp is built, NetBeans will add the compiled module-info.class file to the contents of the SimpleApp.jar file, making it a modular jar.

It may still be a little early in the game: what appears to be missing in NetBeans at this point is the ability to construct runtime images using the JDK9 jlink utility.  So let's do some customization to NetBeans to provide this capability.  There are no doubt more elegant solutions; this one will enable you to output the actual java commands that run, as they run, to aid in debugging.  The first step is to locate and edit SimpleApp's NetBeans project.properties file.  You can find this file in NetBeans by clicking on the Files tab, which gives you a filesystem view of your project.  Underneath the nbproject/ directory you'll find the project.properties file.
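Based on the jdeps output, the populated module-info.java for SimpleApp would look roughly like this.  This is a sketch: the module name follows the article's project name, and the exported package name is an assumption, since the original screenshot is not reproduced here.

```java
// module-info.java -- lives at the root of SimpleApp's source tree
module SimpleApp {
    requires java.base;     // implicit for every module, may be listed explicitly
    requires java.logging;  // needed for the Java Logging API call
    exports simpleapp;      // package name assumed for this illustration
}
```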
Double clicking on project.properties will open that file for editing.  Add the following text to the end of the project.properties file:

#
# Added to support creation of modular jar and jlink image.
# Change this property to match your JDK9 location
#
jdk9.basedir=C:\\Users\\jtconnor\\jdk-9
#
modular.jar.command=${jdk9.basedir}\\bin\\jar.exe
modular.jar.file=${dist.dir}\\${application.title}.jar
#
modular.jar.arg1=--verbose
modular.jar.arg2=--update
modular.jar.arg3=--file
modular.jar.arg4=${modular.jar.file}
modular.jar.arg5=--main-class
modular.jar.arg6=${main.class}
modular.jar.arg7=--module-version
modular.jar.arg8=1.0
modular.jar.arg9=-C
modular.jar.arg10=${build.dir}\\classes
modular.jar.arg11=module-info.class
modular.jar.args.concatenated=${modular.jar.arg1} ${modular.jar.arg2} ${modular.jar.arg3} ${modular.jar.arg4} ${modular.jar.arg5} ${modular.jar.arg6} ${modular.jar.arg7} ${modular.jar.arg8} ${modular.jar.arg9} ${modular.jar.arg10} ${modular.jar.arg11}
#
jlink.command=${jdk9.basedir}\\bin\\jlink.exe
jlink.module.dependency1=${modular.jar.file}
jlink.module.dependency2=${jdk9.basedir}\\jmods
jlink.module.path=${jlink.module.dependency1};${jlink.module.dependency2}
jlink.image.dir=${dist.dir}\\jimage
#
jlink.arg1=--module-path
jlink.arg2=${jlink.module.path}
jlink.arg3=--add-modules
jlink.arg4=${application.title}
jlink.arg5=--output
jlink.arg6=${jlink.image.dir}
jlink.arg7=--compress=2
jlink.args.concatenated=${jlink.arg1} ${jlink.arg2} ${jlink.arg3} ${jlink.arg4} ${jlink.arg5} ${jlink.arg6} ${jlink.arg7}

It's not as bad as it looks.  Basically, most of the work here involves setting up the command-line arguments for running the JDK9 jar and jlink utilities.  And with this setup, you should be able to more easily debug the modular jar and runtime image creation should it go awry.
The one important change that you will need to make is the setting of the jdk9.basedir property:

#
# Added to support creation of modular jar and jlink image.
# Change this property to match your JDK9 location
#
jdk9.basedir=C:\\Users\\jtconnor\\jdk-9  <-- Change this to point to your JDK9 location!

With that step completed, the final modification goes to the SimpleApp's build.xml file.  Just as before, this file can be found in the Files tab, under the SimpleApp/ directory.  Double click on build.xml to edit the file.  Once the file is open, move to the end of the file.  We are going to insert additional ant tasks right before the </project> delimiter.  Here's the text that should be placed there:

<target name="-post-jar" depends="-do-modular-jar,-do-jlink">
</target>

<target name="-do-modular-jar">
    <echo message="Updating ${dist.jar} to be a modular jar."/>
    <echo message="Executing: ${modular.jar.command} ${modular.jar.args.concatenated}"/>
    <exec executable="${modular.jar.command}">
        <arg value="${modular.jar.arg1}"/>
        <arg value="${modular.jar.arg2}"/>
        <arg value="${modular.jar.arg3}"/>
        <arg value="${modular.jar.arg4}"/>
        <arg value="${modular.jar.arg5}"/>
        <arg value="${modular.jar.arg6}"/>
        <arg value="${modular.jar.arg7}"/>
        <arg value="${modular.jar.arg8}"/>
        <arg value="${modular.jar.arg9}"/>
        <arg value="${modular.jar.arg10}"/>
        <arg value="${modular.jar.arg11}"/>
    </exec>
</target>

<target name="-do-jlink">
    <echo message="Creating jlink image in ${jlink.image.dir}/."/>
    <echo message="Executing: ${jlink.command} ${jlink.args.concatenated}"/>
    <exec executable="${jlink.command}">
        <arg value="${jlink.arg1}"/>
        <arg value="${jlink.arg2}"/>
        <arg value="${jlink.arg3}"/>
        <arg value="${jlink.arg4}"/>
        <arg value="${jlink.arg5}"/>
        <arg value="${jlink.arg6}"/>
        <arg value="${jlink.arg7}"/>
    </exec>
</target>

The SimpleApp project is now modified.  Issuing a NetBeans "Clean and Build" on the SimpleApp project will now run the additional ant tasks that were incorporated into the project's build.xml file.  Here's what the NetBeans output window should show: Finally, let's take a look at what was built, and how the custom runtime image can be run.  The screenshot that follows examines the SimpleApp's dist/ directory built by NetBeans.  It shows that there are two entries: SimpleApp.jar (the modular jar file) and a jimage/ directory (the custom runtime image).  It goes on to run java -version from the jimage/ directory to show that it's a JDK9 runtime, and then issues a java --list-modules to show what modules are part of this runtime.  Notice only three appear versus what would normally be somewhere in the neighborhood of 90 for a full-fledged JDK9 runtime.  And to top it off, the last command shows how the SimpleApp application is run from the jimage/ runtime.  The example shown here is specific to the Windows platform.  If you wish to duplicate this effort on a Linux or Mac desktop, you'll need to modify the project.properties file accordingly.  Here's hoping this sheds some light on some of the new Java 9 features.



Java.net and Kenai.com Forges Closing, Moving to GitHub

Aside from all that eloquent prose (wink, wink), my entries have, through the years, referenced a fair amount of source code and provided downloads to some of those examples.  Having wanted to move these to a modern source code repository for a while now, an approaching event has forced the issue: namely, Oracle has announced the closing of the Java.net and Kenai.com forges on April 28, 2017.  Accordingly, I've revisited a few blog entries and rehosted the referenced source code projects on GitHub under the https://github.com/jtconnors URL.  Here is some more information:

JavaFX Scoreboard

Entry: A Raspberry Pi / JavaFX Electronic Scoreboard Application
Entry: Source Code for JavaFX Scoreboard Now Available

(You can click on the Scoreboard image to start the application.)  To date, nearly 100 readers have asked for access to this source code.  In the past, requestors had to provide a Java.net username in order to subscribe to the Java.net project, entitling them download rights.  You can now access the source code (without any inconvenience) for version 1.1 of the JavaFX Scoreboard on GitHub at the following location: https://github.com/jtconnors/Scoreboard-v1.1.  An overview of the application can be found inside the GitHub repository here.

Various Musings on Java and Sockets

Entry: Update to JavaFX, Sockets and Threading: Lessons Learned
Entry: JavaFX, Sockets and Threading: Lessons Learned
Entry: Adding a Timestamp to a Signed Java RIA

In addition to the two original projects (you can click on the images above to start the SocketServerFX and SocketClientFX applications respectively), two more have been added to GitHub to round out this discussion.  The four NetBeans projects are listed as follows:

SocketServerFX (image above on the left): a simple JavaFX 2.x based UI application representing the server end of a socket connection.
Used in conjunction with the SocketClientFX application, these two show how connections are established and data is passed between socket connections.  Source code can be accessed at https://github.com/jtconnors/SocketServerFX

SocketClientFX (image above on the right): a simple JavaFX 2.x based UI application representing the client end of a socket connection.  Used in conjunction with the SocketServerFX application, it shows how connections are established and data is passed between socket connections.  Source code can be accessed at https://github.com/jtconnors/SocketClientFX

MultiSocketServerFX (new): a JavaFX UI program serving up multiple client socket connections.  Can be used in conjunction with one or more SocketClientFX clients.  Source code can be accessed at https://github.com/jtconnors/MultiSocketServerFX

com.jtconnors.socket (new): Java socket utility classes, utilized by the client and server programs, packaged up for re-use.  Source code can be accessed at https://github.com/jtconnors/com.jtconnors.socket



Adding a Timestamp to a Signed Java RIA

As the title suggests, the focus for this article revolves around adding timestamps to signed Java Rich Internet Applications.  The related subtopics are worth mentioning up front in case the reader is interested in jumping right to one of those areas:

Example Signed (and Timestamped) RIAs
What is Timestamping and Why Should I Care?
How Can Code Be Signed and Timestamped?
How Can You Verify That a Jar File Has Been Signed and Timestamped?
How Can You Integrate Signing and Timestamping into a NetBeans Project?

Example Signed (and Timestamped) RIAs

If your interest lies solely in getting access to a signed and timestamped Java web application, here are two that can be run by clicking on the images below.  The SocketServerFX and SocketClientFX applications, when run simultaneously and connected, demonstrate how simple text can be sent and received over sockets.  For those experimenting with Deployment Rule Sets, these two web applications could serve as test examples for use in managing RIA access.

What is Timestamping and Why Should I Care?

Applications signed with a trusted certificate come with an expiration date.  At expiration, the code signer has to re-issue the software package with an updated certificate in order to maintain a valid trusted signature.  There are a whole host of reasons why re-signing may be impractical; the question becomes: is it possible to validate trusted signatures even after they have expired, thus prolonging their lifetime?  The answer is yes, by including a timestamp verified by a Timestamp Authority.  With the timestamp, you're essentially proving that your code signing certificate was still valid at the time of signing.

How Can Code Be Signed and Timestamped?

The jarsigner utility, found in the Java Development Kit, is the mechanism used for signing Java applications.  A -tsa argument can be included on the command line to specify a Timestamp Authority.
A sample invocation from a Windows system might look something like this:

> jarsigner -keystore code-sign.jks -tsa http://timestamp.comodoca.com \
    SocketServerFX.jar "jim connors's comodo ca limited id"
Enter Passphrase for keystore:
jar signed.

As the code signing certificate referenced above comes from Comodo, one of many trusted certificate authorities, we use their Timestamp Authority to authorize the signature.

How Can You Verify That a Jar File Has Been Signed and Timestamped?

Perhaps not the most elegant solution: you can utilize additional command-line arguments provided by the jarsigner utility (-verify -verbose -certs) and search for a timestamp that is formatted in a specific way, as demonstrated by the following sample invocation:

> jarsigner -verify -verbose -certs SocketServerFX.jar | findstr signed
      [entry was signed on 3/1/16 8:48 AM]
      [entry was signed on 3/1/16 8:48 AM]
      . . .
      [entry was signed on 3/1/16 8:48 AM]

If you see text of the form "[entry was signed on ...]", then you know the jar file has been signed and timestamped.  If the jar is not timestamped, no such output will appear.

How Can You Integrate Signing and Timestamping into a NetBeans Project?

Within the NetBeans IDE, if you'd like to sign and timestamp your application automatically as part of your build process, you can do so by making a few modifications to your NetBeans project.

1. Add the following properties to your project's project.properties file:

# Properties for custom signjar
jnlp.signjar.alias=<your certificate alias>
jnlp.signjar.keystore=<keystore file containing certificate private key>
jnlp.signjar.storepass=<keystore passphrase>
jnlp.signjar.keypass=<private key passphrase>
jnlp.signing.tsaurl=<URL for TimeStamp Authority>

2. Add the following target to the project's build.xml file.  This should be placed at the bottom of the file but before the </project> directive.
<!-- Custom Code Timestamping using Ant's signjar instead of NetBeans -->
<target name="sign-jars" depends="-check-signing-possible">
    <echo message="Using custom code for signing and timestamping via build.xml..."/>
    <signjar
            alias="${jnlp.signjar.alias}"
            storepass="${jnlp.signjar.storepass}"
            keystore="${jnlp.signjar.keystore}"
            keypass="${jnlp.signjar.keypass}"
            tsaurl="${jnlp.signing.tsaurl}">
        <path>
            <fileset dir="dist" includes="*.jar" />
        </path>
    </signjar>
</target>

By running the sign-jars ant target, your project's jar file will be signed and timestamped.
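If you prefer a programmatic sanity check over parsing jarsigner output, the rough sketch below (a hypothetical helper, not part of this article's projects) detects whether a jar contains signature block files at all.  Note this only shows the jar has been signed; verifying the timestamp itself still requires jarsigner -verify as shown above.

```java
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Enumeration;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;
import java.util.jar.JarOutputStream;

public class JarSignCheck {

    // A signed jar carries signature block files under META-INF/
    // (.RSA, .DSA or .EC, depending on the signing key's algorithm).
    static boolean isSigned(Path jarPath) throws IOException {
        try (JarFile jar = new JarFile(jarPath.toFile())) {
            Enumeration<JarEntry> entries = jar.entries();
            while (entries.hasMoreElements()) {
                String name = entries.nextElement().getName().toUpperCase();
                if (name.startsWith("META-INF/")
                        && (name.endsWith(".RSA") || name.endsWith(".DSA")
                            || name.endsWith(".EC"))) {
                    return true;
                }
            }
        }
        return false;
    }

    public static void main(String[] args) throws IOException {
        // Demo: build a throwaway jar containing a fake signature block entry
        Path tmp = Files.createTempFile("demo", ".jar");
        try (JarOutputStream out = new JarOutputStream(new FileOutputStream(tmp.toFile()))) {
            out.putNextEntry(new JarEntry("META-INF/SIGNER.RSA"));
            out.closeEntry();
        }
        System.out.println("signed? " + isSigned(tmp));  // prints: signed? true
        Files.delete(tmp);
    }
}
```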



The $5 Raspberry Pi

Less than a year after the introduction of the Raspberry Pi 2, the Raspberry Pi Foundation has once again outdone itself with yet another landmark product launch: the Raspberry Pi Zero.

Image taken from https://www.raspberrypi.org/blog/raspberry-pi-zero/

The Raspberry Pi Zero is truly revolutionary.  At 40% the size of the original Pi, it consumes less than half the power, and can be configured to draw as little as 80mA.  It has a faster processor than the original (Java performance graph below) and maintains binary compatibility with previous generations.  And with a list price of $5US, the dream of offering serious computing power to nearly anyone on the planet comes ever closer to reality.  Here are some of the high-level features:

1 GHz ARM11 Broadcom BCM2835 processor
512MB RAM
Micro-SD card slot
Multiple Linux distributions available plus support for Windows 10
Mini-HDMI socket for 1080p video
Micro-USB sockets for data and power
GPIO header (unpopulated).  Pinout the same as Model A+/B+/2B
Composite video header (unpopulated)

At the time of this article's creation (Jan 2016), availability is limited, as the first shipment sold out quite rapidly (no surprise there).  I was able to procure my Pi Zero on the black market; that is to say, I bought it on eBay at a substantial uplift.  In terms of Java performance, the chart that follows compares SPECjvm2008 benchmark results for a Raspberry Pi and a Raspberry Pi Zero utilizing the latest available Oracle JDK (1.8.0_71).  The larger the number, the better; the Pi Zero outperforms the original by 45%.  Another feel-good aspect to this story is that the Pi Zero will be manufactured in Wales.  Here's hoping that in the near future supplies will meet the pent-up demand.



Installing Trusted Certificates into a Java Keystore

As software environments continue to ratchet up security measures, the odds of having to deal with digital certificates in more than a superficial manner only increase over time.  Furthermore, platforms are not only mandating the use of certificates, they are to a greater extent shunning the self-signed variety and instead insisting upon certs that originate from a trusted authority.  When managing certificates in the Java world, the utility you're most likely to encounter is keytool, an integral part of the Java Development Kit.  Although keytool can be effectively used to generate self-signed private/public key pairs, it's a little lacking when it comes to incorporating certs generated by trusted authorities.  [Based upon feedback and recommendations received after the original posting of this article, a Postscript section has been appended which discusses alternative solutions to the original discussed below.]  In particular, we'll focus on overcoming the following two shortcomings:

1. An existing private key and certificate generated by a trusted Certificate Authority (CA) cannot be imported by keytool, at least not in the format traditionally provided by CAs.

2. Not only must the unique private key be imported into the keystore, in some instances the root CA certificate and any intermediate certificates (referred to as a certificate chain) must be included, and more importantly in the correct order.  The keytool utility doesn't help much in the way of ensuring a valid order.

The following example uses a real SSL certificate chain from a real certificate authority and was done in the context of the Java Advanced Management Console 2.1 application.  Java AMC is a Java EE application and requires Oracle's WebLogic application server to function.  In this instance we'll be updating a keystore associated with WebLogic, but in reality this Java keystore should be no different from any other Java keystore, so these steps should apply elsewhere just fine.
We'll leave the minutiae of applying for a trusted certificate as an exercise for the reader; one of the first steps towards getting a certificate involves creating and submitting a CSR (Certificate Signing Request) to a Certificate Authority.  A byproduct of this process is that a private key file is generated based on the information in the CSR.  This file, when opened, looks something like this:

-----BEGIN PRIVATE KEY-----
encryptedgobbledygook line 1
...
encryptedgobbledygook line n
-----END PRIVATE KEY-----

This private key file is in PEM (Privacy Enhanced Mail) format and will take a .pem suffix.  For our example, the file is stored as private-key.pem.

For this article, we used Comodo (one of many alternatives) as our Certificate Authority.  When the entire process with Comodo was complete, the following documents were ultimately received from them:

amc-server_jtconnors_com.crt - the Comodo-generated host-specific SSL certificate for a machine with a Fully Qualified Domain Name (FQDN) of amc-server.jtconnors.com
AddTrustExternalCARoot.crt - Comodo root certificate
COMODORSAAddTrustCA.crt - Comodo intermediate certificate 1
COMODORSADomainValidationSecureServerCA.crt - Comodo intermediate certificate 2

With these files in place we can now begin the importing process.  We will take advantage of enhancements added to keytool with the Java 6 release: namely, that keytool can merge and import keystores that are in PKCS12 format.  What remains is to figure out how to convert our private key and certificate chain into a PKCS12 file.  For this functionality we resort to the capabilities found in the ubiquitous OpenSSL toolkit, available on virtually all popular compute platforms.  In the example that follows, we'll be running on a Windows system and will utilize the Cygwin environment to run the required OpenSSL commands.

Step 1. (From Cygwin) Concatenate the certificates comprising the CA-supplied root certificate chain into one file.
Include only the root certificate and intermediate certificate(s) and exclude the host-specific SSL certificate.

$ cat AddTrustExternalCARoot.crt COMODORSAAddTrustCA.crt \
    COMODORSADomainValidationSecureServerCA.crt > BUNDLE.crt

Step 2. (From Cygwin) Create a PKCS12 keystore.  This command incorporates both the certificate chain along with the SSL private key and certificates.  Your passwords may vary.

$ openssl pkcs12 -export -chain -in amc-server_jtconnors_com.crt \
    -inkey private-key.pem -out keystore.p12 -name amc-server -CAfile BUNDLE.crt
Enter Export Password: changeit
Verifying - Enter Export Password: changeit

Step 3. (From Windows CMD) Using keytool, import the PKCS12 keystore into the resulting JKS keystore called keystore.jks.  Again, you may select different passwords.

> "c:\Program Files\Java\jdk1.8.0_66\bin\keytool.exe" -importkeystore -destkeystore keystore.jks -srckeystore keystore.p12 -alias amc-server
Enter destination keystore password: changeit
Re-enter new password: changeit
Enter source keystore password: changeit

Step 4. With the keystore successfully created, we can now take the optional step of further verifying it.  Certain Java-based application frameworks, like Oracle's WebLogic for example, can be finicky about the completeness and order of the certificate chain.  To help validate the keystore, we can use the ValidateCertChain program, which comes bundled with WebLogic 12c and can be found in the distribution's weblogic.jar file.
Here's a sample invocation of the program on our recently created keystore.jks file:

> java -cp %MW_HOME%\wlserver\server\lib\weblogic.jar utils.ValidateCertChain -jks amc-server keystore.jks
Cert[0]: CN=amc-server.jtconnors.com,OU=PositiveSSL,OU=Domain Control Validated
Cert[1]: CN=COMODO RSA Domain Validation Secure Server CA,O=COMODO CA Limited,L=Salford,ST=Greater Manchester,C=GB
Cert[2]: CN=COMODO RSA Certification Authority,O=COMODO CA Limited,L=Salford,ST=Greater Manchester,C=GB
Cert[3]: CN=AddTrust External CA Root,OU=AddTrust External TTP Network,O=AddTrust AB,C=SE
Certificate chain appears valid

Here's hoping these examples save you some time and, more importantly, a little grief.

Postscript

After having received recommendations from engineers far better versed in AMC and WebLogic, the following section was added to briefly discuss alternatives to the OpenSSL solution discussed above.

Alternative A: Those who have complete control of the certificate-generating process can, as previously hinted, use keytool exclusively.  For the first phase of the process, use keytool in the following general manner (arguments to keytool will vary) to generate a keypair and certificate request:

keytool -keystore keystore.jks -genkeypair -keyalg rsa
keytool -keystore keystore.jks -certreq

In this scenario, OpenSSL would not be required since the keypair is already stored in the keystore.  From here you can import the certificates following a form similar to this:

keytool -import -keystore keystore.jks -alias root -file AddTrustExternalCARoot.crt
keytool -import -keystore keystore.jks -alias intermediate1 -file COMODORSAAddTrustCA.crt
keytool -import -keystore keystore.jks -alias intermediate2 -file COMODORSADomainValidationSecureServerCA.crt
keytool -import -keystore keystore.jks -alias mykey -file amc-server_jtconnors_com.crt

In the last command, "-alias mykey" is essential and must match the key pair in the keystore.
As a shortcut, you could also concatenate all the PEM-encoded certificates into one big file and then call:

keytool -import -keystore keystore.jks -alias mykey -file thebigfile

Alternative B: Along with the ValidateCertChain program, WebLogic 12c's weblogic.jar file also includes a utility called ImportPrivateKey, which can likewise be used to import a certificate chain into a Java keystore.  This utility is described in the "Creating a Keystore Using ImportPrivateKey" subsection of the WebLogic documentation on Configuring Keystores.  The command follows a form like this:

java -cp %WL_HOME%\server\lib\weblogic.jar utils.ImportPrivateKey -keystore newkeystore -storepass **keystorepassword** -alias amctrust -certfile certificate.pem -keyfile privatekey.pem [-keyfilepass **privatekeypassword**]

For further edification, please consult the WebLogic docs.
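For completeness, the same keystore concepts can also be exercised programmatically through the java.security.KeyStore API.  The sketch below is illustrative only: it builds an in-memory PKCS12 store, and a generated AES key stands in for the private key and certificate chain that keytool or OpenSSL would have imported under the alias.

```java
import java.security.KeyStore;
import java.util.Collections;
import java.util.List;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class KeyStoreAliasDemo {

    // Build an in-memory PKCS12 keystore holding a single entry under
    // the given alias, then return the store's alias list
    static List<String> storeAndListAliases(String alias) {
        try {
            char[] password = "changeit".toCharArray();

            // load(null, ...) initializes a new, empty keystore
            KeyStore ks = KeyStore.getInstance("PKCS12");
            ks.load(null, password);

            // A generated AES key stands in for the private-key-plus-
            // certificate-chain entry that keytool/openssl would import
            SecretKey key = KeyGenerator.getInstance("AES").generateKey();
            ks.setEntry(alias, new KeyStore.SecretKeyEntry(key),
                    new KeyStore.PasswordProtection(password));

            return Collections.list(ks.aliases());
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        // Lists the single alias, much as 'keytool -list' would
        storeAndListAliases("amc-server").forEach(System.out::println);
    }
}
```

Loading the keystore.jks file produced above (with KeyStore.getInstance("JKS") and a FileInputStream) and walking its aliases follows the same pattern.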


Using Java Flight Recorder with Compact Profiles

Like its big brother Java SE, the Java SE-Embedded 8 platform provides support for Java Flight Recorder, an invaluable profiling and event collection framework built into the Java Runtime Environment.  However, Flight Recorder is only available with the Java SE-Embedded 8 Full JRE, not with the smaller Compact Profiles.  So the question becomes: is there anything that can be done to use Java Flight Recorder with Compact Profiles? At the current time, the smaller Compact1 and Compact2 profiles cannot realistically support Java Flight Recorder without substantial changes, hence we'll avoid discussing their prospects for inclusion here.  What we will focus on is Compact3, as its specification includes, among others, the javax.management APIs, making for a more reasonable match.  As it turns out, all that is required to enable Flight Recorder use with the Compact3 profile is to copy a few files over from the Full JRE.  The instructions that follow should aid in creating a Flight Recorder enabled instance:

1. Download a Java SE-Embedded EJRE for your platform.  You can do so here.  For this example, we'll do this on a Linux host and choose the ARMv6/v7 Hard Float option (suitable for the venerable Raspberry Pi).

2. Extract the EJRE.

$ tar xvf ejdk-8u51-linux-armv6-vfp-hflt.tar.gz

3. Use the EJRE's jrecreate.sh script to create a full JRE:

$ ./ejdk1.8.0_51/bin/jrecreate.sh --dest full_jre -g -k
Building JRE using Options {
    ejdk-home: /home/pi/ejdk1.8.0_51
    dest: /home/pi/full_jre
    target: linux_armv6_vfp_hflt
    vm: all
    runtime: jre
    debug: true
    keep-debug-info: true
    no-compression: false
    dry-run: false
    verbose: false
    extension: []
}
Target JRE Size is 59,389 KB (on disk usage may be greater).
Embedded JRE created successfully

4. Create a Compact3 JRE:

$ ./ejdk1.8.0_51/bin/jrecreate.sh --profile compact3 --dest compact3 -g -k
Building JRE using Options {
    ejdk-home: /home/pi/ejdk1.8.0_51
    dest: /home/pi/compact3
    target: linux_armv6_vfp_hflt
    vm: client
    runtime: compact3 profile
    debug: true
    keep-debug-info: true
    no-compression: false
    dry-run: false
    verbose: false
    extension: []
}
Target JRE Size is 24,336 KB (on disk usage may be greater).
Embedded JRE created successfully

5. Check the size of the compact3 JRE.  We'll use this later to see how much space was needed to add support for Flight Recorder.

$ du -sk compact3
24480   compact3

6. Create the compact3/lib/jfr/ directory:

$ mkdir compact3/lib/jfr

7. Copy the following files over from the full JRE to the compact3 instance with these commands:

$ cd full_jre/
$ cp ./lib/jfr.jar ../compact3/lib
$ cp ./lib/arm/libjfr.so ../compact3/lib/arm
$ cp ./lib/arm/libbci.so ../compact3/lib/arm
$ cp ./lib/jfr/default.jfc ../compact3/lib/jfr
$ cp ./lib/jfr/profile.jfc ../compact3/lib/jfr

8. That's all that is required.  You can compare the disk usage of the compact3/ directory before and after these modifications to get an idea of the additional space required to utilize Java Flight Recorder.

$ du -sk ../compact3
24824   ../compact3

Comparing against the disk usage from step (5), we see that less than 400KB is added in order to enable usage of Java Flight Recorder.

To better understand how you might remotely connect to a Flight Recorder enabled instance in a secure fashion, check out this article.


New for Java 9: jshell

A sampling of indices gauging computer programming language popularity (e.g. PYPL and TIOBE) shows that Java, after 20 years, still enjoys a huge following.  In general, depending upon whom and/or when you ask, Java usually comes in first or second place in these surveys.  Although it might be hard to imagine recent computer science graduates without Java exposure, one trend is evident: top universities are gravitating towards teaching "simpler" languages in lieu of Java for their introductory programming classes. The standard Java platform, compared to counterparts like Python, currently lacks a Read-Eval-Print Loop, otherwise known as a REPL.  Instead of having to construct and compile complete, syntactically correct programs before any feedback can be achieved, a REPL allows much more interactivity, enabling the student/programmer to enter small snippets of code and receive immediate feedback.  According to the Java Enhancement Proposal which outlines the new jshell functionality, "The number one reason schools cite for moving away from Java as a teaching language is that other languages have a 'REPL' and have far lower bars to an initial 'Hello, world!' program."  With the introduction of jshell in the upcoming Java 9 release, this shortcoming will be eliminated. Falling under the auspices of an OpenJDK project called Kulla, the code representing the REPL capability has yet to be incorporated into the core JDK 9 early access release.  As of the writing of this article (late July 2015), a separate build is required to get the jshell functionality.  In the (hopefully not-too-distant) future, an early access release will bundle all these features together, obviating the need for extra work.  In the interim, here's a brief video demonstrating some of its features. As we near the "feature complete" phase for JDK 9, we look forward to better integration and incorporation of yet more important new features into this upcoming release.
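To give a feel for the interactivity a REPL provides, here's what a short jshell session looks like (this transcript is illustrative; the prompts and messages follow the format of the jshell tool at the time of writing):

```
jshell> 2 + 2
$1 ==> 4

jshell> int square(int x) { return x * x; }
|  created method square(int)

jshell> square(7)
$2 ==> 49
```

Each snippet is evaluated immediately, with expression results bound to scratch variables ($1, $2, ...) that can be reused in later snippets.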


Update to JavaFX, Sockets and Threading: Lessons Learned

Recently, a reader commented on a dated article of mine, circa 2010, entitled JavaFX, Sockets and Threading: Lessons Learned.  In it he correctly stated that, as the original content is based on the deprecated JavaFX 1.3 framework (aka JavaFX Script), he cannot test or utilize that code with the modern JavaFX API.  What follows is an update to that blog entry with code and references appropriate for the JavaFX 8 (Java SE 8 with JavaFX) platform.

Overview

For a more thorough understanding of the method behind this madness, please consult the original article.  Briefly stated, socket programming, especially in Java, oftentimes lends itself to utilizing threads.  To facilitate using sockets, both from the "client" and "server" side, an abstract class called GenericSocket.java was and is provided and, barring a few minor changes, remains quite similar to the original version. Just as in the original JavaFX Script framework, the JavaFX UI is still not thread-safe and its scene graph must only be accessed through the JavaFX application thread.  What has changed between old and new is the class and method name required to perform work on the JavaFX application thread. Just like before, we've identified two methods associated with socket manipulations that need to perform work on the main thread.  These abstract method calls are incorporated into the GenericSocket.java source and are specified in the SocketListener.java interface file:

public interface SocketListener {
    public void onMessage(String line);
    public void onClosedStatus(boolean isClosed);
}

Within GenericSocket.java, you'll see references to these method calls as follows:

/*
 * The onClosedStatus() method has to be implemented by
 * a subclass.  If used in conjunction with JavaFX,
 * use Platform.runLater() to force this method to run
 * on the main thread.
 */
onClosedStatus(true);

and

/*
 * The onMessage() method has to be implemented by
 * a subclass.  If used in conjunction with JavaFX,
 * use Platform.runLater() to force this method to run
 * on the main thread.
 */
onMessage(line);

As implied by these comments, JavaFX-specific classes that extend the GenericSocket class and implement the SocketListener interface are required.  Correspondingly, two helper classes have been created: FxSocketClient.java and FxSocketServer.java.  These extend the GenericSocket class and implement the SocketListener interface from the perspective of a JavaFX environment.  Both classes override the onMessage() and onClosedStatus() methods, in each case enclosing the work within a call to the Platform.runLater() method, ensuring that it will be executed on the JavaFX application thread.  (For the Java 8 aficionados, you may notice that these two methods could be converted to lambda expressions.  That exercise is left to the reader.)  What remains now is for any referencing JavaFX class to implement the SocketListener interface.

To demonstrate the usage of the FxSocketClient and FxSocketServer classes within JavaFX 8, this article provides two NetBeans projects represented by the screenshots that follow.  By clicking on each image, you can start up the FxSocketClient and FxSocketServer applications via Java Web Start (assuming you have (1) the latest Java 8 installed and (2) a browser which allows Java applets to run.  The last browsers standing are Internet Explorer 11 and Safari).  The user interface for these programs was created with the assistance of a terrific tool called the JavaFX Scene Builder.  You can download the source code, including the UI (represented in FXML), here:

SocketClientFX.zip
SocketServerFX.zip

The javafx.concurrent Package

The intent of this article was to focus solely on the Platform.runLater() method; however, it is important to note that JavaFX 2.x and later also provide an additional means to create background tasks that safely interact with the JavaFX application thread.  Similar in capability to the venerable java.util.concurrent package (and in fact extending some of those classes), the javafx.concurrent package also furnishes the ability to safely control the execution and track the progress of application code running in background tasks. For an overview of their use, check out the following article published as part of the Oracle documentation: Concurrency in JavaFX.
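The marshal-to-the-UI-thread pattern described above can be sketched in plain Java without requiring the JavaFX runtime.  In this illustrative, self-contained example, a single-thread executor stands in for the JavaFX application thread; in the real FxSocketClient/FxSocketServer classes, the submitted lambdas are passed to Platform.runLater() instead.  Class and message strings here are invented for the demo.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class FxThreadPattern {

    // The interface from the article's SocketListener.java
    interface SocketListener {
        void onMessage(String line);
        void onClosedStatus(boolean isClosed);
    }

    // Stand-in for the JavaFX application thread.  In FxSocketClient
    // and FxSocketServer, Platform.runLater(...) plays the role that
    // uiThread.submit(...) plays here.
    private final ExecutorService uiThread = Executors.newSingleThreadExecutor();
    private final List<String> uiLog = new ArrayList<>();

    // A listener that marshals socket events onto the "UI thread"
    private final SocketListener listener = new SocketListener() {
        @Override public void onMessage(String line) {
            uiThread.submit(() -> uiLog.add("UI thread got: " + line));
        }
        @Override public void onClosedStatus(boolean isClosed) {
            uiThread.submit(() -> uiLog.add("UI thread closed=" + isClosed));
        }
    };

    public List<String> runDemo() {
        // Simulate callbacks arriving from a socket-reader thread
        listener.onMessage("hello");
        listener.onClosedStatus(true);
        uiThread.shutdown();
        try {
            uiThread.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return uiLog;
    }

    public static void main(String[] args) {
        new FxThreadPattern().runDemo().forEach(System.out::println);
    }
}
```

Because the executor is single-threaded, events are applied in submission order, which mirrors the serialized behavior of the JavaFX application thread.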


Welcome Raspberry Pi 2!

Having surpassed 4 million units shipped and clearly established itself as a de facto reference platform, the folks at the Raspberry Pi Foundation are not resting on their laurels.  Earlier this month (February 2015), they introduced the Raspberry Pi 2 Model B: Compared to its predecessor, the Raspberry Pi B+, this newer model is packaged with a Broadcom BCM2836 SoC at 900MHz.  It is not only superior in performance to the original single-core BCM2835 processor on a core-by-core basis, it also brings the added benefit of including 4 cores.  Add to that a doubling of RAM from 512MB to 1GB, and it should be no mystery that this new platform can handle more serious workloads.  Further down, a chart compares the SPECjvm2008 benchmark performance of the original Raspberry Pi Model B with the new Raspberry Pi 2 Model B. In addition to the substantial performance bump, major kudos are to be given to the Raspberry Pi Foundation engineers for their focus on hardware and software compatibility: The Pi 2 maintains the same physical form factor as the Raspberry Pi B+, meaning that all of the previous accessories (USB power cable, serial console cable, microSD card ...) work perfectly well with the new version.  That also includes any Raspberry Pi B+ enclosures you may have. The ARMv7-based BCM2836 processor is backwards-compatible with the original ARMv6 version.  In order to maintain binary compatibility, the latest version of the Raspbian distribution will boot either the new or old versions of the Raspberry Pi.  Note that in this mode, applications will not be able to take full advantage of the ARMv7 enhancements.  Alternate OS distributions may provide the extra performance boost at the expense of breaking backwards compatibility. Perhaps best of all, the Raspberry Pi maintains its appeal to the masses.  At approximately £23/$35, no price increase comes along with this upgrade.
A Brief Look at Performance

The chart that follows examines Raspberry Pi Java performance via the SPECjvm2008 benchmark.  Both the original and Raspberry Pi 2 systems ran the latest available Raspbian image (2015-02-02) and the latest available Java 7 JDK (Java 7 update 72) [1].  The four vertical bars (blue, red, green, purple) represent four separate runs of the benchmark:

Blue: Raspberry Pi Model B.  Invoked with JVM argument -Xmx400m, specifying a maximum heap of 400MB.

Red: Raspberry Pi 2, single threaded.  Invoked with JVM argument -Xmx400m, specifying a maximum heap of 400MB, and application argument -bt 1, instructing the SPECjvm2008 test harness to run the benchmark tests with a single thread.  This is meant to approximate an apples-to-apples comparison of single-core performance between the two Raspberry Pi systems.

Green: Raspberry Pi 2, multithreaded (default).  Invoked with JVM argument -Xmx600m, specifying a maximum heap of 600MB [2].

Purple: Raspberry Pi 2, multithreaded (default), larger heap.  Invoked with JVM argument -Xmx900m, specifying a maximum heap of 900MB [3].

The larger the measurement (ops/min), the faster the result.  The single-core performance of the Raspberry Pi 2 is a big improvement over its predecessor.  As the SPECjvm2008 suite does include a healthy dose of multithreaded code, enabling the test harness to take advantage of the 4 hardware threads available to the Pi 2 added another huge jump in performance.  And finally, utilizing the extra RAM (in the form of a larger heap) provided for by the Raspberry Pi 2 yields only a very modest gain in throughput.  In particular, 2 of the 11 component tests (compress, xml) show an improvement with the extra heap.  For more detailed results, you can view this spreadsheet. The added horsepower that the Raspberry Pi 2 brings solidifies the Raspberry Pi family as an important reference platform in the present as well as the future for companies like Oracle.
As the Internet of Things phenomenon plays out and more processing is brought to the edge, the new Raspberry Pi is better suited to handle the increased workload that applications like Oracle Event Processing for Oracle Java Embedded and others require.

Notes

[1] The latest Raspbian distro includes a recent version of Oracle's Java 8 JDK.  Unfortunately, the somewhat dated SPECjvm2008 benchmark has a dependency that requires Java 1.7 or earlier, so for these benchmarks we needed to use a Java 7 JDK.  The latest available, 7u72, was used for these tests.

[2] With 4 cores, the SPECjvm2008 test suite would produce an OutOfMemoryError when run with the initial max heap size argument of -Xmx400m.  Increasing the max heap size to 600MB (-Xmx600m) enabled the tests to complete.  Evidently, SPECjvm2008 requires more heap space as more hardware threads are configured in.

[3] In order to utilize the full amount of RAM (1GB) available with the Raspberry Pi 2, a firmware upgrade may be required.  If necessary, you can invoke 'sudo rpi-update' from a Linux shell on the Raspberry Pi 2 to accomplish this task.


Managing Java Flight Recorder Enabled JVMs with SSL

One advantage Java Flight Recorder enjoys is that its event collection and profiling framework is built right into the JDK, allowing instrumented JVM instances to operate with near-zero overhead.  At arguably negligible expense, would it then make sense to consider enabling Java Flight Recorder on production applications?  We say absolutely! You can google around to search for how this framework can be configured to monitor remote JVMs, and nearly all of the reference material will help you get up to speed quickly.  However, you'll find that these examples favor simplicity over security.  They generally disable authentication and encryption leaving the remote application vulnerable to anyone who knows or can guess its host name and port number.  In the real world, if you wanted to monitor a production application remotely, you'd need to carefully consider the security ramifications. Not finding much in the way of instruction here, we thought it might be helpful to document, by example, how a sample remote Java application can be more securely instrumented and managed with Java Flight Recorder and Java Mission Control.  What follows are the steps necessary to enable monitoring over a connection that is secured using Secure Sockets Layer (SSL). For the remainder of this article, we'll refer to the instrumented JVM as the remote side of the solution.  To demonstrate the ubiquity of the Java SE platform, our remote JVM will actually reside on a Raspberry Pi.  As Java SE and Java SE Embedded runtime environments both now bundle the Java Flight Recorder feature set, it doesn't really matter what type of remote JVM instance we have.  The location where the Java Mission Control application runs will be referred to as the client side.  For this example we'll use an Ubuntu-based system.  In reality, the client could have run on any of the alternative supported Java clients like Windows or MacOS. 
On the remote side (hostname pi1):

Remote side, step 0: Create a directory to house the signed digital certificates required for SSL communication.  Moreover, it should have minimal access.

pi1$ mkdir $HOME/.certs
pi1$ chmod 700 $HOME/.certs

Remote side, step 1: Create a self-signed cryptographic key pair with the JDK keytool(1) utility.  There are many alternatives for creating (and requesting) cryptographic keys.  The method we'll use for this example is the most straightforward, as keytool is part of the JDK, not to mention the least expensive too!

pi1$ keytool -genkey -alias jfrremote -keyalg RSA -validity 730 \
    -keystore $HOME/.certs/jfrremoteKeyStore
Enter keystore password: changeit
Re-enter new password: changeit
What is your first and last name?
  [Unknown]:  Joe Schmo
What is the name of your organizational unit?
  [Unknown]:  Acme Corp
What is the name of your organization?
  [Unknown]:  Skunkworks
What is the name of your City or Locality?
  [Unknown]:  New York
What is the name of your State or Province?
  [Unknown]:  NY
What is the two-letter country code for this unit?
  [Unknown]:  US
Is CN=Joe Schmo, OU=Acme Corp, O=Skunkworks, L=New York, ST=NY, C=US correct?
  [no]:  yes
Enter key password for <jfrremote>
        (RETURN if same as keystore password):

Remote side, step 2: Restrict the access of the newly created jfrremoteKeyStore file.
pi1$ chmod 600 $HOME/.certs/jfrremoteKeyStore

Remote side, step 3: Verify that the keystore contains our newly created key:

pi1$ keytool -keystore $HOME/.certs/jfrremoteKeyStore -list -v
Enter keystore password: changeit
Keystore type: JKS
Keystore provider: SUN

Your keystore contains 1 entry

Alias name: jfrremote
Creation date: 13-Jan-2015
Entry type: PrivateKeyEntry
Certificate chain length: 1
Certificate[1]:
Owner: CN=Joe Schmo, OU=Acme Corp, O=Skunkworks, L=New York, ST=NY, C=US
Issuer: CN=Joe Schmo, OU=Acme Corp, O=Skunkworks, L=New York, ST=NY, C=US
Serial number: 258d7624
Valid from: Tue Jan 13 17:32:51 UTC 2015 until: Thu Jan 12 17:32:51 UTC 2017
Certificate fingerprints:
         MD5:  2E:AD:5F:85:61:21:8D:1A:4B:ED:02:7C:67:26:8B:95
         SHA1: 82:12:D6:A0:4C:20:E4:7F:C5:C1:C7:BC:AD:C7:D1:E8:47:76:F2:A6
         SHA256: AF:E1:D8:7F:67:F3:DA:F1:22:58:42:B9:A5:50:37:6A:BA:49:76:BC:15:5F:11:9D:F0:1E:13:15:39:BB:9F:C4
         Signature algorithm name: SHA256withRSA
         Version: 3

Extensions:

#1: ObjectId: 2.5.29.14 Criticality=false
SubjectKeyIdentifier [
KeyIdentifier [
0000: 5E EC 1E F8 D5 33 F4 E6   29 06 B6 65 39 85 68 05  ^....3..)..e9.h.
0010: F7 19 B3 AF                                        ....
]
]

*******************************************

Remote side, step 4: Export the certificate associated with the recently generated key pair. This will be used by the remote JVM instance and also has to be imported into the trust store of the client application (Java Mission Control or jmc).
pi1$ keytool -export -alias jfrremote -keystore \
    $HOME/.certs/jfrremoteKeyStore -rfc -file jfrremote.cer
Enter keystore password: changeit
Certificate stored in file <jfrremote.cer>

Remote side, step 5: Securely transfer the certificate stored in the jfrremote.cer file over to the system where the Java Mission Control client (jmc) will be run.

Switching over to the client side (hostname R840):

Client side, step 0: Create a directory to house the signed digital certificates required for SSL communication, with minimal access.

R840$ mkdir $HOME/.certs
R840$ chmod 700 $HOME/.certs

Client side, step 1: Import the certificate, represented by the jfrremote.cer file, into the client's trust store.

R840$ keytool -importcert -keystore $HOME/.certs/jfrremoteTrustStore \
    -alias jfrremote -file jfrremote.cer
Enter keystore password: changeit
Re-enter new password: changeit
Owner: CN=Joe Schmo, OU=Acme Corp, O=Skunkworks, L=New York, ST=NY, C=US
Issuer: CN=Joe Schmo, OU=Acme Corp, O=Skunkworks, L=New York, ST=NY, C=US
Serial number: 4fec928c
Valid from: Tue Jan 13 08:41:48 EST 2015 until: Thu Jan 12 08:41:48 EST 2017
Certificate fingerprints:
         MD5:  3D:81:45:16:49:13:85:38:E8:E9:90:50:4A:59:F5:5E
         SHA1: 6E:FA:63:D7:9A:58:26:A4:22:94:33:9F:AA:1A:6C:B6:E4:16:2C:DE
         SHA256: 58:EB:F0:C9:DD:F9:D4:F7:FD:95:4B:2B:61:4C:88:6D:57:E3:87:9F:71:F5:BD:25:67:FB:3C:C0:05:0B:C6:0F
         Signature algorithm name: SHA256withRSA
         Version: 3

Extensions:

#1: ObjectId: 2.5.29.14 Criticality=false
SubjectKeyIdentifier [
KeyIdentifier [
0000: D9 A6 A2 0E CE D2 F7 9D   FA 96 9C B9 9A 32 E2 3A  .............2.:
0010: 98 ED A7 5F                                        ..._
]
]

Trust this certificate? [no]:  yes
Certificate was added to keystore

Client side, step 2: Restrict the access of the newly created jfrremoteTrustStore file.

R840$ chmod 600 $HOME/.certs/jfrremoteTrustStore

Client side, step 3: Create a self-signed cryptographic key pair with the JDK keytool(1) utility.
This represents the certificate for the client-side Java Mission Control (jmc) application.

R840$ keytool -genkey -alias jmc -keyalg RSA -validity 730 \
    -keystore $HOME/.certs/jmcKeyStore
Enter keystore password: changeit
Re-enter new password: changeit
What is your first and last name?
  [Unknown]:  Joe Schmo
What is the name of your organizational unit?
  [Unknown]:  Acme Corp
What is the name of your organization?
  [Unknown]:  Skunkworks
What is the name of your City or Locality?
  [Unknown]:  New York
What is the name of your State or Province?
  [Unknown]:  NY
What is the two-letter country code for this unit?
  [Unknown]:  US
Is CN=Joe Schmo, OU=Acme Corp, O=Skunkworks, L=New York, ST=NY, C=US correct?
  [no]:  yes
Enter key password for <jmc>
        (RETURN if same as keystore password):

Client side, step 4: Restrict the access of the newly created jmcKeyStore file.

R840$ chmod 600 $HOME/.certs/jmcKeyStore

Client side, step 5: Export the certificate associated with the recently generated key pair. This will be used by the Java Mission Control application and also has to be imported into the trust store of the remote JVM instance.

R840$ keytool -export -alias jmc -keystore $HOME/.certs/jmcKeyStore \
    -rfc -file jmc.cer
Enter keystore password: changeit
Certificate stored in file <jmc.cer>

Client side, step 6: Securely transfer the certificate stored in the jmc.cer file over to the system where the remote JVM instance will be run.

Returning to the remote side (hostname pi1):

Remote side, step 6: Import the certificate, represented by the jmc.cer file, into the remote JVM instance's trust store.
pi1$ keytool -import -alias jmc -file jmc.cer \
    -keystore $HOME/.certs/jmcTrustStore
Enter keystore password: changeit
Re-enter new password: changeit
Owner: CN=Joe Schmo, OU=Acme Corp, O=Skunkworks, L=New York, ST=NY, C=US
Issuer: CN=Joe Schmo, OU=Acme Corp, O=Skunkworks, L=New York, ST=NY, C=US
Serial number: 860e0e4
Valid from: Tue Jan 13 20:15:33 UTC 2015 until: Thu Jan 12 20:15:33 UTC 2017
Certificate fingerprints:
         MD5:  7B:D7:F3:9D:71:52:F9:35:03:3A:68:BF:02:C2:52:51
         SHA1: 38:95:6D:2F:DE:FC:99:D6:63:55:00:A8:57:E2:31:FF:53:35:18:F7
         SHA256: 7D:87:87:01:E5:21:58:02:67:0E:7E:2F:14:77:86:12:9D:52:CD:11:A4:B1:C5:D3:32:D8:05:30:61:7B:F5:3E
         Signature algorithm name: SHA256withRSA
         Version: 3

Extensions:

#1: ObjectId: 2.5.29.14 Criticality=false
SubjectKeyIdentifier [
KeyIdentifier [
0000: 0A E8 E3 5E 0A 3C 48 FF   D4 DB 10 A8 62 31 1E F9  ...^.<H.....b1..
0010: 55 D8 4C 7A                                        U.Lz
]
]

Trust this certificate? [no]:  yes
Certificate was added to keystore

Remote side, step 7: Restrict the access of the newly created jmcTrustStore file.

pi1$ chmod 600 $HOME/.certs/jmcTrustStore

Putting it All Together

With key stores and trust stores set up on both sides, we can now start up both components.  Our sample application is a very simple one called Allocator.java. Download it onto your remote system and compile it with the javac program.
When ready, you can start up an SSL-enabled remote JVM instance of the Allocator program in continuous flight recorder mode with the following invocation:

pi1$ java \
    -Dcom.sun.management.jmxremote.port=7091 \
    -Dcom.sun.management.jmxremote.ssl=true \
    -Dcom.sun.management.jmxremote.authenticate=false \
    -Djavax.net.ssl.keyStore=$HOME/.certs/jfrremoteKeyStore \
    -Djavax.net.ssl.keyStorePassword=changeit \
    -Djava.rmi.server.hostname=pi1 \
    -XX:+UnlockCommercialFeatures \
    -XX:+FlightRecorder \
    -XX:FlightRecorderOptions=defaultrecording=true \
    Allocator

On the client side, Java Mission Control is started in the following way:

R840$ jmc -vmargs \
    -Djavax.net.ssl.trustStore=$HOME/.certs/jfrremoteTrustStore \
    -Djavax.net.ssl.trustStorePassword=changeit \
    -Djavax.net.ssl.keyStore=$HOME/.certs/jmcKeyStore \
    -Djavax.net.ssl.keyStorePassword=changeit

Once Java Mission Control has started, we connect to the JVM instance on host pi1 as shown in the screenshots that follow.  First off, we select "Connect..." from the File menu. A "New Connection" window appears.  Select "Create a new connection" and click the "Next>" button. In the next window that appears, "pi1" is entered as the host and "7091" as the port.  Clicking the "Finish" button completes the process.  And here's a screenshot of the MBean Server window for the JVM running on host pi1.

Conclusion

To assist in this repetitive and potentially error-prone task, a series of shell scripts has been created which just might be of use.  Download this tarball, and check out the README files found in the two directories: one for the remote side of the equation, the other for the client. For further edification, please check out these links:

Marcus Hirt's blog for all things related to Java Flight Recorder and Java Mission Control: http://hirt.se/blog/

If you run into SSL handshake problems, this link may prove helpful.
How to Analyze Java SSL Errors: http://java.dzone.com/articles/how-analyze-java-ssl-errors

Erik Costlow's article on self-signed certificates for a known community: https://blogs.oracle.com/java-platform-group/entry/self_signed_certificates_for_a
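Under the hood, the host ("pi1") and port ("7091") entered into Java Mission Control's dialog identify a standard JMX service URL.  The sketch below is illustrative only (the class and method names are invented for the example): it shows how such a URL can be composed with the javax.management.remote API.  Actually connecting would additionally require the SSL properties shown above and a running remote JVM.

```java
import java.net.MalformedURLException;
import javax.management.remote.JMXServiceURL;

public class JmxSslUrlDemo {

    // Build the JMX service URL corresponding to entering host "pi1"
    // and port 7091 into Java Mission Control's connection dialog
    static JMXServiceURL serviceUrl(String host, int port) {
        try {
            // Equivalent to the string form
            // service:jmx:rmi://HOST/jndi/rmi://HOST:PORT/jmxrmi
            return new JMXServiceURL("rmi", host, 0,
                    "/jndi/rmi://" + host + ":" + port + "/jmxrmi");
        } catch (MalformedURLException e) {
            throw new IllegalArgumentException(e);
        }
    }

    public static void main(String[] args) {
        JMXServiceURL url = serviceUrl("pi1", 7091);
        // An actual connection would pass this URL (plus the SSL trust/key
        // material configured above) to JMXConnectorFactory.connect(url, env)
        System.out.println(url.getURLPath());
    }
}
```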


USB Device Access for Java SE and OSGi

One of the challenges in creating the content for the Java One 2014 Java SE Embedded Internet of Things Hands-on-Lab concerned interacting, via Java and OSGi, with a USB temperature sensor.  Unfortunately, a USB communications API is not part of the Java SE standard (as of this post: Halloween 2014).  So the question is, how can Java/USB communication be established, and, furthermore, how does this work within the OSGi framework? In looking around at some of the available options, we chose javahidapi as the basis for this connectivity.  As a Java/JNI wrapper around the C/C++ HID API for Linux, Mac OS X and Windows, the appeal behind this API is that using it does not require the use of a custom driver for each device on each platform. In order to operate within an OSGi framework (in this case Apache Felix 4.4), javahidapi's open source code has been slightly modified/enhanced. The end result is that an OSGi bundle is available that can be dropped into standard OSGi frameworks to support USB communication for HID devices.  It does contain a native component, and for the sake of simplicity, we've decided to include a separate jar file for each supported architecture.
For the OSGi enthusiast, here's what the generated MANIFEST.MF file looks like for the Linux/armhf (suitable for the Raspberry Pi) architecture:

Manifest-Version: 1.0
Bnd-LastModified: 1415889978962
Build-Jdk: 1.7.0_51
Built-By: jtconnor
Bundle-Activator: com.codeminders.hidapi.Activator
Bundle-ManifestVersion: 2
Bundle-Name: hidapi OSGi Bundle for Linux/armhf
Bundle-NativeCode: native/linux/armv6l/libhidapi-jni-32.so; osname=Linux; processor=armv6l
Bundle-SymbolicName: com.codeminders.hidapi-armhf
Bundle-Version: 1.0.0
Created-By: Apache Maven Bundle Plugin
Export-Package: com.codeminders.hidapi;uses:="org.osgi.framework";version="1.0.0"
Import-Package: org.osgi.framework;version="[1.6,2)"
Tool: Bnd-1.50.0

Here are some pre-built hidapi OSGi bundles for popular Linux platforms:

com.codeminders.hidapi-armhf-1.0.jar (ARMv6 hard float - suitable for Raspberry Pi)
com.codeminders.hidapi-armv5tel-1.0.jar (ARMv5 - suitable for plug computers)
com.codeminders.hidapi-x86-1.0.jar (Linux/x86 32-bit)
com.codeminders.hidapi-x86_64-1.0.jar (Linux/x86 64-bit)
com.codeminders.hidapi-quark-1.0.jar (Linux/x86 Intel Quark processor)

To get a feel for the changes made to the original source, here's what was done:

- A NetBeans project was created under the Maven->OSGi Bundle category.
- The Java source code for javahidapi was placed in the project's src/main/java/com/codeminders/hidapi directory.
- The architecture-specific native library was placed in the project's src/main/resources/native/linux/architecture directory.  For example, the Linux/x86 version of the project places the libhidapi-jni-32.so file in the src/main/resources/native/linux/x86 directory.
- An Activator.java class was added in the project's src/main/java/com/codeminders/hidapi directory.  In OSGi, the start() method in this class gets called when the bundle is activated.  It is specified in the bundle's MANIFEST.MF file.
- The original ClassPathLibraryLoader.java file was simplified and is currently only appropriate for Linux deployments.
- As this is a Maven-based project, the project's pom.xml file was edited (here's what the x86 version looks like), such that at build time it will generate a MANIFEST.MF file similar to the one referenced above.

And here are the associated NetBeans projects which can be used to build the bundles referenced above:

- hidapi_armhf
- hidapi_armv5tel
- hidapi_x86
- hidapi_x86_64
- hidapi_quark

If you'd like to extend this template to include OSGi bundles for additional architectures, you can start with one of the projects above, clone it, and make the appropriate changes for your new environment.  If the native javahidapi component for your new platform is not available, you'll have to pull down the source for hidapi and build it for inclusion into your project.  If anyone is interested in going through this exercise, I'd be glad to post the fruits of their labor here.
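The heart of a simplified ClassPathLibraryLoader is mapping the running platform to the right bundled .so, extracting it from the classpath, and handing it to System.load().  The class below is a hypothetical sketch of that idea, not the project's actual code; the resource layout mirrors the src/main/resources/native/linux/architecture convention described above, and the method and file names are illustrative:

```java
import java.io.File;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.StandardCopyOption;

/**
 * Illustrative loader: maps the current platform to a native library
 * bundled under /native/linux/<arch>/ on the classpath, copies it to
 * a temp file, and loads it with System.load().
 */
public class NativeLibLoader {

    // Map os.name/os.arch values to the resource path used in the bundle.
    static String resourcePathFor(String osName, String osArch) {
        if (!osName.toLowerCase().contains("linux")) {
            throw new UnsupportedOperationException("Only Linux is handled in this sketch");
        }
        return "/native/linux/" + osArch + "/libhidapi-jni-32.so";
    }

    public static void load() throws IOException {
        String path = resourcePathFor(System.getProperty("os.name"),
                                      System.getProperty("os.arch"));
        try (InputStream in = NativeLibLoader.class.getResourceAsStream(path)) {
            if (in == null) {
                throw new IOException("No bundled native library at " + path);
            }
            // JNI can't load directly from a jar, so extract first.
            File tmp = File.createTempFile("hidapi-jni", ".so");
            tmp.deleteOnExit();
            Files.copy(in, tmp.toPath(), StandardCopyOption.REPLACE_EXISTING);
            System.load(tmp.getAbsolutePath());
        }
    }

    public static void main(String[] args) {
        // Show the resolved resource path for a Raspberry Pi class device.
        System.out.println(resourcePathFor("Linux", "armv6l"));
    }
}
```

An OSGi Activator's start() method would typically call a loader like this once, before any javahidapi classes touch native code.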


Java SE Embedded IoT Hands On Lab Returns for Java One 2014

After a one year hiatus, a Hands On Lab featuring Java SE for Embedded devices will return for The Java One 2014 Conference.  Entitled HOL2097 Java SE Embedded IoT Hands On Lab, students who attend this lab will:

- Have their own individual embedded device to gain valuable hands-on experience.
- Learn how to create Java 8 Compact Profiles using the eJDK jrecreate tool.  Students will use and deploy those Compact Profiles on their individual device.
- Learn about profile-aware tools available both at the JDK and IDE (NetBeans) level that aid developers in creating Java applications suitable for specific Compact Profiles.
- Learn about and utilize OSGi, an important Java-based modular service platform ideal for dynamically managing the lifecycle of software components, especially suited for gateway devices.
- Attach a sensor to the gateway device and deploy a Java/OSGi bundle on the device to capture data from the sensor.
- Build and deploy a Java/OSGi web service bundle on the gateway device that publishes data captured by the sensor.  Students will be able to remotely access this web service to retrieve the sensor data in standard JSON format.

For those attending the Java One 2014 Conference, seating for this lab is limited.  So reserve your spot early.  We could also conceivably host a smaller workshop at a customer location effectively duplicating this lab.  If you're interested, you can drop me a line at james.connors@oracle.com.
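The lab's web service bundle itself isn't reproduced here, but the "sensor data in standard JSON format" step is simple enough to sketch.  The snippet below is hypothetical and dependency-free; the field names and sensor id are invented for illustration, not taken from the lab materials:

```java
import java.util.Locale;

/**
 * Hypothetical sketch: format a temperature reading as the kind of
 * flat JSON payload a sensor web service bundle might publish.
 */
public class SensorJson {

    static String toJson(String sensorId, double tempCelsius, long timestampMillis) {
        // Hand-rolled JSON is adequate for a flat payload like this;
        // Locale.ROOT keeps the decimal separator a '.' everywhere.
        return String.format(Locale.ROOT,
            "{\"sensor\":\"%s\",\"tempC\":%.2f,\"timestamp\":%d}",
            sensorId, tempCelsius, timestampMillis);
    }

    public static void main(String[] args) {
        System.out.println(toJson("usb-temp-1", 21.5, 1409000000000L));
    }
}
```

A real bundle would serve this string over HTTP; the formatting logic stays the same either way.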


Java Serial Communications Revisited

Last touched upon in this article dated August 2011, it's high time to revisit the state of Java serial port communications options for the following reasons:

- As the hype around the Internet of Things continues to ratchet up, serial communications are a requirement for a certain class of IoT sensors and devices.
- RxTx, the platform with arguably the most history, cannot be built with the recently released Java 8 JDK without some modifications.
- For some time now the website hosting the RxTx project, http://rxtx.qbang.org/wiki/index.php/Main_Page, has not been available.
- Building support for additional specifications like RS-485 and parallel is available in RxTx but was never addressed in the previous serial communications article.
- An alternative framework called jSSC is gaining in popularity and is worth further discussion.
- Work in the OpenJDK Device I/O Project is progressing.  Among the goals of this project is support for serial communications.

RxTx

In my customer experiences, RxTx, despite its age, still gets mentioned most when considering serial communications with Java.  For whatever reasons, the RxTx project has gone offline, and access to the project source code is not readily available.  Fortunately, we have a copy, and have made a few enhancements such that:

- The source can now be compiled with JDK versions 6, 7 and 8.
- The original article discussed only enough modifications to build the librxtxSerial.so shared object required for traditional serial communications.  The librxtxParallel.so, librxtxRaw.so, librxtxI2C.so and librxtxRS485.so shared objects could not be built.  With some very slight modifications, these can successfully be built too.  I make absolutely no promises as to their usefulness, but they do compile. :)
- The source code is based upon the original 2.1-7r2 version and in this instance is now called 2.1.7r2-Java8.  You can download the source here.
If you want to get a feel for the changes made, take a look at this file: JAVA8_MODS.txt, which can be found in the main directory of the source code.  To build RxTx on your native platform:

    $ tar xvf 2.1.7r2-Java8.tar.gz
    $ cd 2.1.7r2-Java8/
    $ ./configure
    $ make

jSSC

The Java Simple Serial Connector, or jSSC for short, is another open source project that can be found here: http://code.google.com/p/java-simple-serial-connector/.  It is available for a host of processor/OS combinations and, in addition to source availability, comes bundled in binary form too.  Here's a list of supported platforms for its current 2.6.0 release:

- Win32
- Win64
- Linux_x86
- Linux_x86_64
- Linux_ARM
- Solaris_x86
- Solaris_x86_64
- MacOSX_x86
- MacOSX_x86_64
- MacOSX_PPC
- MacOSX_PPC64

Like RxTx, it contains a native component but is packaged in a nice transparent fashion.  The one potential challenge here may be in trying to figure out how to support a new platform that isn't on this list.  I didn't have a whole lot of success finding out how to build the binary, and admittedly didn't spend an inordinate amount of time trying to figure it out either.  Nonetheless, this project is gaining in popularity and has a dedicated individual supporting the software.

OpenJDK Device I/O

Finally, a project is underway to treat serial communication, and device I/O in general, as a first-class citizen of the Java SE standard.  The wiki can be found here: https://wiki.openjdk.java.net/display/dio/Main.  It is based on the work done to provide device I/O to the Java ME 8 platform.  Further solidifying the universality theme of Java 8, the ultimate goal would be to have a consistent device I/O API across both Java SE and Java ME.  If you want to further understand what those APIs look like, you can view them here: http://docs.oracle.com/javame/8.0/api/dio/api/index.html.

In conclusion, support for serial communications in Java SE is -- albeit slowly -- progressing.
There are multiple open source projects and commercial alternatives too.  Ideally, it would be great to see a formal API supported by the Java SE standard.
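One practical consequence of this fragmented landscape: since RxTx, jSSC and the Device I/O project all boil down to open-a-port-and-read-bytes, application code can stay framework-neutral by hiding the port behind a small interface.  The sketch below is purely illustrative (the Port interface and all names are invented; neither RxTx nor jSSC is needed to compile it), with an in-memory stand-in playing the role of the real port:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

/** Minimal framework-neutral abstraction over a serial port. */
interface Port {
    // A real adapter would wrap RxTx or jSSC open/configure calls here.
    InputStream open(int baudRate) throws IOException;
}

public class LineReader {

    /** Read bytes up to a newline (or EOF); works over any Port adapter. */
    static String readLine(InputStream in) throws IOException {
        StringBuilder sb = new StringBuilder();
        int b;
        while ((b = in.read()) != -1 && b != '\n') {
            sb.append((char) b);
        }
        return sb.toString();
    }

    public static void main(String[] args) throws IOException {
        // In-memory stand-in for a real serial port, for demonstration.
        Port fake = baud -> new ByteArrayInputStream("22.5 C\n".getBytes());
        System.out.println(readLine(fake.open(9600)));
    }
}
```

Swapping frameworks then means writing one small adapter rather than touching the application's read loop.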


An Embedded Java 8 Lambda Expression Microbenchmark

It's been a long road, but Java 8 has finally arrived.  Much has been written and said about all the new features contained in this release; perhaps the most important of these is the introduction of Lambda Expressions.  Lambdas are now intimately integrated into the Java platform, and they have the potential to aid developers in the traditionally tricky realm of parallel programming.  Following closely behind, Compact Profiles promise to open up the tremendous benefits of Java Standard Edition compatibility to embedded platforms previously thought to be too small.  Can you see where this is heading?  It might be interesting to use these two technologies simultaneously and see how well they work together.  What follows is the description of a small program and its performance measurements -- a microbenchmark if you will -- that aims to highlight how programming with the new Lambda Expression paradigm can be beneficial not only for typical desktops and servers, but also for a growing number of embedded platforms too.

The Hardware/OS Platform(s)

Of primary interest for this article is the Boundary Devices BD-SL-i.MX6 single board computer.  It is a quad-core ARM® Cortex™-A9 based system with 1GB RAM running an armhf Debian Linux distribution.  At the time of this article's publication, its list price is US $199.  What makes it more interesting is that we'll not only run Java 8 Lambda Expressions on device, we'll do it within the confines of the new Java 8 Compact1 profile.  The static footprint of this Java runtime environment is 10½ MB.

A second system, altogether different in capability and capacity from our embedded device, will be used as a means to compare and contrast execution behavior across disparate hardware and OS environments.  The system in question is a Toshiba Tecra R840 laptop running Windows 7/64-bit.  It has a dual-core Intel® Core™ i5-2520M processor with 8GB RAM and will use the standard Java 8 Runtime Environment (JRE) for Windows 64-bit.
The Application

Looking for a sample dataset as the basis for our rudimentary application, this link provides an ideal (and fictional) database of employee records.  Among the available formats, a comma-delimited CSV file is supplied with approximately 300,000 entries.  Our sample application will read this file and store the employee records into a LinkedList<EmployeeRec>.  The EmployeeRec has the following fields:

    public class EmployeeRec {
        private String id;
        private String birthDate;
        private String lastName;
        private String firstName;
        private String gender;
        private String hireDate;
        ...
    }

With this data structure initialized, our application is asked to perform one simple task: calculate the average age of all male employees.

Old School

First off, let's perform this calculation in a way that predates the availability of Lambda Expressions.  We'll call this version OldSchool.  The code performing the "average age of all male employees" calculation looks like this:

    double sumAge = 0;
    long numMales = 0;
    for (EmployeeRec emp : employeeList) {
        if (emp.getGender().equals("M")) {
            sumAge += emp.getAge();
            numMales += 1;
        }
    }
    double avgAge = sumAge / numMales;

Lambda Expression Version 1

Our second variation will use a Lambda Expression to perform the identical calculation.  We'll call this version Lambda stream().  The key statement in Java 8 looks like this:

    double avgAge = employeeList.stream()
                    .filter(s -> s.getGender().equals("M"))
                    .mapToDouble(s -> s.getAge())
                    .average()
                    .getAsDouble();

Lambda Expression Version 2

Our final variation uses the preceding Lambda Expression with one slight modification: it replaces the stream() method call with the parallelStream() method, offering the potential to split the task into smaller units running on separate threads.  We'll call this version Lambda parallelStream().
The Java 8 statement looks as follows:

    double avgAge = employeeList.parallelStream()
                    .filter(s -> s.getGender().equals("M"))
                    .mapToDouble(s -> s.getAge())
                    .average()
                    .getAsDouble();

Initial Test Results

The charts that follow display execution times of the sample problem solved via our three aforementioned variations.  The left chart represents times recorded on the ARM Cortex-A9 processor, while the right chart shows recorded times for the Intel Core-i5.  The smaller the result, the faster the run.  Both examples indicate that there is some overhead to utilizing a serial Lambda stream() over and above the old school pre-Lambda solution.  As far as parallelStream() goes, it's a mixed bag.  For the Cortex-A9, the parallelStream() operation is negligibly faster than the old school solution, whereas for the Core-i5, the overhead incurred by parallelStream() actually makes the solution slower.  Without any further investigation, one might conclude that parallel streams may not be worth the effort.  But what if performing a trivial calculation on a list of 300,000 employees simply isn't enough work to show the benefits of parallelization?  For this next series of tests, we'll increase the computational load to see how performance might be affected.

Adding More Work to the Test

For this version of the test, we'll solve the same problem, that is to say, calculate the average age of all males, but add a varying amount of intermediate computation.
We can variably increase the number of required compute cycles by introducing the following identity method to our programs:

    /*
     * Rube Goldberg way of calculating the identity of 'val',
     * assuming the number is positive
     */
    private static double identity(double val) {
        double result = 0;
        for (int i = 0; i < loopCount; i++) {
            result += Math.sqrt(Math.abs(Math.pow(val, 2)));
        }
        return result / loopCount;
    }

As this method takes the square root of the square of a number, it is in essence an expensive identity function.  By changing the value of loopCount (this is done via a command-line option), we can change the number of times this loop executes per identity() invocation.  This method is inserted into our code, for example with the Lambda parallelStream() version, as follows:

    double avgAge = employeeList.parallelStream()
                    .filter(s -> s.getGender().equals("M"))
                    .mapToDouble(s -> identity(s.getAge()))
                    .average()
                    .getAsDouble();

A modification identical to the identity() call shown above is also applied to both the OldSchool and Lambda stream() variations.  The charts that follow display execution times for three separate runs of our microbenchmark, each with a different value assigned to the internal loopCount variable in our Rube Goldberg identity() function.  For the Cortex-A9, you can clearly see the performance advantage of parallelStream() when the loop count is set to 100, and it becomes even more striking when the loop count is increased to 500.  For the Core-i5, it takes a lot more work to realize the benefits of parallelStream().  Not until the loop count is set to 50,000 do the performance advantages become apparent.  The Core-i5 is so much faster and only has two cores; consequently the amount of effort needed to overcome the initial overhead of parallelStream() is much more significant.

Downloads

The sample code used in this article is available as a NetBeans project.
As the project includes a CSV file with over 300,000 entries, it is larger than one might expect.  The blogs.oracle.com site prohibits storing files larger than 2MB, so the project source has been compressed and split into three parts.  Here are the links:

- LambdaMicrobench.zip.part1
- LambdaMicrobench.zip.part2
- LambdaMicrobench.zip.part3

Just concatenate the three downloaded files together to recreate the original LambdaMicrobench.zip file.  In Linux, the command would look something like this:

    $ cat LambdaMicrobench.zip.part? > LambdaMicrobench.zip

Conclusion

A great deal of effort has been put into making Java 8 a much more universal platform.  Our simple example here demonstrates that even an embedded Java runtime environment as small as 10½ MB can take advantage of the latest advances to the platform.  This is just the beginning.  There is lots more work to be done to further enhance the performance characteristics of parallel stream Lambda Expressions.  We look forward to future enhancements.
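If you'd rather not download and reassemble the project, the three variations can also be compared on synthetic data with a small stand-alone harness.  The class below is a sketch, not the article's actual benchmark: the field set is reduced to the two fields the calculation needs, and the ages are invented.  Wrap each call in System.nanoTime() bookends to reproduce the timing comparison:

```java
import java.util.ArrayList;
import java.util.List;

/** Stand-alone sketch of the article's three variations on synthetic data. */
public class AvgAgeDemo {

    static class Employee {
        final String gender;
        final int age;
        Employee(String gender, int age) { this.gender = gender; this.age = age; }
        String getGender() { return gender; }
        int getAge() { return age; }
    }

    // Variation 1: pre-Lambda loop ("OldSchool").
    static double oldSchool(List<Employee> list) {
        double sumAge = 0;
        long numMales = 0;
        for (Employee emp : list) {
            if (emp.getGender().equals("M")) {
                sumAge += emp.getAge();
                numMales += 1;
            }
        }
        return sumAge / numMales;
    }

    // Variation 2: serial stream.
    static double withStream(List<Employee> list) {
        return list.stream()
                   .filter(s -> s.getGender().equals("M"))
                   .mapToDouble(s -> s.getAge())
                   .average()
                   .getAsDouble();
    }

    // Variation 3: parallel stream.
    static double withParallelStream(List<Employee> list) {
        return list.parallelStream()
                   .filter(s -> s.getGender().equals("M"))
                   .mapToDouble(s -> s.getAge())
                   .average()
                   .getAsDouble();
    }

    static List<Employee> syntheticData(int n) {
        List<Employee> list = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            list.add(new Employee(i % 2 == 0 ? "M" : "F", 20 + (i % 40)));
        }
        return list;
    }

    public static void main(String[] args) {
        List<Employee> employees = syntheticData(300_000);
        // All three variations must agree on the result.
        System.out.println(oldSchool(employees));
        System.out.println(withStream(employees));
        System.out.println(withParallelStream(employees));
    }
}
```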


Introducing the EJDK

In lock step with the introduction of Compact Profiles, Java 8 includes a new distribution mechanism for Java SE Embedded called the EJDK.  As the potential exists to confuse the EJDK with the standard JDK (Java Development Kit), it makes sense to dedicate a few words towards highlighting how these two packages differ in form and function.

The JDK

The venerable Java Development Kit is the mainstay of Java developers.  It incorporates not only a standard Java Runtime Environment (JRE), but also includes critical tools required by those same developers.  For example, among many others, the JDK comes with a Java compiler (javac), a Java console application (jconsole), the Java debugger (jdb) and the Java archive utility (jar).  It also serves as the underpinnings for very popular Java Integrated Development Environments (IDEs) such as NetBeans, Eclipse, JDeveloper and IntelliJ, to name a few.  Like Java, the Java Development Kit is constantly evolving, and Java 8 brings about its fair share of enhancements to the JDK.  For Java 8, javac can now be instructed (via the -profile command-line option) to ensure that your source code is compatible with a specific compact profile.  Furthermore, the Java 8 JDK comes with a useful new tool called jdeps, providing a means to analyze your compiled class and jar files for dependencies.

The EJDK

The EJDK is new to Java 8, and although similar in namesake to the JDK, it serves quite a different purpose.  Prior to Java 8, supported Java SE-Embedded runtime platforms were provided as binaries by Oracle.  With the advent of Compact Profiles, the number of possible binary options per supported platform would simply be too unwieldy.  Rather than furnishing binaries for each of the possible combinations, an EJDK will be supplied for each supported Java SE-Embedded platform.  It contains the tools needed to create the profile you wish to use.
The EJDK is designed to be run on either Windows or Linux/Unix platforms alongside a Java runtime environment.  It contains a wrapper called jrecreate (jrecreate.sh for Unix/Linux and jrecreate.bat for Windows) whose function is to create deployable compact profile instances.  In the examples that follow, we'll show two sample invocations.

First off, let's briefly take a look at the contents of a typical EJDK.  For our first example, we've installed the EJDK on a Linux/x86 system.  Listing the contents of the ejdk1.8.0/ directory, we see a subdirectory named linux_arm_vfp_hflt/.  This tells us what platform this instance of the EJDK supports.  For all our examples we'll use an EJDK that creates compact profiles suitable for the Linux/Arm Hard Float platform, oftentimes referred to as armhf.

    $ ls ejdk1.8.0
    bin  doc  lib  linux_arm_vfp_hflt

Looking one level deeper into the bin/ directory, we see the jrecreate.bat and jrecreate.sh files:

    $ ls ejdk1.8.0/bin
    jrecreate.bat  jrecreate.config.properties  jrecreate.sh

As we're on a Linux system, let's use the jrecreate.sh script to create a compact profile:

    $ ./ejdk1.8.0/bin/jrecreate.sh --profile compact1 --dest compact1-minimal --vm minimal

Briefly reviewing this invocation, the --profile compact1 option instructs jrecreate to use the Compact1 profile.  The --profile option accepts [compact1 | compact2 | compact3] as an argument.  The --dest compact1-minimal option specifies the name of the destination directory containing the newly generated profile.  Note that the directory argument to --dest must not exist prior to invocation.  Finally, the --vm minimal option tells jrecreate to use the minimal (i.e. the smallest) virtual machine for this instance.  The --vm option accepts [minimal | client | server | all] as an argument.
Running the complete jrecreate.sh command, we get the following output:

    $ ./ejdk1.8.0/bin/jrecreate.sh --profile compact1 --dest compact1-minimal --vm minimal
    Building JRE using Options {
        ejdk-home: /home/java8/ejdk1.8.0
        dest: /home/java8/compact1-minimal
        target: linux_arm_vfp_hflt
        vm: minimal
        runtime: compact1 profile
        debug: false
        keep-debug-info: false
        no-compression: false
        dry-run: false
        verbose: false
        extension: []
    }
    Target JRE Size is 10,595 KB (on disk usage may be greater).
    Embedded JRE created successfully

This creates a Compact1 profile distribution of about 10½ MB in the compact1-minimal/ directory.  For our second example, we'll create a profile based on Compact2 and the client VM, this time from a Windows 7/64-bit system:

    c:\demo>ejdk1.8.0\bin\jrecreate.bat --profile compact2 --dest compact2-client --vm client
    Building JRE using Options {
        ejdk-home: c:\demo\ejdk1.8.0\bin\..
        dest: c:\demo\compact2-client
        target: linux_arm_vfp_hflt
        vm: client
        runtime: compact2 profile
        debug: false
        keep-debug-info: false
        no-compression: false
        dry-run: false
        verbose: false
        extension: []
    }
    Target JRE Size is 17,552 KB (on disk usage may be greater).
    Embedded JRE created successfully

This Compact2 instance is created in the compact2-client/ directory and has an approximate footprint of 17½ MB.  Additional options to jrecreate are available for further customization.  Finally, let's migrate the generated profiles over to a real device.  As a host platform we'll use none other than the ubiquitous Raspberry Pi.
Here's a listing of the two profiles and their size (in 1K blocks) on the filesystem:

    pi@pi0 ~/java8 $ ls
    compact1-minimal  compact2-client
    pi@pi0 ~/java8 $ du -sk compact*
    10616   compact1-minimal
    17660   compact2-client

And here's what each version outputs when java -version is run:

    pi@pi0 ~/java8 $ ./compact1-minimal/bin/java -version
    java version "1.8.0"
    Java(TM) SE Embedded Runtime Environment (build 1.8.0-b127, profile compact1, headless)
    Java HotSpot(TM) Embedded Minimal VM (build 25.0-b69, mixed mode)
    pi@pi0 ~/java8 $ ./compact2-client/bin/java -version
    java version "1.8.0"
    Java(TM) SE Embedded Runtime Environment (build 1.8.0-b127, profile compact2, headless)
    Java HotSpot(TM) Embedded Client VM (build 25.0-b69, mixed mode)

In conclusion, you are encouraged to experiment with the EJDK.  It will very quickly give you a feel for the compact profile configuration options available for your device.


Java SE Embedded Pricing Explained

You're probably asking yourself, "Pricing?  Really?  In a techie blog?", and I would normally agree wholeheartedly with your assessment.  But in this one instance the topic might be worthy of a few words.  There is, as the expression goes, no such thing as a free lunch.  Whether you pay for software outright or roll your own with open source projects, a cost must be paid.  Like clockwork, we regularly receive inquiries for Java embedded information that go something like this:

    Dear Oracle, We've downloaded and evaluated Java SE-Embedded and have found it to be a very appealing platform to run our embedded application.  We understand this is commercial software; before we decide to deploy our solution with your runtime, can you give us a feel for the royalties associated with shipping x number of units?

Seems pretty straightforward, right?  Well, yes, except that in the past Oracle required the potential customer to sign a non-disclosure agreement prior to receiving any embedded pricing information.  It didn't matter if the customer was interested in deploying ten units or ten thousand; they all had to go through this process.  Now certain aspects of pricing may still require confidential agreements, but why not make quantity 1 list prices available?  With the release of this document, that pricing information is now public.

The evidence is out there, both anecdotal and real, demonstrating that Oracle's Java SE-Embedded platform is unquestionably superior in quality and performance to the OpenJDK variants.  For the latest example, take a look at this blog entry.  So the question becomes: is it actually more affordable to pay for a commercial platform that is fully supported, faster and more reliable, or to opt for a "free" platform and support it yourself?

So What Does Java SE-Embedded Cost?

The universal answer to such a question is: it depends.  That is to say, it depends upon the capability of the embedded processor.
Before we lose you, let's show the list price for Java embedded licensing associated with three platforms and then explain how we arrived at the numbers.  As of the posting of this entry, 06 December 2013, here they are:

- Per-unit cost for a Raspberry Pi: US $0.71
- Per-unit cost for a system based on the Intel Atom Z510P: US $2.68
- Per-unit cost for a Compulab Trim-Slice: US $5.36

How Does It Work?

These bullet points help describe the process; then we'll show how we arrived at our three sample platform prices.

- Pricing is done on a per-core basis.
- Processors are classified based on their capability and assigned a core factor.  The more capable the processor, the higher the core factor.
- Per-core pricing is determined by multiplying the standard per-core Java embedded price by the core factor.
- A 19% Software Update License & Support fee is automatically added onto each system.

The core factor table, found in the Oracle Java Embedded Global Price List dated September 20, 2013, groups processors of similar capabilities into buckets called chip classes.  Each chip class is assigned a core factor.

Example 1

To compute the per-unit cost, use this formula:

    Oracle Java Embedded per-core license fee * core factor * number of cores * support uplift

The standard per-core license fee is always $300.  The Raspberry Pi is a Class I device and therefore has a core factor of .002.  There is only one core in the Raspberry Pi, and the Software Update License & Support fee is always 19%.  So plugging in the numbers, we get:

    $300 * .002 * 1 * 1.19 = $0.714

Example 2

The processor in this example, the Intel Atom Z510P, is a Class II device and has a core factor of .0075.  Using the same formula from Example 1, here's what we get:

    $300 * .0075 * 1 * 1.19 = $2.6775

Example 3

The processor for the Trim-Slice is based on the ARM Cortex-A9, a Class II device.  Furthermore, it is a dual-core system.
Using the same formula as the previous examples, we arrive at the following per-unit pricing:

    $300 * .0075 * 2 * 1.19 = $5.355

Conclusion

With your hardware specs handy, you should now have enough information to make a reasonable estimate of Oracle Java embedded licensing costs.  At minimum, it could be a help in your "buy vs. roll your own" decision-making process.  And of course, if you have any questions, don't be afraid to ask.
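For the spreadsheet-averse, the per-unit formula is simple enough to encode directly.  The small class below reproduces the three worked examples; the $300 per-core fee, the core factors and the 19% uplift all come from the post above (the class and method names are, of course, just for illustration):

```java
/** Encodes the post's per-unit formula: fee * core factor * cores * uplift. */
public class JavaEmbeddedPricing {

    static final double PER_CORE_FEE = 300.0;
    static final double SUPPORT_UPLIFT = 1.19;  // 19% Software Update License & Support

    static double perUnit(double coreFactor, int cores) {
        return PER_CORE_FEE * coreFactor * cores * SUPPORT_UPLIFT;
    }

    public static void main(String[] args) {
        // The three examples from the post.
        System.out.println(perUnit(0.002, 1));   // Raspberry Pi: Class I, 1 core
        System.out.println(perUnit(0.0075, 1));  // Intel Atom Z510P: Class II, 1 core
        System.out.println(perUnit(0.0075, 2));  // Trim-Slice: Class II, 2 cores
    }
}
```

Plug in the core factor for your chip class and the core count, and you have your list-price estimate.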


Comparing Linux/Arm JVMs Revisited

It's been about 18 months since we last compared Linux/Arm JVMs, and with the formal release of the much anticipated Java SE Embedded for Arm hard float binary, it marks a good time to revisit JVM performance.  The information and results that follow will highlight the following comparisons:

- Java SE-E Arm VFP (armel) vs. Arm Hard Float (armhf)
- Java SE-E armhf Client Compiler (c1) vs. armhf Server Compiler (c2)
- And last but certainly not least ... Java SE-E 7u40 armhf vs. OpenJDK armhf

The Benchmark

For the sake of simplicity and consistency, we'll use a subset of the DaCapo benchmark suite.  It's an open source group of real-world applications that put a good strain on a system, both from a processor and a memory workload perspective.  We are aware of customers who use DaCapo to gauge performance, and due to its availability and ease of use, it enables anyone interested to run their own set of tests in fairly short order.

The Hardware

It would have been grand to run all these benchmarks on one platform, most notably the beloved Raspberry Pi, but unfortunately it has its limitations:

- There is no Java SE-E server compiler (c2) for the Raspberry Pi.  Why?  Because the Pi is based on an ARMv6 instruction set, whereas the Java SE-E c2 compiler requires a minimum ARMv7 instruction set.
- Demonstrating how rapidly advances are being made in the hardware arena, the Raspberry Pi, within the context of these tests, is quite a humble platform.  With 512MB RAM, it runs out of memory when running some of the large DaCapo component applications.

For these tests we'll primarily use a quad-core Cortex-A9 based system, and for one test we'll utilize a single-core Marvell Armada system just to compare what effect the number of cores has on server compiler performance.
The devices in question are:

- Boundary Devices BD-SL-i.MX6, quad-core 1GHz Cortex-A9 (Freescale i.MX6), 1GB RAM, Debian Wheezy distribution, 3.0.35 kernel (for both armel and armhf configurations)
- GlobalScale D2Plug, single-core 800MHz ARMv6/v7 processor (Marvell PXA510), 1GB RAM, Debian Wheezy distribution, 3.5.3-cubox-di+ kernel for armhf

Java SE-E armel vs. armhf

The chart that follows compares the relative performance of the armel Java SE-E 7u40 JRE with the armhf Java SE-E 7u40 JRE for 8 of the DaCapo component applications.  These tests were conducted on the Boundary Devices BD-SL-i.MX6.  Both armel and armhf environments were based on the Debian Wheezy distribution running a 3.0.35 kernel.  For all charts, the smaller the result, the faster the run.

In all 8 tests, the armhf binary is faster, some only slightly, and in one case (eclipse) a few percentage points faster.  The big performance gain associated with the armhf standard deals with floating point operations, and in particular, the passing of arguments directly into floating point registers.  The performance gains realized by the newer armhf standard will be seen more in the native application realm than in Java SE-Embedded, primarily because the Java SE-E armel VM already uses FP registers for Java floating point methods.  There are still, however, certain floating point workloads that may show a modest performance increase (in the single-digit percent range) with Java SE-E armhf over Java SE-E armel.

Java SE-E Client Compiler (c1) vs. Server Compiler (c2)

In this section, we'll show test results for two different platforms, the first a single-core system, followed by the same tests on a quad-core system.  To further demonstrate how workload changes performance, we'll take advantage of the ability to run the DaCapo component applications in three different modes: small, default (medium) and large.
The first chart displays the aggregate time required to run the tests for the three modes, utilizing both the 7u40 client (c1) compiler and the server (c2) compiler.  As expected, c1 outperforms c2 by a wide margin for the tests that run only briefly.  As the total time to run the tests increases from small to large, the c2 compiler gets a chance to "warm up" and close the gap in performance.  But it never does catch up.

Contrast the first chart with the one that follows, where small, medium and large versions of the tests were run on a quad-core system.  The c2 compiler is better able to utilize the additional compute resources supplied by this platform, the result being that the initial gap in performance between c1 and c2 for the small version of the test is only 19%.  By the time we reach the large version, c2 outperforms c1 by 7%.  The moral of the story here is: given enough resources, the server compiler might be the better of the VMs for your workload if it is a long-lived process.

Java SE-E 7u40 armhf vs. OpenJDK armhf

For this final section, we'll break out performance on an application-by-application basis for the following JRE/VMs:

- Java SE Embedded 7u40 armhf Client Compiler (c1)
- Java SE Embedded 7u40 armhf Server Compiler (c2)
- OpenJDK 7 IcedTea 2.3.10 7u25-2.3.10-1~deb7u1 OpenJDK Zero VM (build 22.0-b10, mixed mode)
- OpenJDK 7 IcedTea 2.3.10 7u25-2.3.10-1~deb7u1 JamVM (build 1.6.0-devel, inline-threaded interpreter with stack-caching)
- OpenJDK 6 IcedTea6 1.12.6 6b27-1.12.6-1~deb7u1 OpenJDK Zero VM (build 20.0-b12, mixed mode)
- OpenJDK 6 IcedTea6 1.12.6 6b27-1.12.6-1~deb7u1 JamVM (build 1.6.0-devel, inline-threaded interpreter with stack-caching)
- OpenJDK IcedTea6 1.12.6 6b27-1.12.6-1~deb7u1 CACAO (build 1.6.0+r68fe50ac34ec, compiled mode)

The OpenJDK packages were pulled from the Debian Wheezy distribution.
It appears the bulk of performance work on OpenJDK/Arm still revolves around the OpenJDK 6 platform, even though Java 7 was released over two years ago (and Java 8 is coming soon).  Regardless, Java SE still outperforms most OpenJDK tests by a wide margin, and perhaps more importantly appears to be the much more reliable platform, considering the number of tests that failed with the OpenJDK variants.  As demonstrated in previous benchmark results, the older armel OpenJDK VMs appear to be more stable than the armhf versions tested here.  Considering that the stated direction of the major Linux distributions is to migrate towards the armhf binary standard, this is a bit eye-opening.

As always, comments are welcome.



Compact Profiles Demonstrated

Following up on an article introducing compact profiles, the video that follows demonstrates how this new feature in the upcoming Java 8 release can be utilized.  The video:

- Describes the compact profile feature and the rationale for its creation.
- Shows how to use the new jrecreate utility to generate compact profiles that can be readily deployed.
- Demonstrates that even the smallest of profiles (less than 11MB) is robust enough to support very popular and important software frameworks like OSGi.

The software demonstrated is in early access.  For those interested in trying it out before the formal release of Java 8, there are two options:

- Members of the Oracle Partner Network (OPN) with a gold membership or higher can download the early access Java 8 binaries of Java SE-Embedded shown here.  For those not at this level, it may still be possible to get early access software, but it will require a qualification process beforehand.
- It's not as intimidating as it sounds: you can pull down the source code for OpenJDK 8 and build it yourself.  By default, compact profiles are not built, but this forum post shows you how.  The reference platform for this software is Linux/x86.  Functionally, the generated compact profiles will contain the pared-down modules for each compact profile, but you'll find the footprint for each to be much larger than the ones demonstrated in this video, as none of the Java SE-Embedded space optimizations are performed by default.

Without any premium privileges on YouTube, the maximum allowed length of a video is 15 minutes.  There's actually lots more to talk about with compact profiles, including enhancements to Java tools and utilities (javac, jar, jdeps, and the java command itself) that have incorporated intelligence for dealing with profiles.  Hmm.  Maybe there's an opportunity for a Compact Profiles Demonstrated Part II?



An Introduction to Java 8 Compact Profiles

Java SE is a very impressive platform indeed, but with all that functionality comes a large and ever-increasing footprint.  It stands to reason, then, that one of the more frequent requests from the community has been the desire to deploy only those components required for a particular application instead of the entire Java SE runtime environment.  Referred to as subsetting, the benefits of such a concept would seem to be many:

- A smaller Java environment would require less compute resources, thus opening up a new domain of devices previously thought to be too humble for Java.
- A smaller runtime environment could be better optimized for performance and start up time.
- Elimination of unused code is always a good idea from a security perspective.
- If the environment could be pared down significantly, there may be tremendous benefit to bundling runtimes with each individual Java application.  These bundled applications could be downloaded more quickly.

Despite these perceived advantages, the platform stewards (Sun, then Oracle) have been steadfast in their resistance to subsetting.  The rationale for such a stance is quite simple: there was sincere concern that the Java SE platform would fragment.  Agree or disagree, the Java SE standard has remained remarkably intact over time.  If you need any further evidence of this assertion, compare the state of Java SE to that of Java ME, particularly in the mobile telephony arena.  Better still, look how quickly Android has spawned countless variants in its brief lifespan.

Nonetheless, a formal effort has been underway with the stated goal of providing a much more modular Java platform.  Called Project Jigsaw, when complete, Java SE will be composed of a set of finer-grained modules and will include tools to enable developers to identify and isolate only those modules needed for their application.  However, implementing this massive internal change while maintaining compatibility has proven to be a considerable challenge.
Consequently, full implementation of the modular Java platform has been delayed until Java 9.  Understanding that Java 9 is quite a ways off, an interim solution will be available for Java 8, called Compact Profiles.  Rather than specifying a complete module system, Java 8 will define subset profiles of the Java SE platform specification that developers can use to deploy.  At the current time three compact profiles have been defined, and they have been assigned the creative names compact1, compact2, and compact3.

The lists that follow show the packages that comprise each of the profiles.  Each successive profile is a superset of its predecessor.  That is to say, the compact2 profile contains all of the packages in compact1 plus those listed under compact2 below.  Likewise, compact3 contains all of compact2's packages plus the ones listed under compact3.

compact1:
java.io, java.lang, java.lang.annotation, java.lang.invoke, java.lang.ref, java.lang.reflect, java.math, java.net, java.nio, java.nio.channels, java.nio.channels.spi, java.nio.charset, java.nio.charset.spi, java.nio.file, java.nio.file.attribute, java.nio.file.spi, java.security, java.security.cert, java.security.interfaces, java.security.spec, java.text, java.text.spi, java.util, java.util.concurrent, java.util.concurrent.atomic, java.util.concurrent.locks, java.util.jar, java.util.logging, java.util.regex, java.util.spi, java.util.zip, javax.crypto, javax.crypto.interfaces, javax.crypto.spec, javax.net, javax.net.ssl, javax.security.auth, javax.security.auth.callback, javax.security.auth.login, javax.security.auth.spi, javax.security.auth.x500, javax.security.cert

compact2 (adds to compact1):
java.rmi, java.rmi.activation, java.rmi.registry, java.rmi.server, java.sql, javax.rmi.ssl, javax.sql, javax.transaction, javax.transaction.xa, javax.xml, javax.xml.datatype, javax.xml.namespace, javax.xml.parsers, javax.xml.stream, javax.xml.stream.events, javax.xml.stream.util, javax.xml.transform, javax.xml.transform.dom, javax.xml.transform.sax, javax.xml.transform.stax, javax.xml.transform.stream, javax.xml.validation, javax.xml.xpath, org.w3c.dom, org.w3c.dom.bootstrap, org.w3c.dom.events, org.w3c.dom.ls, org.xml.sax, org.xml.sax.ext, org.xml.sax.helpers

compact3 (adds to compact2):
java.lang.instrument, java.lang.management, java.security.acl, java.util.prefs, javax.annotation.processing, javax.lang.model, javax.lang.model.element, javax.lang.model.type, javax.lang.model.util, javax.management, javax.management.loading, javax.management.modelbean, javax.management.monitor, javax.management.openmbean, javax.management.relation, javax.management.remote, javax.management.remote.rmi, javax.management.timer, javax.naming, javax.naming.directory, javax.naming.event, javax.naming.ldap, javax.naming.spi, javax.script, javax.security.auth.kerberos, javax.security.sasl, javax.sql.rowset, javax.sql.rowset.serial, javax.sql.rowset.spi, javax.tools, javax.xml.crypto, javax.xml.crypto.dom, javax.xml.crypto.dsig, javax.xml.crypto.dsig.dom, javax.xml.crypto.dsig.keyinfo, javax.xml.crypto.dsig.spec, org.ietf.jgss

You may ask what savings can be realized by using compact profiles.  As Java 8 is in its pre-release stage, numbers will change over time, but let's take a look at a snapshot early access build of Java SE-Embedded 8 for ARMv5/Linux.  A reasonably configured compact1 profile comes in at less than 14MB.  compact2 is about 18MB, and compact3 is in the neighborhood of 21MB.  For reference, the latest Java 7u21 SE Embedded ARMv5/Linux environment requires 45MB.

So at less than one-third the original size of the already space-optimized Java SE-Embedded release, you have a very capable runtime environment.  If you need the additional functionality provided by the compact2 and compact3 profiles or even the full VM, you have the option of deploying your application with them instead.  In the next installment, we'll look at Compact Profiles in a bit more detail.
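To get a feel for what fits in the smallest profile, here is a toy program (purely illustrative) that touches only compact1 packages.  With a JDK 8 compiler it could be built with javac -profile compact1, which would reject any stray reference to, say, java.sql from compact2:

```java
// A toy service restricted to compact1 packages (java.io, java.util.logging,
// java.util.zip).  With JDK 8 tooling it could be compiled via:
//   javac -profile compact1 Compact1Demo.java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.logging.Logger;
import java.util.zip.GZIPOutputStream;

public class Compact1Demo {
    private static final Logger LOG = Logger.getLogger(Compact1Demo.class.getName());

    // Compresses a payload using java.util.zip, a compact1 package.
    static byte[] gzip(byte[] payload) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(payload);
        }
        return bos.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] out = gzip("hello compact1".getBytes("UTF-8"));
        LOG.info("compressed to " + out.length + " bytes");
    }
}
```

The class name and logic here are hypothetical; the point is simply that logging, I/O and compression are all available in the smallest profile.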



Is it armhf or armel?

Arm processors come in all makes and sizes, a certain percentage of which address a market where cost, footprint and power requirements are at a premium.  In this space, the inclusion of even a floating point unit would be considered an unnecessary luxury.  To perform floating point operations with these processors, software emulation is required.  Higher-end Arm processors come bundled with additional capability that enables hardware execution of floating point operations.  The difference between these two architectures gave rise to two separate Embedded Application Binary Interfaces, or EABIs, for ARM: soft float and VFP (Vector Floating Point).  Although there is forward compatibility between soft and hard float, there is no backward compatibility.  In fact, when it comes to providing binaries for Java SE Embedded for Arm, Oracle provides two separate options: a soft float binary and a VFP binary.  In the Linux community, releases built upon both of these EABIs are referred to as armel-based distributions.

Enter armhf.  Although a big step up in performance, the VFP EABI utilizes less-than-optimal argument passing when floating point operations take place.  In this scenario, floating point arguments must first be passed through integer registers prior to executing in the floating point unit.  A new EABI, referred to as armhf, optimizes the calling convention for floating point operations by passing arguments directly into floating point registers.  It furthermore includes a more efficient system call convention.  The end result is that applications compiled to the armhf standard should demonstrate modest performance improvement in some cases, and significant improvement for floating point intensive applications.  Alas, armhf represents yet another binary-incompatible standard, but one that has already gained considerable traction in the community.  Although still relatively early, the transition from armel to armhf is underway.
In fact, Ubuntu has already announced that future releases will only be built to the armhf standard, effectively obsoleting armel.  As mentioned in Henrik Stahl's blog, an armhf version of Java SE Embedded is in the works, and we have already made available an armhf-based Developer Preview of JDK 8 with JavaFX.

In the interim, we will have to deal with the incompatibilities between armel and armhf.  Most recently we've seen a rash of failed attempts to run the ARMv7 VFP Java SE Embedded binary on top of an armhf-based Linux distro.  During diagnosis, the question becomes: how can I determine whether my Linux distribution is based on armel or armhf?  It turns out this is not as straightforward as one might think.  Aside from experience and anecdotal evidence, one possible way to ascertain whether you're running on armel or armhf is to run the following obscure command:

$ readelf -A /proc/self/exe | grep Tag_ABI_VFP_args

If the Tag_ABI_VFP_args tag is found, then you're running on an armhf system.  If nothing is returned, then it's armel.  To show you an example, here's what happens on a Raspberry Pi running the Raspbian distribution:

pi@raspberrypi:~$ readelf -A /proc/self/exe | grep Tag_ABI_VFP_args
  Tag_ABI_VFP_args: VFP registers

This indicates an armhf distro, which in fact is what Raspbian is.  On the original soft-float Debian Wheezy distribution, here's what happens:

pi@raspberrypi:~$ readelf -A /proc/self/exe | grep Tag_ABI_VFP_args

Nothing returned indicates that this is indeed armel.  Many thanks to the folks participating in this Raspberry Pi forum topic for providing this suggestion.
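For scripting this check, the detection rule reduces to a one-line string test.  The sketch below (an illustrative helper, not a replacement for running readelf yourself) classifies a captured readelf output using the two sample results shown above:

```java
// Classifies a Linux/Arm userland as armhf or armel from the output of:
//   readelf -A /proc/self/exe | grep Tag_ABI_VFP_args
// Any Tag_ABI_VFP_args line means armhf; an empty result means armel.
public class AbiCheck {
    static String classify(String readelfOutput) {
        return readelfOutput.contains("Tag_ABI_VFP_args") ? "armhf" : "armel";
    }

    public static void main(String[] args) {
        // Sample outputs from the two Raspberry Pi distributions discussed above.
        System.out.println(classify("  Tag_ABI_VFP_args: VFP registers")); // armhf (Raspbian)
        System.out.println(classify(""));                                  // armel (soft-float Wheezy)
    }
}
```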



Source Code for JavaFX Scoreboard Now Available

For the last few years, many of the JavaFX-related blog posts found here have made reference to all or parts of a Scoreboard application written in JavaFX.  For example, the entry prior to this contains a video demonstrating how this Scoreboard application can be run on an embedded device such as a Raspberry Pi and displayed on an ordinary flat screen TV.  Alongside those posts, some have asked for access to the source code for this application.  I've always felt uncomfortable releasing the code, not under the delusion that it has any commercial value, but rather because, let us say, it was not developed with the strictest software engineering standards in mind.  Having gotten over those insecurities (not really), I've decided to place it in a java.net project called javafx-scoreboard.

If you are interested in gaining access to this code, you'll need to do the following:

- Register on http://home.java.net/projects to get a java.net account.  It's free and painless.  If you already have an account, terrific! You can skip this step.
- Send an email to me at james.connors@oracle.com asking to be added to the javafx-scoreboard project.  Include your java.net username in the email.

Once you're added to the project, you'll have download access to the contents of the project.  For grins, here's a document that gives an overview of the Scoreboard application.  Cheers.



A Raspberry Pi / JavaFX Electronic Scoreboard Application

As evidenced at the recently completed JavaOne 2012 conference, community excitement towards the Raspberry Pi and its potential as a Java development and deployment platform was readily palpable.  Fast forward three months: Oracle has announced the availability of a JDK 8 (with JavaFX) for Arm Early Access Developer Preview, where the reference platform for this release is none other than the Raspberry Pi.

What makes this especially interesting to me is the addition of JavaFX to the Java SE-Embedded 8 platform.  It turns out that at $35 US, the (not so) humble Raspberry Pi has a very capable graphics processor, opening up a Pandora's box of graphics applications that could be applied to this beloved device.  As a first step in becoming familiar with just how this works, I decided to dust off a two-year-old JavaFX scoreboard application, originally written for a Windows laptop, and see how it would run on the Pi.  Lo and behold, the application runs unmodified (without even a recompile).

The video that follows shows how an ordinary flat screen TV can be converted into a full screen electronic scoreboard driven by a Raspberry Pi.  The requirements for such a solution are incredibly straightforward: (1) the TV needs access to a power receptacle and (2) it must be within range of a WiFi network in order to receive scoreboard update packets.  The device is so compact and miserly from a power perspective that we velcro the Pi to the back of the TV and get our power from the TV's USB port.  If you can spare a few moments, it just might be worth your while to take a look.



Sprinkle Some Magik on that Java Virtual Machine

GE Energy, through its Smallworld subsidiary, has been providing geospatial software solutions to the utility and telco markets for over 20 years.  One of the fundamental building blocks of their technology is a dynamically-typed object-oriented programming language called Magik.  Like Java, Magik source code is compiled down to bytecodes that run on a virtual machine -- in this case the Magik Virtual Machine.

Throughout the years, GE has invested considerable engineering talent in the support and maintenance of this virtual machine.  At the same time, vast energy and resources have been invested in the Java Virtual Machine.  The question for GE has been whether to continue to make that investment on its own or to leverage the massive effort provided by the Java community.  Utilizing the Java Virtual Machine instead of maintaining its own virtual machine would give GE more opportunity to focus on application solutions.

At last count, there are dozens, perhaps hundreds, of examples of programming languages that have been hosted atop the Java Virtual Machine.  Prior to the release of Java 7, that effort, although certainly possible, was generally less than optimal for languages like Magik because of their dynamic nature.  Java, as a statically typed language, had little use for this capability.  In the quest to be a more universal virtual machine, Java 7, via JSR-292, introduced a new bytecode called invokedynamic.  In short, invokedynamic affords the more flexible method call mechanism needed by dynamic languages like Magik.

With this new capability, GE Energy has succeeded in hosting their Magik environment on top of the Java Virtual Machine.  So you may ask, why would GE wish to do such a thing?  The benefits are many:

- Competitors to GE Energy claimed that the Magik environment was proprietary.  By utilizing the Java Virtual Machine, that argument gets put to bed.
- JVM development is done in open source, where contributions are made worldwide by all types of organizations and individuals.
- The unprecedented wealth of class libraries and applications written for the Java platform are now opened up to the Magik/JVM platform as first-class citizens.  In addition, the Magik/JVM solution vastly increases the developer pool to include the 9 million Java developers -- the largest developer community on the planet.
- Applications running on the JVM showed substantial performance gains, in some cases as much as a 5x speed-up over the original Magik platform.
- Legacy Magik applications can still run on the original platform.  They can be seamlessly migrated to run on the JVM by simply recompiling the source code.
- GE can now leverage the huge Java community.  Undeniably the best virtual machine ever created, hundreds if not thousands of world-class developers continually improve, poke, prod and scrutinize all aspects of the Java platform.  As enhancements are made, GE automatically gains access to them.
- As Magik has little in the way of support for multi-threading, GE will benefit from current and future Java offerings (e.g. lambda expressions) that aim to further facilitate multi-core/multi-threaded application development.
- As the JVM is available for many more platforms, it broadens the reach of Magik, including the potential to run on a class of devices never envisioned just a few short years ago.  For example, Java SE compatible runtime environments are available for popular embedded ARM/Intel/PowerPC configurations that could theoretically host this software too.

As compared to other JVM language projects, the Magik integration differs in that it represents a serious commercial entity betting a sizable part of its business on the success of this effort.  Expect to see announcements not only from General Electric, but other organizations as they realize the benefits of utilizing the Java Virtual Machine.
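invokedynamic itself is emitted by language compilers rather than written by hand, but the java.lang.invoke API that JSR-292 introduced alongside it can be sketched directly.  The example below (Greeter and callByName are purely illustrative, not GE's implementation) resolves a method by name at run time, the kind of late binding a dynamic-language runtime such as Magik's performs on every call:

```java
// Sketch of run-time method resolution with JSR-292's java.lang.invoke API.
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

public class DynamicCallDemo {
    public static class Greeter {
        public String greet(String name) { return "Hello, " + name; }
    }

    // Looks up a one-String-arg virtual method by name at run time and
    // invokes it -- static typing deferred until the call is made.
    static String callByName(Object receiver, String methodName, String arg) throws Throwable {
        MethodType type = MethodType.methodType(String.class, String.class);
        MethodHandle mh = MethodHandles.lookup()
                .findVirtual(receiver.getClass(), methodName, type);
        return (String) mh.invoke(receiver, arg);
    }

    public static void main(String[] args) throws Throwable {
        System.out.println(callByName(new Greeter(), "greet", "Magik"));
    }
}
```

A dynamic-language runtime adds caching on top of this (via invokedynamic call sites), so the lookup cost is paid once rather than on every invocation.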



Raspberry Pi and Java SE: A Platform for the Masses

One of the more exciting developments in the embedded systems world has been the announcement and availability of the Raspberry Pi, a very capable computer that is no bigger than a credit card.  At $35 US, initial demand for the device was so significant that very long back orders quickly ensued.  After months of patiently waiting, mine finally arrived.  Those initial growing pains appear to have been fixed, so availability now should be much more reasonable.

At a very high level, here are some of the important specs:

- Broadcom BCM2835 System on a chip (SoC)
- ARM1176JZFS, with floating point, running at 700MHz
- Videocore 4 GPU capable of BluRay quality playback
- 256MB RAM
- 2 USB ports and Ethernet
- Boots from SD card
- Linux distributions (e.g. Debian) available

So what's taking place with respect to the Java platform and the Raspberry Pi?

A Java SE Embedded binary suitable for the Raspberry Pi is available for download (Arm v6/7) here.  Note, this is based on the armel architecture, a variety of Arm designed to support floating point through a compatibility library that operates on more platforms, but can hamper performance.  In order to use this Java SE binary, select the available Debian distribution for your Raspberry Pi.  The more recent Raspbian distribution is based on the armhf (hard float) architecture, which provides for more efficient hardware-based floating point operations.  However, armhf is not binary compatible with armel.  As of the writing of this blog, Java SE Embedded binaries are not yet publicly available for the armhf-based Raspbian distro, but as mentioned in Henrik Stahl's blog, an armhf release is in the works.

As demonstrated at the just-completed JavaOne 2012 San Francisco event, the graphics processing unit inside the Raspberry Pi is very capable indeed, and makes for an excellent candidate for JavaFX.  As such, plans also call for a Pi-optimized version of JavaFX in a future release too.
A thriving community around the Raspberry Pi has developed at light speed, and as evidenced by the packed attendance at Pi-specific sessions at Java One 2012, the interest in Java for this platform is following suit. So stay tuned for more developments...



Java One 2012 Java SE Embedded Hands On Lab Returns!

After successful runs at JavaOne 2011 San Francisco and Tokyo, the Java SE Embedded Hands-On Lab returns for JavaOne 2012.  If you're attending the JavaOne event in San Francisco (Sept 30 - Oct 4), please consider signing up for this session.  As an added incentive, we will be raffling off a couple of the Plug Computer devices that you'll gain experience with during this lab.  Seating is limited to 100 students, so register early.  Here's an overview:

This hands-on lab aims to show that developers already familiar with the Java develop/debug/deploy lifecycle can apply those same skills to develop Java applications, using Java SE Embedded, on embedded devices.  The participants in the lab will:

- Have their own individual embedded device so they can gain valuable hands-on experience
- Turn their embedded device into a web container, using off-the-shelf software
- Learn how to deploy embedded Java applications, developed with an IDE, onto their device
- Learn how embedded Java applications can be remotely debugged from their desktop IDE
- Learn how to remotely monitor and manage embedded Java applications from their desktop

The course description can be found here: HOL 7889: Java SE Embedded Development Made Easy

In addition, 2012 marks the first year that we will have a venue specifically tailored for the Java embedded community.  Entitled Java Embedded @ JavaOne, this event takes place during the JavaOne/OpenWorld week in San Francisco on October 3-4.  To quote from the Java Embedded @ JavaOne URL:

The conference will feature dedicated business-focused content from Oracle discussing how Java Embedded delivers a secure, optimized environment ideal for multiple network-based devices, as well as meaningful industry-focused sessions from peers who are already successfully utilizing Java Embedded.

So if you want to participate in what many consider to be the next big trend in computing -- the internet of things -- come join us 10/3-4 in San Francisco.



Take Two: Comparing JVMs on ARM/Linux

Although the intent of the previous article, entitled Comparing JVMs on ARM/Linux, was to introduce and highlight the availability of the HotSpot server compiler (referred to as c2) for Java SE-Embedded ARM v7, it seems, based on feedback, that everyone was more interested in the OpenJDK comparisons to Java SE-E.  But there were two main concerns:

- The fact that the previous article compared Java SE-E 7 against OpenJDK 6 might be construed as an unlevel playing field, because version 7 is newer and therefore potentially more optimized.
- The generic compiler settings chosen to build the OpenJDK implementations did not put those versions in a particularly favorable light.

With those considerations in mind, we'll institute the following changes for this round of benchmarking:

- In order to help alleviate an additional concern that there is some sort of benchmark bias, we'll use a different suite, called DaCapo.  Funded and supported by many prestigious organizations, DaCapo's aim is to benchmark real world applications.  Further information about DaCapo can be found at http://dacapobench.org.
- At the suggestion of Xerxes Ranby, who has been a great help through this entire exercise, a newer Linux distribution will be used to assure that the OpenJDK implementations were built with more optimal compiler settings.  The Linux distribution in this instance is Ubuntu 11.10 Oneiric Ocelot.
- Having experienced difficulties getting Ubuntu 11.10 to run on the original D2Plug ARMv7 platform, for these benchmarks we'll switch to an embedded system that has a supported Ubuntu 11.10 release.  That platform is the Freescale i.MX53 Quick Start Board.  It has an ARMv7 Cortex-A8 processor running at 1GHz with 1GB RAM.
We'll limit comparisons to 4 JVM implementations:

- Java SE-E 7 Update 2 c1 compiler (default)
- Java SE-E 6 Update 30 (c1 compiler is the only option)
- OpenJDK 6 IcedTea6 1.11pre 6b23~pre11-0ubuntu1.11.10.2 CACAO build 1.1.0pre2
- OpenJDK 6 IcedTea6 1.11pre 6b23~pre11-0ubuntu1.11.10.2 JamVM build-1.6.0-devel

Certain OpenJDK implementations were eliminated from this round of testing for the simple reason that their performance was not competitive.  The Java SE 7u2 c2 compiler was also removed because, although quite respectable, it did not perform as well as the c1 compilers.  Recall that c2 works optimally in long-lived situations; many of these benchmarks completed in a relatively short period of time.  To get a feel for where c2 shines, take a look at the first chart in this blog.

The first chart that follows includes performance of all benchmark runs on all platforms.  Later on we'll look more at individual tests.  In all runs, smaller means faster.  The DaCapo aficionado may notice that only 10 of the 14 DaCapo tests for this version were executed.  The reason for this is that these 10 tests represent the only ones successfully completed by all 4 JVMs.  Only Java SE-E 6u30 could successfully run all of the tests.  Both OpenJDK instances not only failed to complete certain tests, but also experienced VM aborts.

One of the first observations that can be made between Java SE-E 6 and 7 is that, for all intents and purposes, they are on par with regards to performance.  While it is a fact that successive Java SE releases add additional optimizations, it is also true that Java SE 7 introduces additional complexity to the Java platform, thus balancing out any potential performance gains at this point.  We are still early into Java SE 7; we would expect further performance enhancements for Java SE-E 7 in future updates.

In comparing Java SE-E to OpenJDK performance, among both OpenJDK VMs, CACAO results are respectable in 4 of the 10 tests.
The charts that follow show the individual results of those four tests.  Both Java SE-E versions win every test, outperforming CACAO in the range of 9% to 55%.  For the remaining 6 tests, Java SE-E significantly outperforms CACAO in the range of 114% to 311%.

So it looks like OpenJDK results are mixed for this round of benchmarks.  In some cases, performance looks to have improved.  But in a majority of instances, OpenJDK still lags behind Java SE-Embedded considerably.  Time to put on my asbestos suit.  Let the flames begin...



Comparing JVMs on ARM/Linux

For quite some time, Java Standard Edition releases have included both client and server bytecode compilers (referred to as c1 and c2, respectively), whereas Java SE-Embedded binaries only contained the client c1 compiler.  The rationale for excluding c2 stems from the fact that (1) eliminating optional components saves space, where in the embedded world space is at a premium, and (2) embedded platforms were not given serious consideration for handling server-like workloads.  But all that is about to change.  In anticipation of the ARM processor's legitimate entrance into the server market (see Calxeda), Oracle has, with the latest update of Java SE-Embedded (7u2), made the c2 compiler available for ARMv7/Linux platforms, further enhancing performance for a large class of traditional server applications.

These two compilers go about their business in different ways.  Of the two, c1 is a lighter optimizing compiler, but has faster start up.  It delivers excellent performance and, as the default bytecode compiler, works extremely well in almost all situations.  Compared to c1, c2 is the more aggressive optimizer and is suited for long-lived Java processes.  Although slower at start up, it can be shown to achieve better performance over time.  As a case in point, take a look at the graph that follows.

One of the most popular Java-based applications, Apache Tomcat, was installed on an ARMv7/Linux device.  The chart shows the relative performance, as defined by mean HTTP request time, of the Tomcat server run with the c1 client compiler (red line) and the c2 server compiler (blue line).  The HTTP request load was generated by an external system on a dedicated network utilizing the ab (Apache Bench) program.  The closer the response time is to zero, the better.  You can see that for the initial run of 25,000 HTTP requests, the c1 compiler produces faster average response times than c2.
It takes time for the c2 compiler to "warm up", but once the threshold of 50,000 or so requests is met, c2's performance is superior to c1's.  At 250,000 HTTP requests, mean response time for the c2-based Tomcat server instance is 14% faster than its c1 counterpart.  It is important to realize that c2 assumes, and indeed requires, more resources (i.e. memory).  Our sample device, with 1GB RAM, was more than adequate for these rounds of tests.  Of course your mileage may vary, but if you have the right hardware and the right workload, give c2 a further look.

While discussing these results with a few of my compadres, it was suggested that OpenJDK and some of its variants be included in this comparison.  The following chart shows mean HTTP request times for 6 different configurations:

- Java SE Embedded 7u2 c1 Client Compiler
- Java SE Embedded 7u2 c2 Server Compiler
- OpenJDK Zero VM (build 20.0-b12, mixed mode) OpenJDK 1.6.0_24-b24 (IcedTea6 1.12pre)
- JamVM (build 1.6.0-devel, inline-threaded interpreter with stack-caching) OpenJDK 1.6.0_24-b24 (IcedTea6 1.12pre)
- CACAO (build 1.1.0pre2, compiled mode) OpenJDK 1.6.0_24-b24 (IcedTea6 1.12pre)
- Interpreter only: OpenJDK Zero VM (build 20.0-b12, interpreted mode) OpenJDK 1.6.0_24-b24 (IcedTea6 1.12pre)

Results remain pretty much unchanged, so only the first 4 runs (25K-100K requests) are shown.  As can be seen, the Java SE-E VMs are on the order of 3-5x faster than their OpenJDK counterparts, irrespective of the bytecode compiler chosen.  One additional promising VM, called Shark, was not included in these tests because, although it built from source successfully, it failed to run Apache Tomcat.  In Shark's defense, the ARM version may still be in development (i.e. non-stable) mode.

Creating a really fast virtual machine is hard work and takes a lot of time to perfect.  Considering the resources expended by Oracle (and formerly Sun), it is no surprise that the commercial Java SE VMs are excellent performers.
But the extent to which they outperform their OpenJDK counterparts is surprising.  It would be no shock if someone in the know could demonstrate better OpenJDK results.  But herein lies one considerable problem:  it is an exercise in patience and perseverance just to locate and build a proper OpenJDK platform suitable for a particular CPU/Linux configuration.  No offense would be taken if corrections were presented, and a straightforward mechanism to support these OpenJDK builds were provided.
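The warm-up behavior described above can be observed with nothing more than the JDK. The following is a minimal sketch, not the article's Tomcat/ab benchmark: it times successive batches of the same workload, and per-batch times typically fall as the JIT compiles the hot method. WarmupDemo and work() are hypothetical stand-ins for the servlet request handling measured in the charts.

```java
public class WarmupDemo {

    // Hypothetical "hot method" with a small, deterministic result,
    // standing in for per-request server work.
    static long work(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) {
            sum += (i ^ (i << 3)) % 7;
        }
        return sum;
    }

    public static void main(String[] args) {
        // Time five batches of identical work; later batches usually run
        // faster once the JIT has compiled work().
        for (int batch = 1; batch <= 5; batch++) {
            long start = System.nanoTime();
            long result = 0;
            for (int i = 0; i < 50_000; i++) {
                result += work(100);
            }
            long ms = (System.nanoTime() - start) / 1_000_000;
            System.out.println("batch " + batch + ": " + ms + " ms");
        }
        // Where both compilers are shipped, run with -client vs -server
        // to compare c1 and c2 warm-up behavior.
    }
}
```

Absolute numbers will differ wildly by hardware and VM; it is the batch-to-batch trend, not the values, that mirrors the c1/c2 warm-up story above.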


Sun

JavaOne 2011 Hands-on Lab for Java SE Embedded

Now that the dust has settled, sincere thanks go out to my compadres (you know who you are) for helping make the JavaOne 2011 Java SE Embedded Hands-on Lab such a success.  In fact it was so well received, our peers in Asia are already planning on replicating the effort for JavaOne Tokyo in April 2012.  In addition to the Tokyo event, we hope to provide future opportunities for this venue elsewhere.  In the interim, we'd seriously consider hosting this lab, albeit on a smaller scale (JavaOne had 105 networked devices and workstations), for interested customers.  To give you a feel for the lab contents, here's a synopsis:

JavaOne 2011 Hands-on Lab 24642: Java SE Embedded Development Made Easy

This hands-on lab aims to show that developers already familiar with the Java develop/debug/deploy lifecycle can apply those same skills to develop Java applications, using Java SE Embedded, on embedded devices.  Each participant in the lab will:

- Have their own individual embedded device to gain valuable hands-on experience
- Turn their embedded device into a Java Servlet container
- Learn how to deploy embedded Java applications, developed with the NetBeans IDE, onto their device
- Learn how embedded Java applications can be remotely debugged from their desktop NetBeans IDE
- Learn how to remotely monitor and manage embedded Java applications from their desktop

If the logistics for setting up a lab prove to be a bit too much, as an alternative, we've given quite a few live presentations/demonstrations with similar flair.  So please, by all means, contact me at james.connors@oracle.com if you're interested in learning more.  For those of you who run developer user groups, most notably Java User Groups, and are looking for a speaker at your next meeting, please consider us.  We will not disappoint.


Serial Port Communication for Java SE Embedded

The need to communicate with devices connected to serial ports is a common application requirement.  Falling outside the purview of the Java SE platform, serial and parallel port communication has been addressed with a project called RXTX.  (In the past, you may have known this as javacomm.)  With RXTX, Java developers access serial ports through the RXTXcomm.jar file.  Alongside this jar file, an underlying native layer must be provided to interface with the operating system's UART ports.  For the usual suspects (Windows, Linux/x86, MacOS, Solaris/Sparc), pre-compiled binaries are readily available.  To host this on an alternative platform, some (hopefully minimal) amount of work is required.  Here's hoping the following notes/observations might aid in helping you build RXTX for an embedded device utilizing one of our Java SE Embedded binaries.  The device used for this particular implementation is my current favorite: the Plug Computer.

Notes on Getting RXTX 2.1-7-r2 Working on a Plug Computer

1. At this early juncture with Java 7, be wary of mixing Java 7 with code from older versions of Java.  The class files generated by the JDK 7 javac compiler contain an updated version byte with a value that results in older (Java 6 and before) JVMs refusing to load these classes.

2. The RXTX download location http://rxtx.qbang.org/wiki/index.php/Download has binaries for many platforms, including ARM variants, but none that worked for the Plug Computer, so one had to be built from source.

3. Using the native GCC for the Plug Computer and the RXTX source, binaries (native shared objects) were compiled for the armv5tel-unknown-linux-gnu platform.

4. The RXTX "stable" source code found at the aforementioned site is based on version rxtx 2.1-7r2.  This code appears to be pretty long in the tooth, in that it has no knowledge of Java 6.  Some changes need to be made to accommodate a JDK 6 environment; without these modifications, RXTX will not build with JDK 6.

SUGGESTED FIX (most elegant, not recommended): Edit the configure.in file in the source directory and look for the following:

    case $JAVA_VERSION in
    1.2*|1.3*|1.4*|1.5*)

and change the second line to:

    1.2*|1.3*|1.4*|1.5*|1.6*)

Upon modification, the autogen.sh script found in the rxtx source directory must be re-run to recreate the ./configure script.  Unfortunately, this requires loading the autoconf, automake and libtool packages (plus dependencies), and ended up resulting in libtool incompatibilities when running the resultant ./configure script.

RECOMMENDED FIX: Instead, edit ./configure and search for the occurrences (there are more than one) of

    case $JAVA_VERSION in
    1.2*|1.3*|1.4*|1.5*)

and change the second line to:

    1.2*|1.3*|1.4*|1.5*|1.6*)

Run './configure', then 'make' to generate the RXTXcomm.jar and platform-specific .so shared object libraries.

5. You may also notice in the output of the make that there were compilation errors for source files which failed to find the meaning of "UTS_RELEASE".  This results in some of the shared object files not being created.  These pertain to the non-serial aspects of RXTX.  As we were only interested in librxtxSerial.so, this was no problem for us.

6. Once built, move the following files into the following directories:

    # cd rxtx-2.1-7-r2/
    # cp RXTXcomm.jar $JAVA_HOME/lib/ext
    # cd armv5tel-unknown-linux-gnu/.libs/
    # cp librxtxSerial-2.1-7.so $JAVA_HOME/lib/arm
    # cd $JAVA_HOME/lib/arm
    # ln -s librxtxSerial-2.1-7.so librxtxSerial.so

Now Java applications which utilize RXTX should run without any java command-line additions.  The RXTXcomm.jar file can be downloaded here.  To spare you the effort, a few pre-built versions of librxtxSerial-2.1-7.so are provided at this location:

- librxtxSerial-2.1-7.so for ARMv5 based Plug Computers
- librxtxSerial-2.1-7.so for ARMv6l armel based systems (e.g. original Raspberry Pi Debian distro)
- librxtxSerial-2.1.7.so for ARMv6l armhf (hard float) systems (e.g. Raspberry Pi Raspbian distro)
- librxtxSerial-2.1-7.so for ARMv7l architecture - Many thanks to Daniel Ryan

If you've gone through this exercise on any additional architectures, send them my way and I'll post them here.
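The RECOMMENDED FIX can also be applied mechanically with sed rather than by hand. Below is a minimal sketch, run against a mock fragment under /tmp so it can be tried anywhere; on a real source tree you would point the same sed command at rxtx-2.1-7-r2/configure (both paths here are illustrative).

```shell
# Create a throwaway copy of the relevant configure lines
# (the real file contains several such occurrences).
cat > /tmp/rxtx-configure-demo <<'EOF'
case $JAVA_VERSION in
1.2*|1.3*|1.4*|1.5*)
EOF

# Append 1.6* to every occurrence of the version-check pattern:
sed -i 's/1\.2\*|1\.3\*|1\.4\*|1\.5\*)/1.2*|1.3*|1.4*|1.5*|1.6*)/g' /tmp/rxtx-configure-demo

cat /tmp/rxtx-configure-demo
```

Because the substitution is global (the trailing g), it covers all occurrences in one pass, which is exactly why editing ./configure directly beats regenerating it with autogen.sh.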


Observations in Migrating from JavaFX Script to JavaFX 2.0

Introduction

Having been available for a few years now, there is a decent body of work written for JavaFX using the JavaFX Script language.  With the general availability announcement of JavaFX 2.0 Beta, the natural question arises about converting the legacy code over to the new JavaFX 2.0 platform.  This article reflects on some of the observations encountered while porting source code over from JavaFX Script to the new JavaFX API paradigm.

The Application

The program chosen for migration is an implementation of the Sudoku game and serves as a reference application for the book JavaFX - Developing Rich Internet Applications.  The design of the program can be divided into two major components: (1) a user interface (ideally suited for JavaFX design) and (2) the puzzle generator.  For the context of this article, our primary interest lies in the user interface.  The puzzle generator code was lifted from a sourceforge.net project and is written entirely in Java.  Regardless of which version of the UI we choose (JavaFX Script vs. JavaFX 2.0), no code changes were required for the puzzle generator code.

The original user interface for the JavaFX Sudoku application was written exclusively in JavaFX Script, and as such is a suitable candidate to convert over to the new JavaFX 2.0 model.  However, a few notable points are worth mentioning about this program.  First off, it was written in the JavaFX 1.1 timeframe, where certain capabilities of the JavaFX framework were as yet unavailable.  Citing two examples, this program creates many of its own UI controls from scratch because the built-in controls were yet to be introduced.  In addition, layout of graphical nodes is done in a very manual manner, again because much of the automatic layout capability was in flux at the time.  It is worth considering that this program was written at a time when most of us were just coming up to speed on this technology.
One would think that, having the opportunity to recreate this application anew, it would look a lot different from the current version.

Comparing the Size of the Source Code

An attempt was made to convert each of the original UI JavaFX Script source files (suffixed with .fx) over to a Java counterpart.  Due to language feature differences, there are a small number of source files which exist only in one version or the other.  The table below summarizes the size of each of the source files.

    JavaFX Script source file   Lines   Chars   |  JavaFX 2.0 Java source file     Lines   Chars
    --                          --      --      |  ArrowKey.java                   6       72
    Board.fx                    221     6831    |  Board.java                      205     6508
    BoardNode.fx                446     16054   |  BoardNode.java                  723     29356
    ChooseNumberNode.fx         168     5267    |  ChooseNumberNode.java           302     10235
    CloseButtonNode.fx          115     3408    |  CloseButton.java                99      2883
    --                          --      --      |  ParentWithKeyTraversal.java     111     3276
    --                          --      --      |  FunctionPtr.java                6       80
    --                          --      --      |  Globals.java                    20      554
    Grouping.fx                 8       140     |  --                              --      --
    HowToPlayNode.fx            121     3632    |  HowToPlayNode.java              136     4849
    IconButtonNode.fx           196     5748    |  IconButtonNode.java             183     5865
    Main.fx                     98      3466    |  Main.java                       64      2118
    SliderNode.fx               288     10349   |  SliderNode.java                 350     13048
    Space.fx                    78      1696    |  Space.java                      106     2095
    SpaceNode.fx                227     6703    |  SpaceNode.java                  220     6861
    TraversalHelper.fx          111     3095    |  --                              --      --
    Total                       2,077   79,127  |  Total                           2,531   87,800

A few notes about this table are in order:

- The number of lines in each file was determined by running the Unix 'wc -l' command over each file.
- The number of characters in each file was determined by running the Unix 'ls -l' command over each file.
- The examination of the code could certainly be much more rigorous.  No standard formatting was performed on these files; all comments, however, were deleted.

There was a certain expectation that the new Java version would require more lines of code than the original JavaFX Script version.  As evidenced by a count of the total number of lines, the Java version has about 22% more lines than its FX Script counterpart.
Furthermore, there was an additional expectation that the Java version would be more verbose in terms of the total number of characters.  In fact, the preceding data shows that on average the Java source files contain fewer characters per line than the FX files.  But that's not the whole story.  Upon further examination, the FX Script source files had a disproportionate number of blank characters.  Why?  Because of the nature of how one develops JavaFX Script code.  The object literal dominates FX Script code.  It's not uncommon to see object literals indented halfway across the page, consuming lots of meaningless space characters.

RAM Consumption

Though not the most scientific analysis, memory usage for the application was examined on a Windows Vista system by running the Windows Task Manager and viewing how much memory was being consumed by the Sudoku version in question.  Roughly speaking, the FX Script version, after startup, had a RAM footprint of about 90MB and remained pretty much the same size.  The Java version started out at about 55MB and maintained that size throughout its execution.

What About Binding?

Arguably, the most striking observation about the conversion from JavaFX Script to JavaFX 2.0 concerned the need for data synchronization, or lack thereof.  In JavaFX Script, the primary means to synchronize data is via the bind expression (using the "bind" keyword), and perhaps to a lesser extent its "on replace" cousin.  The bind keyword does not exist in Java, so for JavaFX 2.0 a Data Binding API has been introduced as a replacement.  To give a feel for the difference between the two versions of the Sudoku program, the table that follows indicates how many binds were required for each source file.  For JavaFX Script files, this was ascertained by simply counting the number of occurrences of the bind keyword.  As can be seen, binding had been used frequently in the JavaFX Script version (and this does not take into consideration an additional half dozen or so "on replace" triggers).
The JavaFX 2.0 program achieves the same functionality as the original JavaFX Script version, yet the equivalent of binding was only needed twice throughout the Java version of the source code.

    JavaFX Script source file   Binds   |  JavaFX 2.0 Java source file        "Binds"
    --                          --      |  ArrowKey.java                      0
    Board.fx                    1       |  Board.java                         0
    BoardNode.fx                7       |  BoardNode.java                     0
    ChooseNumberNode.fx         11      |  ChooseNumberNode.java              0
    CloseButtonNode.fx          6       |  CloseButton.java                   0
    --                          --      |  CustomNodeWithKeyTraversal.java    0
    --                          --      |  FunctionPtr.java                   0
    --                          --      |  Globals.java                       0
    Grouping.fx                 0       |  --                                 --
    HowToPlayNode.fx            7       |  HowToPlayNode.java                 0
    IconButtonNode.fx           9       |  IconButtonNode.java                0
    Main.fx                     1       |  Main.java                          0
    Main_Mobile.fx              1       |  --                                 --
    SliderNode.fx               6       |  SliderNode.java                    1
    Space.fx                    0       |  Space.java                         0
    SpaceNode.fx                9       |  SpaceNode.java                     1
    TraversalHelper.fx          0       |  --                                 --
    Total                       58      |  Total                              2

Conclusions

As the JavaFX 2.0 technology is so new, and experience with the platform equally so, it is possible and indeed probable that some of the observations noted in the preceding article may not apply across other attempts at migrating applications.  That being said, this first experience indicates that the migrated Java code will likely be larger, though not extensively so, than the original JavaFX Script source.  Furthermore, although very important, it appears that the requirement for data synchronization via binding may be significantly less with the new platform.
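To give a rough feel for the shift from a bind keyword to a binding API, here is a tiny property-and-listener sketch written against plain JDK classes so it runs without the JavaFX runtime. The real mechanism lives in javafx.beans (properties and bindings); IntProperty, bind() and the other names below are hypothetical stand-ins, not the actual JavaFX 2.0 API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.IntUnaryOperator;

class IntProperty {
    private int value;
    private final List<Runnable> listeners = new ArrayList<>();

    int get() { return value; }

    void set(int v) {
        value = v;
        for (Runnable r : listeners) r.run();  // notify dependents
    }

    // Keep 'target' equal to fn(source) whenever source changes --
    // the API analogue of JavaFX Script's "var half = bind width / 2".
    static void bind(IntProperty target, IntProperty source, IntUnaryOperator fn) {
        target.value = fn.applyAsInt(source.get());
        source.listeners.add(() -> target.set(fn.applyAsInt(source.get())));
    }
}

public class BindSketch {
    public static void main(String[] args) {
        IntProperty width = new IntProperty();
        IntProperty half = new IntProperty();
        IntProperty.bind(half, width, w -> w / 2);
        width.set(300);
        System.out.println(half.get()); // prints 150
    }
}
```

The point of the sketch is the article's conclusion in miniature: with an API, synchronization is an explicit call you opt into where needed, rather than a language feature that tends to get sprinkled everywhere.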


The Unofficial Java SE Embedded SDK

Developing applications for embedded platforms gets simpler all the time, thanks in part to the tremendous advances in microprocessor design and software tools.  And in particular, with the availability of Java SE compatible virtual machines for the popular embedded platforms, development has never been more straightforward. The real beauty behind Java SE Embedded development lies in the fact that you can use your favorite IDE (Eclipse, NetBeans, JDeveloper ...) to create, test and debug code in the identical fashion in which you'd develop a standard desktop or server application.  When the time comes to try it out on a Java SE Embedded capable device, it's just a matter of shipping the bytecodes over to the device and letting them run.  There is no need for complicated emulators, toolchains and cross-compilers.  The exact same bytecodes that ran on your PC run unmodified on the embedded device.

In fact, because all versions of Java SE (embedded or not) share a considerable amount of common code, we have plenty of anecdotal evidence which supports the notion that behavior -- correct or incorrect -- manifests itself identically across platforms.  We refer specifically here to bugs.  Now, no one wants bugs, but believe it or not, our customers like the fact that behavior is consistent across platforms whether it's right or not. "Bug for bug compatibility" has actually become a strong selling point!

Having espoused the virtues of transparently developing off device, many still wish to test and debug on-device regularly as part of their development cycle.  If you're the touchy/feely type, there are ample examples of affordable and supported off-the-shelf devices that could fit the bill for an Unofficial Java SE Embedded SDK.  One such platform is the Plug Computer.

The reference platform for the Plug Computer is supplied by Marvell Technology Group. Manufacturers then license the technology from Marvell to create their own specific implementations.  Two such vendors are GlobalScale and Ionics.  These are incredibly capable devices that include ARM processors in the 1.2GHz to 2.0GHz range, and sport 512MB of RAM and flash.  There is a host of external port and interface options including USB, µUSB, SATA, GbE, SD, WiFi, ZigBee, Z-Wave and, soon, HDMI.  Additionally, several Linux distros are available for these systems too.  The typical cost for a base model is $99, and, perhaps the most disruptive aspect of these systems, they consume on average about 5 watts of power.

Alongside developing in the traditional manner, the ability to step through and examine state on these devices via remote debugging comes as a standard feature with the Java SE-E VM.  Furthermore, you can use the JConsole application from your desktop to remotely monitor performance and resource consumption on the device.

So what would a bill of materials look like for The Unofficial Java SE Embedded SDK?  Pretty simple actually:

- Latest NetBeans Java SE IDE (or any other of your favorite Java IDEs)
- Java SE Embedded ARMv5 Linux - Headless
- Ionics Nimbus 100 or GlobalScale SheevaPlug Dev Kit

That's about it.  Of course, for higher level functionality, you can add additional packages.  For example, Apache runs beautifully here.  Could anyone imagine a large number of these devices acting as a parallel web server?
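The remote debugging and JConsole workflow mentioned above boils down to starting the VM on the device with a couple of extra options. A hedged sketch follows: the port numbers and MyApp.jar are placeholders, and the JMX settings shown disable authentication and SSL, which is only sensible on a closed test network.

```shell
# JDWP agent: lets a desktop IDE (e.g. NetBeans) attach its debugger on port 8000.
JDWP_OPTS='-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8000'

# JMX properties: let desktop JConsole attach on port 9999 (no auth/SSL -- test use only).
JMX_OPTS='-Dcom.sun.management.jmxremote.port=9999 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false'

# On the device (MyApp.jar is a placeholder):
#   java $JDWP_OPTS -jar MyApp.jar        # debuggable
#   java $JMX_OPTS  -jar MyApp.jar        # monitorable
# On the desktop: attach the IDE debugger to <device-ip>:8000,
# or run: jconsole <device-ip>:9999
echo "$JDWP_OPTS"
```

No emulator or cross-toolchain appears anywhere in this picture, which is the whole point of the post.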


Java SE Embedded Refreshed

As embedded processor designs continue their inexorable drive towards ever increasing capability, the natural desire to utilize more robust software platforms, previously reserved for powerful desktop computers and servers, follows suit.  Recognizing this trend, a version of the Java Standard Edition platform, called Java SE Embedded, has been developed to address this growing market.  In addition to bringing all of the benefits of the ubiquitous Java Standard Edition to the most popular embedded processors, static footprint and memory optimizations have been realized.  To get a feel for some of the space savings, check out this article.

Partly due to the turmoil surrounding the Oracle acquisition of Sun, a refresh of the Java SE Embedded binaries took longer than anticipated.  The new versions are now available for download with these supported configurations.  During that time frame, Java SE-E engineers were able to take care of some internal housekeeping which should allow for better synchronization of future releases of Java SE with Java SE-E.  In the past, Java SE-E engineers would take a snapshot of the Java Standard Edition code and incorporate their modifications to create a new release, forking from the standard edition source.  Now, the Java SE-E code is part of the overall Java SE project, such that future Java SE enhancements and bug fixes should be automatically incorporated into the Java SE-E code base.

The refreshed Java SE-E binaries are based on Java SE 6 Update 21, and represent substantial security, performance and quality enhancements.  Benchmark tests show that, for example, simply replacing the previous versions of the Java SE-E virtual machines with the latest binaries produces on average about a 20% performance gain for the same hardware/OS combination.

With a Just-In-Time (JIT) compiler recently introduced to the Android Dalvik virtual machine, we thought it would be an interesting exercise to compare the performance of Java SE-E to Android on identical hardware.  For a full explanation of the results and methodology, check out Bob Vandette's blog on this topic.  To cut to the chase, for the selected benchmarks, Java SE-E outperforms Android by a factor of two.  Improving virtual machine performance is a tedious process that takes time.  The bottom line is this: Java SE virtual machine engineers have been at this for a very long time, and have had the benefit of fifteen years of scrutiny from the computer science community.  It will take considerable time and effort for Android to come close to this capability.  In the interim, Java Virtual Machine performance and quality improvement marches on.

Next up: Take a look at these pictures.  Based upon Marvell's Plug Computer design, these amazing devices pack a 1+GHz ARM processor with 512MB RAM and consume a minuscule 5 watts of power.  They run Java SE-E beautifully.  Combined with an IDE like NetBeans, this makes for an ideal, although unofficial, Java SE Embedded Development Kit.


Why I like the New JavaFX

I have a vested interest in seeing the original JavaFX Script based platform prosper.  As an early adopter of this technology, a good portion of my life these past few years has been spent developing, blogging and even co-authoring a book on the subject.  So when the inevitable demise of JavaFX Script was formally announced, those of us intimately involved did not take it lightly.  Perhaps not unlike yourselves, I viewed the plans to morph JavaFX into a Java API with a bit of skepticism.  The resulting new coding paradigm is unquestionably less stylish than its predecessor and can be downright verbose.  But the new way grows on you.  Having had the privilege to experiment with early versions, I've come to like the new platform.  In fact I like it a lot.  Here's why I think the new JavaFX platform is more attractive:

The community has spoken and it doesn't feel the need for yet another scripting language.  The attempt to lure graphics designers into the Java camp by offering a simplified Rich Internet Application environment never really panned out.  Why?  First, there already exists a wealth of mature, established RIA scripting alternatives.  These address much of what designers need, and JavaFX Script is not sufficiently different.  Second, the clamor to provide RIA capabilities in Java comes from the Java community proper, not the graphics artists.  These developers have a lot invested in Java and are not interested in learning a new language.  What they want is a Java API with RIA capabilities.  By making JavaFX a first class citizen of the Java platform, it goes a long way towards meeting these desires.

The JavaFX API is a more universal solution.  By building an API in Java, the opportunity for developers of other dynamic languages (like JRuby and JavaScript) to access JavaFX has been made much easier.  Moreover, as the trend to host other languages atop the Java Virtual Machine accelerates, these too will profit from this move.

No more mapping between JavaFX Script and Java.  A derivative of Java, one of the touted advantages of JavaFX Script is its ability to seamlessly integrate and leverage the wealth of Java code written already.  Indeed a very important benefit, Java/JavaFX Script integration is mostly straightforward; however, there are subtle differences between the two languages that the developer must take into consideration.  Mapping primitive Java data types to JavaFX basic types can be an issue.  The original JavaFX classes can only extend Java classes that contain a default (no arguments) constructor.  Features familiar to Java programmers, like multidimensional arrays, generics, annotations, and multi-threading, have no real equivalent in JavaFX Script.  Bringing the JavaFX class libraries directly onto the Java platform eliminates all of these concerns.  If you want to use some external Java code, just use it.

Superior development environment.  Attempting to debug JavaFX Script within an Integrated Development Environment is at best a tricky endeavor and at worst a waste of time.  Additionally, NetBeans, and to a lesser extent Eclipse, are the only viable JavaFX Script capable IDEs.  As the new JavaFX platform is entirely based on Java, not only is debugging support first rate, the option of choosing other Java IDEs opens up considerably.

The new JavaFX results in, for lack of a better term, more predictability.  One of the primary JavaFX Script mechanisms used to define and place graphical assets (Nodes) into the scenegraph is the object literal notation.  Being static in nature, the notion of a sequential flow of events inside an object literal may not make much sense.  To compensate, JavaFX Script introduces the powerful concept of binding.  Without question, binding is very elegant, but it comes with a price.  It is not uncommon to see liberal and arguably unnecessary use of binding throughout JavaFX Script code.  One can argue that bringing JavaFX to Java should lessen, but certainly not eliminate, the need for binding.  Rather than defining objects inside a static initializer, JavaFX API objects are created and defined sequentially by constructor and method calls respectively.  By defining objects in an order that makes sense, there is potential to eliminate a certain class of dependencies that were previously resolved with binding.

Improved performance.  Although by default the JavaFX compiler generates Java bytecodes from JavaFX Script source, there is a command-line option which can be invoked to save the intermediate Java code that is produced as part of the process.  A brief perusal of this source shows that even the most humble of JavaFX Script constructs churns out a lot of complicated Java code.  Eliminating this overhead is bound to improve performance in many instances.  Furthermore, significant optimizations to memory and static footprint as well as startup time are underway.  Finally, a new lightweight, fast graphics subsystem, dubbed project Prism, will obviate the need to utilize older Java graphics windowing systems.

It's not how you feel, it's how you look.  A superficial difference, but nonetheless one that should not be underestimated, lies in what your code looks like.  In JavaFX Script, graphical Nodes are typically placed in the scenegraph via the definition of object literals.  Even the least sophisticated of object literal scenegraphs can be grouped and nested several levels deep, each nesting moving the source code further to the right.  Some JavaFX Script code is so indented it leaves little room to write anything of consequence without having to split a line of code into two or more lines.

It didn't take very long to come up with these talking points.  No doubt, as development progresses and more folks jump on board, additional benefits will become apparent.


NetBeans with Subversion, SSH and Windows

Having spent too much time wrestling with the various components required to get NetBeans to access a Subversion repository via ssh, I thought it might make sense to jot down a few notes in an effort to save others from such hardships.  NetBeans does have built-in support for CVS, Mercurial and Subversion, but that doesn't mean that these source code revision systems work in a turnkey fashion.  In particular Subversion, especially with Windows, does require some work.

1. To start: Get the latest JDK.  At the time of this writing it was JDK 6 update 18.  Get the latest version of NetBeans, in this case NetBeans 6.8.  Install it referencing the latest JDK, and furthermore, once inside NetBeans, utilize the update manager to make sure all of your modules are the latest and greatest.

2. Upon starting NetBeans, the menu running along the top of the window has a "Team" entry.  Click on Team and follow Subversion->Checkout (no, you won't be able to check anything out yet).  This will bring up an error window which requires you to install a Subversion client.  Selecting the default action, which installs a Subversion plugin for NetBeans, didn't result in much success.  Instead choose the second option: "Install Subversion Commandline Client".  The recommended download for the client points to CollabNet, which is an excellent choice.  Unfortunately the version referred to (v1.5.5 for Windows) may ultimately prove problematic if you're communicating with a Subversion repository that's based on a later version.  For this exercise, CollabNet Subversion Command-Line Client v1.6.9 (for Windows) was downloaded and installed.

3. Once Subversion is installed, clicking on Team->Subversion->Checkout again will hopefully bring up a second window which requires you to fill in two textfields: (1) a repository URL and (2) a tunnel command.  Let's focus on the tunnel command first.  For Windows this requires the equivalent of the Unix/Linux ssh command.  To get this functionality, download and install a version of the PuTTY software package.

4. Unlike the CollabNet Subversion client, the PuTTY installation does not put the executables of this package into your PATH.  Make sure to add the PuTTY path (typically C:\Program Files\PuTTY) to your PATH.

5. You'll need to edit a Subversion configuration file to specify the tunnel command as follows:

    C:\> cd %APPDATA%\Subversion
    C:\> edit config

Look for the comment that starts like this:

    #ssh = $SVN_SSH ssh

Uncomment it and make it look like the statement below.  The plink.exe executable is part of the PuTTY software bundle:

    ssh = $SVN_SSH plink.exe -l user -pw password

where user is your remote user name and password is that user's password.  You can also use private key authentication if you're uncomfortable with putting your password in the clear, which might look something like:

    ssh = $SVN_SSH plink.exe -l user -i C:\my_private_key.ppk

6. Prior to trying out NetBeans, first connect to your Subversion repository via ssh as follows:

    C:\> plink.exe user@host

Prior to running this command, the NetBeans attempt to access the Subversion repository hangs, apparently looking for a host fingerprint cache entry.  The plink.exe command above accomplishes its creation, once the correct password is entered.  In addition, it assures that SSH is correctly set up on your Subversion repository server.

7. Returning to the Team->Subversion->Checkout selection from the main NetBeans window, now it's time to fill in the two textfields.  The first entry should look something like this:

    svn+ssh://host/subversion_repository_path

for example:

    svn+ssh://127.0.0.1/home/svn/myProject

Next should come the tunnel command, which should be similar to the entry placed in the %APPDATA%\Subversion\config file, namely:

    plink -l user -pw password

where, again, user is your remote user name and password is this user's password.

8. With the information correctly filled in, and assuming your Subversion server is correctly configured, you should be able to begin utilizing Subversion through ssh with NetBeans.  For further information, check out:

- Guided Tour of Subversion with NetBeans
- NetBeans Subversion FAQ with SSH

Good Luck!
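For reference, the tunnel entry from step 5 lives in the [tunnels] section of Subversion's per-user config file. The sketch below writes the same line to a throwaway file on a Unix-like system just to show the expected shape; on Windows the real file is %APPDATA%\Subversion\config, and user/password are placeholders exactly as in the steps above.

```shell
# Demonstration only: the literal [tunnels] entry Subversion expects.
# ($SVN_SSH is kept literal by the quoted heredoc delimiter.)
CFG=/tmp/svn-config-demo
cat > "$CFG" <<'EOF'
[tunnels]
ssh = $SVN_SSH plink.exe -l user -pw password
EOF

grep '^ssh' "$CFG"
```

Any svn+ssh:// URL then routes through whatever command this `ssh` entry names, which is why substituting plink.exe here is all Windows needs.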


JavaFX, Sockets and Threading: Lessons Learned

When contemplating how machine-dependent applications might communicate with Java/JavaFX, JNI or the Java Native Interface, having been created for just such a task, would likely be the first mechanism that comes to mind.  Although JNI works just fine thank you, a group of us ultimately decided against using it for a small project because, among others: Errors in your JNI implementation can corrupt the Java Virtual Machine in very strange ways, leading to difficult diagnosis. JNI can be time consuming and tedious, especially if there's a varied amount of interchange between the Native and Java platforms. For each OS/Platform supported, a separate JNI implementation would need to be created and maintained. Instead we opted for something a bit more mundane, namely sockets.  The socket programming paradigm has been around a long time, is well understood and spans a multitude of hardware/software platforms.   Rather than spending time defining JNI interfaces, just open up a socket between applications and send messages back and forth, defining your own message protocol.  Following are some reflections on using sockets with JavaFX and Java.  For the sake of simplicity, we'll skip the native stuff and focus on how sockets can be incorporated into a JavaFX application in a thread safe manner. Sockets and Threading Socket programming, especially in Java, lends itself to utilizing threads.  Because a socket read() will block waiting for input, a common practice is to place the read loop in a background thread enabling you to continue processing while waiting for input at the same time.  And if you're doing this work entirely in Java, you'll find that both ends of the socket connection -- the "server" side and the "client" side -- share a great deal of common code.  
Recognizing this, an abstract class called GenericSocket.java was created to house the functionality shared by "server" and "client" sockets, including the setup of a reader thread to handle socket reads asynchronously.  For this simple example, two implementations of the abstract GenericSocket class have been supplied: one called SocketServer.java, the other SocketClient.java.  The primary difference between these two classes lies in the type of socket they use: SocketServer.java uses java.net.ServerSocket, while SocketClient.java uses java.net.Socket.  The respective implementations contain the details required to set up and tear down these slightly different socket types.

Dissecting the Java Socket Framework

If you want to use the provided Java socket framework with JavaFX, you need to understand this very important fact: JavaFX is not thread safe, and all JavaFX manipulation should be run on the JavaFX processing thread.1  If you allow a JavaFX application to interact with a thread other than the main processing thread, unpredictable errors will occur.  Recall that the GenericSocket class creates a reader thread to handle socket reads.  To avoid off-main-thread processing and its pitfalls in our socket classes, a few modifications must take place.

[1] Stolen from JavaFX: Developing Rich Internet Applications - Thanks Jim Clarke

Step 1: Identify Resources Off the Main Thread

The first step to operating in a thread-safe manner is to identify those resources in your Java code, residing off the main thread, that might need to be accessed by JavaFX.  For our example, we define two abstract methods.  The first, onMessage(), is called whenever a line of text is read from the socket; the GenericSocket.java code calls this method upon encountering socket input.  Let's take a look at the SocketReaderThread code inside GenericSocket to get a feel for what's going on.
class SocketReaderThread extends Thread {

    @Override
    public void run() {
        String line;
        waitForReady();
        /*
         * Read from the input stream one line at a time
         */
        try {
            if (input != null) {
                while ((line = input.readLine()) != null) {
                    if (debugFlagIsSet(DEBUG_IO)) {
                        System.out.println("recv> " + line);
                    }
                    /*
                     * The onMessage() method has to be implemented by
                     * a subclass.  If used in conjunction with JavaFX,
                     * use Entry.deferAction() to force this method to run
                     * on the main thread.
                     */
                    onMessage(line);
                }
            }
        } catch (Exception e) {
            if (debugFlagIsSet(DEBUG_EXCEPTIONS)) {
                e.printStackTrace();
            }
        } finally {
            notifyTerminate();
        }
    }
}

Because onMessage() is called off the main thread, inside SocketReaderThread, the comment states that some additional work, which we'll explain soon, must take place to assure main-thread processing.  Our second method, onClosedStatus(), is called whenever the status of the socket changes (opened or closed, for whatever reason).  This abstract routine is called in different places within GenericSocket.java -- sometimes on the main thread, sometimes not.  To assure thread safety, we'll employ the same technique as with onMessage().

Step 2: Create a Java Interface with Your Identified Methods

Once identified, these method signatures have to be declared inside a Java interface.  For example, our socket framework includes a SocketListener.java interface file which looks like this:

package genericsocket;

public interface SocketListener {
    public void onMessage(String line);
    public void onClosedStatus(Boolean isClosed);
}

Step 3: Create Your Java Class, Implementing Your Defined Interface

With our SocketListener interface defined, let's take a step-by-step look at how the SocketServer class is implemented inside SocketServer.java.  One of the first requirements is to import a special Java class which will allow us to do main-thread processing, achieved as follows:

import com.sun.javafx.runtime.Entry;

Next comes the declaration of SocketServer.
Notice that in addition to extending the abstract GenericSocket class, it must also implement our SocketListener interface:

public class SocketServer extends GenericSocket implements SocketListener {

Inside the SocketServer definition, a variable called fxListener of type SocketListener is declared:

    private SocketListener fxListener;

The constructor for SocketServer must include a reference to fxListener.  The other arguments are used to specify a port number and some debug flags.

    public SocketServer(SocketListener fxListener,
            int port, int debugFlags) {
        super(port, debugFlags);
        this.fxListener = fxListener;
    }

Next, let's examine the implementation of the two methods declared in the SocketListener interface.  The first, onMessage(), looks like this:

    /**
     * Called whenever a message is read from the socket.  In
     * JavaFX, this method must be run on the main thread and
     * is accomplished by the Entry.deferAction() call.  Failure
     * to do so *will* result in strange errors and exceptions.
     * @param line Line of text read from the socket.
     */
    @Override
    public void onMessage(final String line) {
        Entry.deferAction(new Runnable() {
            @Override
            public void run() {
                fxListener.onMessage(line);
            }
        });
    }

As the comment points out, the Entry.deferAction() call enables fxListener.onMessage() to be executed on the main thread.  It takes as an argument an instance of the Runnable class and, within its run() method, makes a call to fxListener.onMessage().  Another important point to notice is that onMessage()'s String argument must be declared final.  Along the same lines, the onClosedStatus() method is implemented as follows:

    /**
     * Called whenever the open/closed status of the Socket
     * changes.  In JavaFX, this method must be run on the main
     * thread and is accomplished by the Entry.deferAction() call.
     * Failure to do so *will* result in strange errors and exceptions.
     * @param isClosed true if the socket is closed
     */
    @Override
    public void onClosedStatus(final Boolean isClosed) {
        Entry.deferAction(new Runnable() {
            @Override
            public void run() {
                fxListener.onClosedStatus(isClosed);
            }
        });
    }

Another Runnable is scheduled via Entry.deferAction() to run fxListener.onClosedStatus() on the main thread.  Again, onClosedStatus()'s Boolean argument must also be declared final.

Accessing the Framework within JavaFX

With this work behind us, we can now integrate the framework into JavaFX.  But before elaborating on the details, let's show screenshots of two simple JavaFX applications, SocketServer and SocketClient, which, when run together, can send and receive text messages to one another over a socket.  These JavaFX programs were developed in NetBeans and use the recently announced NetBeans JavaFX Composer tool.  You can click on the images to execute these programs via Java WebStart.  Note: depending upon your platform, your system may ask for permission before allowing these applications to network.  Source for the JavaFX applications and the socket framework, in the form of NetBeans projects, can be downloaded here.

Step 4: Integrating into JavaFX

To access the socket framework within JavaFX, you must implement the SocketListener class that was created for this project.
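Entry.deferAction() is specific to the JavaFX 1.x runtime (later JavaFX releases expose the same capability as javafx.application.Platform.runLater()).  The hand-off pattern itself is plain Java, though, and can be sketched with a single-threaded executor standing in for the JavaFX thread so it runs anywhere (DeferDemo and deliver() are illustrative names, not part of the framework):

```java
import java.util.concurrent.*;

// Sketch of the deferAction()/runLater() hand-off: a background reader
// thread never touches listener state directly; it submits work to the
// one thread that owns that state.
public class DeferDemo {
    private static final ExecutorService GUI_THREAD =
            Executors.newSingleThreadExecutor();

    // Analogous to onMessage(): note the argument is (effectively) final,
    // just as the framework requires.
    static String deliver(final String line) throws Exception {
        Future<String> result = GUI_THREAD.submit(() -> "recv> " + line);
        return result.get();  // a real reader thread wouldn't block; shown for clarity
    }

    public static void main(String[] args) throws Exception {
        System.out.println(deliver("tick"));   // prints "recv> tick"
        GUI_THREAD.shutdown();
    }
}
```

The essential property is the same as with Entry.deferAction(): only one thread ever mutates the GUI-owned state, so no locking is needed in the listener code.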
To give you a feel for how this is done in our JavaFX SocketServer application, here are some code excerpts from the project's Main.fx file, in particular the definition of our ServerSocketListener class:

public class ServerSocketListener extends SocketListener {

    public override function onMessage(line: String) {
        insert line into recvListView.items;
    }

    public override function onClosedStatus(isClosed : java.lang.Boolean) {
        socketClosed = isClosed;
        tryingToConnect = false;
        if (autoConnectCheckbox.selected) {
            connectButtonAction();
        }
    }
}

Sparing all of the gory details, the onMessage() method places the line of text read from the socket into a JavaFX ListView control displayed in the program's user interface.  The onClosedStatus() method primarily updates the local socketClosed variable and attempts to reconnect the socket if the autoconnect option has been selected.  To demonstrate how the socket is created, we examine the connectButtonAction() function:

var socketServer : SocketServer;
...
public function connectButtonAction (): Void {
    if (not tryingToConnect) {
        if (socketClosed) {
            socketServer = new SocketServer(ServerSocketListener{},
                java.lang.Integer.parseInt(portTextbox.text),
                javafx.util.Bits.bitOr(GenericSocket.DEBUG_STATUS,
                    GenericSocket.DEBUG_IO));
            tryingToConnect = true;
            socketServer.connect();
        }
    }
}

Whenever the user clicks the "Connect" button, the connectButtonAction() function is called.  On invocation, if the socket isn't already open, it creates a new SocketServer instance.  Recognize also that the SocketServer constructor includes an instance of the ServerSocketListener class defined above.  To round this out, when the user clicks the "Disconnect" button, the disconnectButtonAction() function is called.  When invoked, it tears down the SocketServer instance.

function disconnectButtonAction (): Void {
    tryingToConnect = false;
    socketServer.shutdown();
}

Conclusion

Admittedly, there's a fair amount to digest here.
Hopefully, by carefully reviewing the steps and looking at the complete code listing, this can serve as a template if you wish to accomplish something similar in JavaFX.


Sun

GlassFish on a Handheld

Until now, the idea of running something akin to a Java EE application server on a small handheld device would have been greeted with ridicule.  Suddenly that notion doesn't seem so ridiculous when you consider the technology that's recently become available.  In particular, the following software advances make this pipe dream more of a reality:

- Java Standard Edition for Embedded Devices: A series of Java virtual machines are available from Sun for many of the popular embedded hardware/OS platforms.  They are not only Java SE compatible, but have been optimized from a static-footprint and RAM perspective to perform in embedded environments.  To give you an idea of some of those optimizations, read this.
- Java Enterprise Edition 6 Platform Specification and the Web Profile: The Java EE 6 specification allows for the creation of subsets of the component technologies, called "profiles".  The first of these has been dubbed the Web Profile and contains the common technologies required to create small to medium web applications.  Rather than having to use a full-blown Java EE application server in all its glory, you can take advantage of a significantly smaller, less complex framework.
- Embedded GlassFish: This capability, now part of GlassFish v3, enables you to run GlassFish inside your Java application, as opposed to the other way around.  Simply put, there is no need to install GlassFish or create GlassFish domains in this scenario.  Instead, you include an instance of glassfish-embedded-web.jar in your classpath, make a few GlassFish Embedded API calls from your standard Java application, and voila! you've got a web application up and running.

The Hardware

Rather than opting for one of the many embedded development platforms around (because I'm cheap), I instead decided to investigate what was available from a handheld perspective and see if that environment could be adapted to suit my needs.
After some searching, it looked like the Nokia N810 just might fit the bill.  Courtesy of my buddy Eric Bruno, here's a picture of the N810: To get a feel for this very capable device, check out Eric's article.  What most interested me was that it (1) has 128MB of RAM, (2) sports a 400MHz ARM v6 processor, (3) runs a common embedded version of Linux (maemo), (4) has a version of Java SE Embedded (from Sun) which runs fine on this platform, and (5) can be had for a relatively affordable price on eBay.

The Operating System

The Nokia N810 is powered by the maemo distribution, an open source platform with a thriving community of developers.  Knowing full well that any attempt to get a web application up and running on this device would stretch its resources to the limit, it was necessary to reclaim as much RAM as possible before starting out.  Here's a description of some of the kludgery involved:

You'll need to download and install some additional applications, which can be retrieved from the N810's Application Manager program.  They include rootsh, to enable root access to the device, and openssh-client and openssh-server, to remotely access the device.

A quick and dirty way to reclaim RAM is to shut down the X server and kill all of the windowing applications that run in parallel.  There are certainly more elegant ways to do this, but in addition to being cheap, I'm lazy too.  What you quickly find out is that any attempt to manually kill off some of these processes results in a reboot of the tablet.  Why?  Because by default the N810 includes a watchdog process that monitors the state of the system.  If it detects any irregularities, it forces a restart.  You can get around this problem by putting the device into what is called "R&D" mode.  This is achieved by downloading the "flasher" utility from maemo.org and establishing a USB connection between the N810 and your host computer.  Directions for this process can be found here.
Once established, you can invoke the following flasher command:

    flasher3.5 --set-rd-flags=no-lifeguard-reset

If this was done successfully, you'll notice that a wrench appears on the tablet screen when it is rebooted.  Once in R&D mode, you'll have to remotely ssh into the device over its WiFi connection.  The following script, called set-headless.sh, has been provided to kill off the windowing system.  After executing this script, the N810 in effect becomes a headless device; the only way to communicate with it is through the network.

The Environment

Here's what was required to get the web application up and running:

1. Ssh into the device.
2. Download an evaluation copy of Java SE Embedded (ARMv6 Linux - Headless).  For this example the download file was gunzip'ed and untar'ed into the device's /usr directory, resulting in a new /usr/ejre1.6.0_10 directory.
3. Download a copy of glassfish-embedded-web-3.0.jar and place this file in the /usr/ejre1.6.0_10/lib directory.
4. Modify your PATH variable to include /usr/ejre1.6.0_10/bin and set your JAVA_HOME variable to /usr/ejre1.6.0_10.
5. Create a temporary directory; for this example we'll create /root/tmp.
6. Compile the following Java source file, Embedded.java, on a JDK-equipped system.  It is a slightly hacked version of the original provided by Alexis Moussine-Pouchkine.
7. Create a glassfish directory under /root/tmp/ and place the compiled Embedded.class file there.
8. Download the sample hello web application, hello.war, and place it in the /root/tmp directory.
9. Reclaim as much RAM as possible by running the set-headless.sh script.
10. Run the web application from inside the /root/tmp directory via the following command-line invocation:

    # java -cp /usr/ejre1.6.0_10/lib/glassfish-embedded-web-3.0.jar:. glassfish/Embedded hello.war 600

As the N810 cannot match even the most modest of laptops in terms of performance, be prepared to wait around a little before the application is ready.
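Condensed into shell form, the on-device portion of the steps above amounts to roughly the following (run as root on the N810; the paths match the example layout, and set-headless.sh is the script provided earlier -- adjust for wherever you saved it):

```shell
# One-time environment setup on the N810
export JAVA_HOME=/usr/ejre1.6.0_10
export PATH=$JAVA_HOME/bin:$PATH

mkdir -p /root/tmp/glassfish     # holds the compiled Embedded.class
cd /root/tmp                     # hello.war lives here

./set-headless.sh                # reclaim RAM: kill the windowing system

# Launch the embedded web container; the trailing 600 is the second
# argument the Embedded example expects
java -cp $JAVA_HOME/lib/glassfish-embedded-web-3.0.jar:. \
     glassfish/Embedded hello.war 600
```

This is a setup sketch, not something runnable off-device: it assumes the evaluation JRE, the embedded GlassFish jar, and the war file have already been placed as described in steps 2, 3 and 8.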
Check this output to see what is printed to the console during invocation.  For this run, the N810 was assigned a WiFi IP address of 192.168.1.102, so the browser is pointed to that address on port 8888.  Here's what comes up: And interacting with the web app produces this: So this is obviously not ready for prime time, but it does open up a whole lot more possibilities for the near future.  Happy New Year!


Personal

Consumer Electronics: Your Utility Company's Best Friend

If you're reading this article, the chances are real good that your home is full of electronic gadgets, and many are permanently plugged into wall sockets.  Having recently purchased a P3 International P4460 Kill A Watt EZ power meter, I've been running around getting a feel for how much energy some of these common components use.  For this first table, I wanted to see how much energy was being consumed by audio/visual components that were plugged into the wall but not powered on.  Indeed, some of these numbers are eye-opening:

    Device                                           Watts consumed (powered off)
    Visio 22" LCD HDTV (circa 2007)                  1.58
    Visio 32" LCD HDTV (circa 2009)                  1.18
    25" RCA Tube TV (circa 1990)                     4.17
    Marantz AV Surround Receiver SR7200              2.79
    Marantz CD Changer CC9100                        1.43
    Marantz DVD Player DV6200                        3.60
    Boston Micro90pv II Subwoofer                    14.53
    Microsoft Xbox 360                               3.64
    Nintendo Wii                                     2.70
    Sony PlayStation 2                               0.67
    Scientific Atlanta Explorer 4250HD set top       17.27
    Scientific Atlanta Explorer 8300HD DVR set top   20.00

The granularity of the P4460 is hundredths of kilowatt-hours, which means that in order to get a decent reading on a low-power device, you've got to leave it attached for a while.  No doubt some of these measurements could be more accurate, but I think you get the point.  Of note:

- The 4 components that comprise the audio system consume over 22 watts all day, every day, 24x7.
- As set-top boxes gain more functionality (e.g. DVR), they suck up even more power.  At this rate, you'd think the power utility would subsidize the cable companies to get as many of these things installed as possible.
- All told, the devices in the preceding table consume an amount of energy similar to a 75 watt incandescent light bulb being left on all the time.

For the most part, these components are non-critical, and one could argue that some (e.g. the stereo) should be connected to a power strip with an on/off button.  This should not pose much of an inconvenience.
However, it's a completely different story when it comes to the video components.  When power cycled, set-top boxes take forever to reach full functionality as they reboot over the cable network.  And televisions, if left untethered from electric power for too long, go through a whole channel-search sequence when they're powered up.  In short, when it comes to video, you're currently stuck paying the hidden price of "convenience".  For the computer nerd in us, here's a table comprising some of the computer/network-related components that are left on 24x7:

    Device                                                                      Watts consumed
    Belkin Wireless G Router                                                    5.09
    Motorola SBV5120 SURFboard Cable Modem                                      5.40
    Netgear ProSafe 16 port 10/100 Switch (FS116)                               8.33
    HP Photosmart c6280 printer (networked) (circa 2008)                        4.78
    HP Laserjet 4P (circa 1994)                                                 5.32
    Canon ImageCLASS MF4270 all-in-one laser printer (networked) (circa 2009)   3.60
    Fit-PC Slim with external USB hard disk (network file server)               7.50

A decade ago, devices like those listed above were virtually non-existent in the home.  So what's the cost of being Internet ready anytime, anywhere?  About 40 watts, all day and all night.  Some final points:

- In general, carefully consider keeping around older gadgets.  The circa-1990s devices definitely use more energy than their newer, more functional counterparts.
- Notice no computers are included.  You know you have more than one; some (present company included) have way more than one.  Let's hope you're turning these off or at least putting them to sleep at night!
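To put those wattage figures in context, here's a back-of-the-envelope calculation of what always-on standby draw costs per year.  The $0.15/kWh electric rate is an assumption -- substitute your own utility's rate:

```java
// Annual energy and cost of a constant standby load. The wattages come
// from the tables above; the rate is an assumed figure for illustration.
public class StandbyCost {

    // watts drawn continuously -> kilowatt-hours per year
    static double kwhPerYear(double watts) {
        return watts * 24 * 365 / 1000.0;
    }

    public static void main(String[] args) {
        double rate = 0.15;        // assumed $/kWh
        double audioStack = 22.0;  // the four audio components, powered off
        double netGear = 40.0;     // the always-on network/printer table
        System.out.printf("audio:   %.1f kWh/yr, $%.2f/yr%n",
                kwhPerYear(audioStack), kwhPerYear(audioStack) * rate);
        System.out.printf("network: %.1f kWh/yr, $%.2f/yr%n",
                kwhPerYear(netGear), kwhPerYear(netGear) * rate);
        // prints:
        // audio:   192.7 kWh/yr, $28.91/yr
        // network: 350.4 kWh/yr, $52.56/yr
    }
}
```

Not ruinous, but it's money spent on devices doing exactly nothing.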


Sun

JavaFX Scenegraph Performance Revisited

Prior to the release of JavaFX 1.2, an earlier blog post explained how creating an application with a scenegraph containing a large number of nodes can have performance implications.  That entry was subsequently picked up by Java Lobby and recently posted here.  Partly because it was a few months old, it resulted in a rash of, shall we say, interesting comments.  As one commenter pointed out, the initial results represent JavaFX performance in the 1.0/1.1 timeframe.  JavaFX 1.2 has since been released, and performance has improved substantially.  As a case in point, you can click on the image below to run the clock application.  This same application, developed and compiled with JavaFX 1.1, can be run from the previous blog post; further instructions are there.  This application, compiled with JavaFX 1.2 and run on identical hardware, uses about a third of the CPU resources of the original version.  Specifically, using OpenSolaris vmstat(1M) to monitor CPU usage, the following average statistics were collected for one minute while the clock display updated every 10th of a second.  The abbreviations have the following meanings:

us = percentage of CPU time spent in user space
sy = percentage of CPU time spent in system space
id = percentage of CPU time spent idling

The sum of (us + sy + id) should approximate 100%.  And here are the utilization numbers:

    Version                      # Nodes per Digit   CPU Utilization
    BulbClockNode JavaFX 1.1     27 BulbNodes        us: 22%  sy: 2%  id: 76%
    BulbClockNode JavaFX 1.2     27 BulbNodes        us: 7%   sy: 2%  id: 91%

Yes, performance has improved significantly and will continue to do so.  In fact, the JavaFX team is promising even better results with the advent of JavaFX 1.3 (code named SoMa), when considerable refinement of the underlying architecture will take place.  At this stage in its lifecycle, it's important to "subscribe" to the JavaFX technology.  Advances are coming fast and furious, and they don't promise to slow down anytime soon.


Sun

Getting the JavaFX Screen Size

In a previous post, I showed one way in which you could size the nodes in your scenegraph relative to the screen size without having to use binding, thus eliminating the potential for a bindstorm.  At the time, the suggestion was to get the screen coordinates via a call to AWT (the Abstract Window Toolkit) as follows:

var AWTtoolkit = java.awt.Toolkit.getDefaultToolkit();
var screenSizeFromAWT = AWTtoolkit.getScreenSize();

This currently works for JavaFX desktop deployments but is far from an optimal solution, primarily because it is non-portable.  There is no guarantee that AWT will exist -- in fact, I'm pretty sure it will not exist -- in the JavaFX Mobile and JavaFX TV space.  So attempting to utilize the code snippet above in a JavaFX Mobile or TV context will result in an error.  Unfortunately, at the time of the original post (JavaFX 1.1), I didn't know of a better alternative.  With the advent of JavaFX 1.2, this problem is solved.  A new class called javafx.stage.Screen is provided which describes the characteristics of the display.  The API definition of the class can be found here.  So now you can query the JavaFX runtime in a portable fashion to get the screen coordinates as follows:

import javafx.stage.*;

// set Stage boundaries to consume entire screen
Stage {
    fullScreen: true
    width: Screen.primary.bounds.width as Integer
    height: Screen.primary.bounds.height as Integer
    ...
}

Cheers.


Sun

Early Access version of Java RTS 2.2 Released

Just in time for the start of JavaOne 2009, Sun released the latest version of its real-time Java platform, Java RTS 2.2, in early access form.  New features of this release include, but are not limited to:

- OS platform support for Solaris 10 (x86 and SPARC - Update 5 & 6), Red Hat MRG 1.1 and SUSE Linux Enterprise Real-Time 10 (SP2 Update 4).
- 64-bit support for the three operating systems mentioned above.  In addition, 32-bit support continues.
- Faster throughput.  Java RTS 2.2 utilizes a new dynamic (JIT) compiler targeted for Java SE 7 which contains several performance improvements, including a new register allocator.
- TSV (Thread Scheduling Visualizer) enhancements.
- NetBeans plug-in support for Java RTS and TSV.
- New Initialization-Time-Compilation (ITC) options, including the ability to wildcard a set of methods vs. listing single methods, and support for specifying compilation based on thread type.
- Auto-tuning and startup improvements made to the Real-Time Garbage Collector (RTGC).
- New documentation, including a Java RTS Quick Start Guide to get new users started quickly with minimal programming and configuration.

A 90-day evaluation is available here: http://java.sun.com/javase/technologies/realtime/index.jsp.  For more detailed information on OS support and new functionality, check out the 2.2 Release Notes.


Personal

You Are What You Eat?

So you may ask, what effect does the food we eat have on our blood pressure?  My experience, documented below, won't likely hold up to a whole lot of scientific scrutiny, but it's a good enough lesson for me.  A combination of poor eating habits and genetics lands me in that ever-increasing group of individuals who have high blood pressure.  At first I was what you might consider borderline hypertensive.  Being thin and active, in conjunction with my doctor, we only monitored my levels to make sure they didn't get worse.  Unfortunately they did.  In an attempt to avoid medication, I started seriously watching my sodium intake.  The resulting change in blood pressure was noticeable in pretty short order.  See the table below for daily readings over the last two weeks; in general, I try to take my reading at or around 8:00 AM if possible.  Having spent my whole life eating pretty much what I please, it's not easy to make this lifestyle change.  But with the help of loved ones, things have been going quite well.  But then came Mother's Day (5/10).  A combination of brunch with my parents and dinner with my in-laws put the kibosh on any diet plans.  I knew this would happen, and was frankly looking forward to the feast.  It only took one day for my pressure to skyrocket.  Amazing.  But man, did I enjoy eating the waffles, bacon, crumb cake, bagels, Mimosas (orange juice and Champagne), cookies, macaroni, meatballs, turkey, chocolate cake, Coke, espresso, wine, Sambuca and Cognac, to name a few.

    Date   Time      Blood Pressure
    4/29   8:01 AM   124/73
    4/30   8:19 AM   124/73
    5/1    8:18 AM   123/71
    5/2    7:48 AM   119/75
    5/3    9:22 AM   126/74
    5/4    7:53 AM   129/79
    5/5    8:20 AM   119/69
    5/6    7:16 AM   126/75
    5/7    6:19 AM   126/72
    5/8    6:56 AM   125/75
    5/9    7:48 AM   133/71
    5/10   9:14 AM   129/73
    5/11   7:56 AM   159/81

Back to the bland diet, that is, until the next feast.


Sun

Node Count and JavaFX Performance

In a recent blog entry entitled Best Practices for JavaFX Mobile Applications (Part 2), Michael Heinrichs espouses that keeping the scenegraph as small as possible helps JavaFX applications perform optimally.  Regardless of what version of JavaFX you're using, this is sage advice.  Having spent some time trying to create components for a scoreboard-like application, I was concerned about the amount of CPU time being consumed by the clock component pictured directly below.  You can click on the preceding image to run this mini application via Java WebStart.  By placing your mouse over any of the digits and typing a valid number on the keyboard, you can set the clock.  Clicking on the "START/STOP" text toggles the clock on and off.  Like many scoreboard clocks, when the time remaining is less than one minute, 10ths of seconds are displayed.  It is during this phase, when digits are being updated every tenth of a second, that this application can be especially taxing.  You can imagine how troublesome this clock might be if it were part of, say, a hockey scoreboard, which could have an additional 4 penalty clocks ticking simultaneously.  The major factor affecting performance appears to be the sheer number of nodes in the scenegraph that require recalculation for every clock tick.  For this first implementation, each of the five clock digits is comprised of 27 BulbNodes (my naming), which are switched on or off depending upon what value needs to be displayed.  In an effort to see how decreasing the node count might affect performance, this second implementation of the clock component uses the same underlying framework, except that each digit is now composed of 7 LED SegmentNodes (my naming again) instead of 27 BulbNodes.  You can run this version of the clock component by clicking on the image that follows.  For our final example, in order to truly minimize node count, each digit is represented by a single ImageView node.
When the value of a digit changes, a new Image is displayed.  By caching all of the possible digit values (blank, 0-9) you can switch images very quickly.  No doubt prettier images could be created, but I think you get the point.  Click on the image that follows to try this version.

The Results

The slower the compute platform, the more pronounced the differences in performance should be.  Thinking along those lines, a very modest 1.4 GHz Pentium M laptop was chosen as the test environment to compare CPU utilization for these applications.  OpenSolaris provides an easy-to-use, well-known command-line tool called vmstat(1M), which was chosen as the mechanism to analyze the individual applications.  In contrast, the Performance tab of the Windows Task Manager seemed to produce wild performance variations.  For each run, the clocks were set to one minute and run until the time expired.  The data shown below represents the average CPU utilization, after startup, for each of the three implementations.  In particular we'll look at the following fields returned by vmstat:

us - percentage of CPU time spent in user space
sy - percentage of CPU time spent in system space
id - percentage of CPU time spent idling

The sum of (us + sy + id) should approximate 100%.

    Implementation                     # Nodes per Digit   CPU Utilization
    Implementation 1: BulbClockNode    27 BulbNodes        us: 22%  sy: 2%  id: 76%
    Implementation 2: LEDClockNode     7 SegmentNodes      us: 9%   sy: 2%  id: 89%
    Implementation 3: ImageClockNode   1 ImageNode         us: 3%   sy: 2%  id: 95%

The JavaFX engineering team is well aware of this limitation and hopes to redesign the underlying scenegraph plumbing in the future.  Regardless, it's still a good idea to take the size of your scenegraph into consideration.

JavaFX book status: Our upcoming book, JavaFX: Developing Rich Internet Applications, is in copy edit.  Coming soon: rough cuts of certain chapters will be available on Safari.
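The image swap in implementation 3 is really just a tiny cache lookup per clock tick.  Here's the idea sketched in plain Java, with String file names standing in for JavaFX Image objects (the names are hypothetical) so the sketch runs headless:

```java
// One cache slot per displayable value: blank, then 0-9. A clock tick is
// then just an array lookup plus an image swap on the existing ImageView
// node -- no scenegraph nodes are rebuilt.
public class DigitCache {
    static final String[] IMAGES = new String[11];
    static {
        IMAGES[0] = "digit-blank.png";            // hypothetical file names
        for (int d = 0; d <= 9; d++) {
            IMAGES[d + 1] = "digit-" + d + ".png";
        }
    }

    /** value of -1 means blank; 0-9 select the matching digit image. */
    static String imageFor(int value) {
        return IMAGES[value + 1];
    }

    public static void main(String[] args) {
        System.out.println(imageFor(7));          // prints "digit-7.png"
    }
}
```

In the JavaFX version you would preload Image instances into the array once at startup, then assign the looked-up Image to the digit's ImageView whenever the value changes.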


Sun

Bindstorming

It is within our nature, even in the most infinitesimal way, to leave our mark on this world before we exit it.  I'd like to coin the following term, heretofore unseen in the JavaFX space, and submit it as my humble contribution to the human collective:

bindstorm \'bïnd•storm\ (noun): condition where a multitude of JavaFX bind recalculations severely hampers interactive performance

Yeah, I know, using the word you wish to define inside its definition is bad, but there is precedent for this: (1) fancy-schmancy, hoity-toity college dictionaries do it all the time, and (2) mathematicians and computer scientists call this recursion: that mysterious concept which developers use to impress others with their programming prowess.  Don't get me wrong, JavaFX binding is incredibly powerful.  Heck, we dedicated a whole chapter to it in our soon-to-be-released book JavaFX: Developing Rich Internet Applications.  But binding does come with a price, and like most anything else, over-consumption can lead to abuse.  Consider this use case: you've got a JavaFX application with dozens or maybe even hundreds of Nodes in the scenegraph.  Each of the Nodes is ultimately sized and positioned in proportion to height and width instance variables that are passed on down.  If you define width and height at startup and have no interest in a resizeable interface, then you stand a good chance of avoiding many bind expressions.  The one potential twist: if you're sincerely interested in a non-resizeable application, but want it to consume the entire screen, what do you do?  As screens come in all shapes and sizes, you may not know the resolution at start time.  JavaFX has an elegant solution for this which uses binding.  Here's a simple application which defines a Rectangle and a Circle that fill the entire screen.  You can click anywhere within the Circle to exit the application.  Notice the number of binds required to get this to work.
    import javafx.stage.*;
    import javafx.scene.*;
    import javafx.scene.shape.*;
    import javafx.scene.paint.*;
    import javafx.scene.input.*;

    function run() : Void {
        var stage: Stage = Stage {
            fullScreen: true
            scene: Scene {
                content: [
                    Rectangle {
                        width: bind stage.width
                        height: bind stage.height
                        fill: Color.BLUE
                    }
                    Circle {
                        centerX: bind stage.width / 2
                        centerY: bind stage.height / 2
                        radius: bind if (stage.width < stage.height) then
                                stage.width / 2 else stage.height / 2
                        fill: Color.RED
                        onMouseClicked: function(me: MouseEvent) {
                            FX.exit();
                        }
                    }
                ]
            }
        }
    }

Imagine what this would look like if you had lots of complex custom components with many more dependencies on height and width.  In addition to the potential performance impact, this could be error-prone and cumbersome to code.
To avoid the overuse of binding and the potential for a bindstorm, applications of this sort could be rewritten as follows:

    import javafx.stage.*;
    import javafx.scene.*;
    import javafx.scene.shape.*;
    import javafx.scene.paint.*;
    import javafx.scene.input.*;

    function run() : Void {
        var AWTtoolkit = java.awt.Toolkit.getDefaultToolkit();
        var screenSizeFromAWT = AWTtoolkit.getScreenSize();
        Stage {
            fullScreen: true
            scene: Scene {
                content: [
                    Rectangle {
                        width: screenSizeFromAWT.width
                        height: screenSizeFromAWT.height
                        fill: Color.BLUE
                    }
                    Circle {
                        centerX: screenSizeFromAWT.width / 2
                        centerY: screenSizeFromAWT.height / 2
                        radius: if (screenSizeFromAWT.width < screenSizeFromAWT.height) then
                                screenSizeFromAWT.width / 2 else screenSizeFromAWT.height / 2
                        fill: Color.RED
                        onMouseClicked: function(me: MouseEvent) {
                            FX.exit();
                        }
                    }
                ]
            }
        }
    }

We achieve the same effect as the first example by first making a call to the java.awt.Toolkit class.  With this information we can statically define our scenegraph without the use of binding.

There is one caveat to this solution.  As the AWT (Abstract Window Toolkit) is an integral part of Java SE, this code should be portable across all JavaFX desktops.  However, if you wish to deploy a JavaFX Mobile solution, the AWT calls would likely change.  Is there a mechanism that might work across both models?

As a final thought, while we're on this theme of coining terms, my compadres Jim Clarke and Eric Bruno, co-authors of the aforementioned JavaFX book, jokingly asked what word could be used to describe this scenario: "Condition where binds lead to binds that lead back to the original bind, ending up in a stack fault?" BindQuake? BindTsunami? Bindless? BindSpin? BindHole (BlackHole)? BindPit?
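To see why eager recalculation adds up, here's a minimal sketch in plain Java (JavaFX Script itself is not shown; the SimpleProperty class and all names here are hypothetical stand-ins for the bind machinery): every change to a bound property immediately re-runs each dependent computation, so one width change fans out to every node that binds to it.

```java
import java.util.ArrayList;
import java.util.List;

public class BindstormSketch {
    // Hypothetical stand-in for a bindable property: setting the value
    // eagerly re-runs every dependent recalculation.
    static class SimpleProperty {
        private double value;
        private final List<Runnable> dependents = new ArrayList<>();
        SimpleProperty(double v) { value = v; }
        double get() { return value; }
        void set(double v) {
            value = v;
            for (Runnable r : dependents) r.run();  // eager recalculation
        }
        void bind(Runnable recalculation) { dependents.add(recalculation); }
    }

    // How many recalculations does a single width change trigger when
    // nodeCount nodes each bind their size to the width?
    static int recalcsForOneChange(int nodeCount) {
        SimpleProperty width = new SimpleProperty(800);
        int[] recalcs = {0};
        for (int i = 0; i < nodeCount; i++) {
            width.bind(() -> recalcs[0]++);  // stand-in for sizing one node
        }
        width.set(1024);                     // a single change...
        return recalcs[0];                   // ...fans out to every node
    }

    public static void main(String[] args) {
        System.out.println(recalcsForOneChange(100));  // prints 100
    }
}
```

Reading the screen size once and building the scenegraph statically, as in the example above, does this work exactly one time instead of on every change.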


Registering Multiple Actions (or Handlers) in JavaFX

Java developers, especially those performing any type of GUI work, will ultimately encounter Java's event-driven programming paradigm.  In short, if programmers want to act upon some kind of event they bundle up a chunk of code into a Java method, typically referred to as a handler, and register the handler with that event.  Whenever that event occurs, the handler code will automatically be executed.

JavaFX provides a similar mechanism.  For a straightforward example, the code below defines a simple timer in JavaFX with a resolution of 1 second.  Each time a second expires, the function specified by the action instance variable will be executed.  Here's what it looks like:

    import javafx.animation.*;

    public class SimpleTimer {
        public def timeline = Timeline {
            repeatCount: 5
            interpolate: false
            keyFrames: [
                KeyFrame {
                    time: 1s
                    action: function () : Void {
                        println("tick");
                    }
                }
            ]
        }
    }

Adding a run() function, as follows, to the bottom of this source will enable you to run an instance of this timer:

    function run() : Void {
        var s = SimpleTimer{};
        s.timeline.playFromStart();
    }

The output from this run looks like this:

    tick
    tick
    tick
    tick
    tick

It's all well and good if you only need a single action.  What if you wanted to perform multiple actions and/or dynamically add or subtract a number of actions?  We can enhance our previous SimpleTimer class to dynamically register and unregister handlers by taking advantage of two of JavaFX's features: sequences and function pointers.  Our new class provides more flexibility:

It defines an instance variable called duration, which enables the user to specify the resolution of a clock tick at object instantiation.

It defines two additional public functions called registerHandler() and unRegisterHandler(), which take a function pointer (a handler) as an argument.  By registering a handler, the function will be included in the list of handlers to be executed each time the specified duration expires.
A handler is registered by inserting its function pointer argument into an internal sequence of function pointers called handlers[].  A handler is similarly unregistered by deleting its function pointer argument from the handlers[] sequence.  The action instance variable, which is part of the KeyFrame instance, now calls an internal function called runHandlers().  runHandlers() sequentially executes the functions found in the handlers[] sequence.  Here's the new class:

    import javafx.animation.*;

    public class Timer {
        /**
         * User-definable: specifies the length of time for one cycle.
         */
        public var duration = 100ms;

        public def timeline = Timeline {
            repeatCount: Timeline.INDEFINITE
            interpolate: false
            keyFrames: [
                KeyFrame {
                    time: duration
                    action: runHandlers
                }
            ]
        }

        // Holds the list of handlers to run
        protected var handlers: function() [];

        /**
         * Add the function, represented by the handler argument, to the list
         * of handlers.  These will run when the elapsed time, specified
         * by the duration instance variable, expires.
         */
        public function registerHandler(handler : function()) : Void {
            for (func in handlers) {
                if (handler == func) {
                    return;  // handler already registered -- skip
                }
            }
            insert handler into handlers;
        }

        /**
         * Remove the function, represented by the handler argument, from
         * the list of handlers.
         */
        public function unRegisterHandler(handler : function()) : Void {
            delete handler from handlers;
        }

        protected function runHandlers() : Void {
            for (handler in handlers) {
                handler();
            }
        }
    }

To test this class out, we'll add a run() function at the script level.  The run() function creates a Timer instance and registers two handler functions, decrementTenthsRemaining() and processTick().
Here's the code:

    function run() : Void {
        var t = Timer {};
        var tenthsRemaining = 100;
        var decrementTenthsRemaining = function() : Void {
            tenthsRemaining -= 1;
        }
        var processTick = function() : Void {
            if (tenthsRemaining mod 10 == 0) {
                println("seconds left: {tenthsRemaining / 10}");
            }
            if (tenthsRemaining == 0) {
                t.timeline.stop();
            }
        };
        t.registerHandler(decrementTenthsRemaining);
        t.registerHandler(processTick);
        t.timeline.play();
    }

And this is the output from the run:

    seconds left: 9
    seconds left: 8
    seconds left: 7
    seconds left: 6
    seconds left: 5
    seconds left: 4
    seconds left: 3
    seconds left: 2
    seconds left: 1
    seconds left: 0

Shameless Promotion:  Keep up to date with the latest status of our upcoming JavaFX Book entitled JavaFX: Developing Rich Internet Applications at jfxbook.com.
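The same register/unregister pattern translates directly to plain Java, with Runnables standing in for JavaFX function pointers and a List standing in for the handlers[] sequence.  This is a hypothetical sketch, not code from the book:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Plain-Java analogue of the JavaFX Timer class above.
public class MultiActionTimer {
    private final List<Runnable> handlers = new CopyOnWriteArrayList<>();

    public void registerHandler(Runnable handler) {
        if (!handlers.contains(handler)) {  // skip duplicates, like the JavaFX version
            handlers.add(handler);
        }
    }

    public void unRegisterHandler(Runnable handler) {
        handlers.remove(handler);
    }

    // Called on each clock tick; runs every registered handler in order.
    public void runHandlers() {
        for (Runnable handler : handlers) {
            handler.run();
        }
    }

    public static void main(String[] args) {
        MultiActionTimer t = new MultiActionTimer();
        int[] tenthsRemaining = {100};
        Runnable decrement = () -> tenthsRemaining[0]--;
        t.registerHandler(decrement);
        t.registerHandler(decrement);        // duplicate: ignored
        for (int i = 0; i < 10; i++) {
            t.runHandlers();                 // stand-in for ten timer ticks
        }
        System.out.println(tenthsRemaining[0]);  // prints 90
    }
}
```

CopyOnWriteArrayList is used so a handler can safely unregister itself (or another handler) while runHandlers() is iterating, without a ConcurrentModificationException.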


JavaFX Book Coming to a Theatre Near You

Despite the considerable attention JavaFX has garnered, publications (i.e. books) that discuss JavaFX in any detail are few and far between, and anything that has been published, as good as it may have been, is unfortunately hopelessly out of date.  The reality is that up until recently, the JavaFX platform has been a moving target.  With the advent of JavaFX 1.1 however, the platform has stabilized to the point where you should begin to see legitimate books appearing on the subject. Jim Clarke, Eric Bruno and I have been working steadfastly on a book entitled JavaFX: Developing Rich Internet Applications, which should be among the first -- if not the first -- of these new books.  From our standpoint the content is nearly finished.  What remains is the editing and publication process which takes a few additional months to complete.  Plans call for this book to be available in time for the JavaOne 2009 Conference, if not earlier. We also plan on making rough cuts of certain chapters available on Safari.  As soon as these are ready, we'll let you know.  Finally, check out our website, jfxbook.com, dedicated to the book.  There you will find additional resources that accompany the book, including sample code and applications.  One such application is a JavaFX version of the popular Sudoku game, pictured below. Visit jfxbook.com and give it a try.


Overhead in Increasing the Solaris System Clock Rate

In a previous entry entitled Real-Time Java and High Resolution Timers, we discussed how Sun's Java Real-Time System requires access to timers with a resolution finer than the default 10ms to do anything really interesting.  It was also noted that most modern processors have an APIC, or Advanced Programmable Interrupt Controller, which supports much finer-grained clock tick rates.  Unfortunately there are many instances where a system does indeed contain an APIC, but it is not exposed by the BIOS.  Furthermore, we've found that some of the embedded, low-power x86-based processors do not contain an APIC at all.  For an example, take a look at the AMD Geode LX 800 based fit-PC Slim.

So if you wanted to utilize higher resolution timers on this class of system, you'd have to resort to alternative methods.  Solaris and OpenSolaris provide two /etc/system parameters called hires_tick and hires_hz to facilitate increasing your default system clock tick rate.  By adding the following line to /etc/system, you'll increase the system clock tick rate from the default of 100 per second to 1,000 per second, effectively changing the clock resolution from 10ms to 1ms:

    set hires_tick=1

If you want to further increase the clock resolution, you can do so via the hires_hz system tunable parameter.  Although formally unsupported, it does work.  To, for example, increase the clock tick rate to 10,000, add this to /etc/system:

    set hires_tick=1
    set hires_hz=10000

To achieve the desired effect, you must include the hires_tick assignment in addition to setting the hires_hz parameter.

These modifications do not come without side effects, and depending upon the hardware in question and the granularity of the desired timer resolution, they could be significant.  In short, it takes additional CPU cycles to field all those timer interrupts.  So I thought it'd be interesting to see what effect changing the clock tick rate had on two separate systems.
Here they are:

    System               fit-PC Slim                  Panasonic Toughbook CF-30 (Revision F)
    CPU                  AMD Geode LX 800 (500 MHz)   Intel Core 2 Duo L7500 (1.60 GHz)
    OpenSolaris Version  snv_98                       snv_101b

The measuring tool used for this simple experiment is vmstat(1m).  Solaris aficionados will likely point out that there are much more accurate alternatives, but I think vmstat(1m) gives a decent feel for what's going on without having to expend a whole lot of extra energy.  In particular we'll look at the following fields returned by issuing a 'vmstat 5' and picking one of the interim samples:

    in - interrupts per second
    us - percentage of CPU time in user space
    sy - percentage of CPU time in system space
    id - percentage of CPU time idling

The sum of (us + sy + id) should approximate 100%.  The table below shows sample vmstat output at various clock tick settings for our two hardware platforms.

    Clock ticks/sec       100             1000              10000               100000
    /etc/system settings  none (default)  set hires_tick=1  set hires_tick=1    set hires_tick=1
                                                            set hires_hz=10000  set hires_hz=100000
    fit-PC (vmstat 5)     in: 201         in: 2001          in: 19831           n/a
                          us: 0           us: 0             us: 0
                          sy: 1           sy: 5             sy: 43
                          id: 99          id: 95            id: 57
    CF-30 (vmstat 5)      in: 471         in: 2278          in: 20299           in: 200307
                          us: 0           us: 0             us: 0               us: 0
                          sy: 0           sy: 1             sy: 5               sy: 21
                          id: 99          id: 99            id: 95              id: 79

Notes/Conclusions:

vmstat was left running for about a minute.  The output above shows one of the typical 5 second samples; the other samples are almost identical.

The user (us) CPU time numbers give us a reasonable idea that these systems were predominantly idle when being sampled.

The number of interrupts serviced per second is directly proportional to the clock tick rate.  And of course, the larger the number of interrupts, the more system CPU time is required.

The amount of overhead taken up by increasing the clock rate is a function of system capability.
The CF-30 not only has a much faster processor, it also has two cores to share the interrupt load.  As such it could accommodate a much higher clock tick rate.  For the fit-PC, performance is impacted profoundly even at modest clock tick rates.
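The linearity in the table suggests a simple back-of-the-envelope model: if each clock interrupt costs a roughly fixed slice of CPU time, system CPU usage grows in direct proportion to the tick rate.  The sketch below assumes a hypothetical per-interrupt cost (chosen so the model roughly matches the fit-PC's observed numbers; it is not a measured value):

```java
public class TickOverhead {
    // Model: system CPU% consumed by clock interrupts, assuming a fixed
    // (hypothetical) CPU cost per interrupt.
    static double systemCpuPercent(int interruptsPerSec, double microsPerInterrupt) {
        return interruptsPerSec * microsPerInterrupt / 1_000_000.0 * 100.0;
    }

    public static void main(String[] args) {
        double cost = 22.0;  // hypothetical microseconds of CPU per interrupt (fit-PC-ish)
        for (int rate : new int[]{100, 1_000, 10_000}) {
            System.out.printf("%6d ticks/sec -> ~%.1f%% sy%n",
                    rate, systemCpuPercent(rate, cost));
        }
    }
}
```

At ~22 µs per interrupt the model predicts about 0.2%, 2.2% and 22% system time at 100, 1,000 and 10,000 ticks/sec, which is in the same ballpark as the fit-PC's measured 1%, 5% and 43%; a slower CPU simply pays a larger fixed cost per interrupt.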


Why JavaFX is Relevant

This week marks the formal release of JavaFX 1.0.  During the interval between the early marketing blitz and now, we've heard a lot from our friends in the press and the blogosphere, and in many instances what they had to say was not very pretty.  Some think the Rich Internet Application platform battle lines are already drawn between Adobe and Microsoft, and dismiss Sun as having arrived too late to the party.  Others opine that JavaFX's underlying Java platform is so yesterday.  In fact Java is the primary reason why JavaFX will, much to the chagrin of many, receive serious consideration.  Here's why:

Java is ubiquitous.  It is the proven, de-facto platform for web-based deployment.  On the desktop, it is estimated that approximately 90% of PCs have Java installed.  In fact the major PC OEMs have seen fit to install it for you out of the box.  In the mobile world, Java is the dominant deployment platform.  Billions (that's with a 'b') of devices run Java.

The Java development community is arguably the largest on the planet.  Java gained initial widespread acclaim as a productive development environment, and continues to do so.  As JavaFX is an evolution of Java and seamlessly integrates with it, human nature tells us that individuals will naturally want to work with and leverage that which they already know and are familiar with.

Alternatives are still no match for the Java Virtual Machine.  It has been extensively studied, vetted, scrutinized, poked, prodded, abused, cloned, and optimized more than any other virtual machine in the history of computing.  And just in case you're under the impression that the Java Virtual Machine is limited only to the Java (and now JavaFX Script) programming languages, think again.  At last count there were over 200 projects integrating modern dynamic languages with the Java VM.  That list includes the usual suspects like PHP, Ruby, JavaScript, Python, and [insert your favorite language here].
The number of Java Standard Edition online updates is staggering.  We know.  We supply the downloads.  And once a desktop is upgraded, it will be able to take full advantage of the features JavaFX brings to the table, effectively trivializing the barriers to entry.

Many of our most talented folks have been working feverishly to reach this milestone.  That being said, there's still lots more work to do.  But we're off to a real nice start.  Check out http://javafx.com.  Hmm, looks like the site is a little sluggish right now.  Maybe we underestimated all the interest?


OpenSolaris on the Smallest System Yet?

One of my compadres forwarded me this link from CompuLab, an Israeli company which has created this real small, very energy-efficient PC.  It may just be the smallest full-featured system to date.  I'm a sucker for these types of things, and thought it would be interesting to get a reduced footprint version of OpenSolaris up and running on it.  Haven't gotten around to playing with wi-fi, or for that matter the graphics (as the reduced footprint version of OpenSolaris has no windowing system), but every indication points to a system that doesn't require any magic incantations to get OpenSolaris up and running.  Here are some of the specs:

    CPU: AMD Geode LX800 500MHz
    Chipset: AMD CS5536
    Display: Integrated Geode LX display controller up to 1920x1440
    Memory: 512MB DDR 333MHz soldered on-board
    Hard disk: 2.5" IDE 60GB
    Ports:
        RJ45 Ethernet port 100Mbps
        WLAN 802.11b/g 54Mbps
        3 x USB 2.0 HiSpeed 480Mbps
        mini RS-232 (cable available from CompuLab)
        VGA DB15
        Stereo line-out audio (headphones)
        Stereo line-in audio / Mic

The system has a serial port, and upon request, CompuLab will provide the required custom serial cable.  By instructing GRUB to redirect the console to the serial port, you can connect the serial cable to another system and communicate directly with the console.  To get a rough idea of how to accomplish this, check here.  The screenshots below show that from an OpenSolaris terminal window, you can use the tip(1) command to accomplish this.

So the question is, how minimal is this configuration?  Issuing 'ps -ef' shows that only 14 processes are currently running, of which I'm sure there's an opportunity to eliminate one or two if need be.  To give you an idea of how much RAM is currently consumed, here's what issuing the 'memstat' macro under mdb(1m) yields:

Who says Solaris can't fit in small places?


Fast Booting Solaris

A veteran JavaOne keynote presenter, Perrone Robotics has developed some real interesting technologies which take the concept of using autonomous (i.e. unmanned) vehicles to a whole new level.  One of their key ingredients is the MAX software platform, which utilizes common commercially available components to enable Perrone to very quickly and cost-effectively retrofit nearly any vehicle in short order.  The MAX robotics platform runs on a (roughly 4" x 6") low-power PC board atop Solaris and Sun's Java Real-Time System (Java RTS).  This combination gives Perrone the ability to leverage the huge Java development community, and assures that their critical software components behave in a predictable and deterministic fashion.

During the JavaOne 2007 conference, I was speaking with Paul Perrone about the notion of creating a minimized version of Solaris over which his platform might run.  The helicopter pictured above boots from a relatively small (4-8GB) IDE flash drive, of which standard Solaris takes up a substantial chunk.  That leaves them precious little space to collect valuable information like telemetry or terrain data.  Paul asked to revisit this idea for a future project.  That's where we left off.

Not that we've ignored them since, but it wasn't until a year later that small Solaris reared its head again.  In addition to saving space, their main interest in this environment was in seeing how much faster Solaris might boot up.  The ability to be fully functional from power-up in as short a time as possible is of critical importance.

So before investigating what advantages there might be, let's provide some background information:

Hardware

Two separate systems were used, and for argument's sake, they represent two ends of the x86 compute spectrum.

                Embedded Profile       Modern Profile
    System      iGoLogic i84810        Panasonic Toughbook CF-30 (Rev. F)
    CPU         1GHz Celeron M         Core 2 Duo L7500 1.60GHz
    RAM         512MB                  1GB
    Disk        4GB Flash IDE Drive    Solid State SATA Drive

Operating Systems

Minimal configurations were created for Solaris 10 08/07 (Update 4) and OpenSolaris Nevada build 93.  These configurations boot entirely into RAM and consume less than 100MB of ramdisk space.  With a little more effort they can be made significantly smaller.  The original blog post describing the environment is here.  You can download the framework for these hardware/OS combinations here, and can get a feel for the build environment by taking a look at this README.

Definitions

Within the context of this discussion, here are the key terms along with their meanings.

Total Boot Time: The time it takes from power-up till a user is prompted to log in.  Typically for a full Solaris installation, the windowing system must first start up and present the user with a login screen.  In a minimal Solaris environment, there is no windowing system.  Instead, the total boot time is defined as the time it takes from power-up till a user is presented with the console login: prompt.

POST Time: POST, or Power On Self Test, is the time taken by the system at pre-boot to handle things like diagnostics, BIOS and device initialization.  For this discussion, we'll define POST time as the time it takes from power-up until the user is presented with a GRUB boot menu.  We call out this segment of the total time because in many cases we are at the mercy of the PC/BIOS manufacturer and can't overly influence how quickly or slowly this proceeds.

Solaris Boot Time: The time it takes from being presented with a GRUB boot menu till a Solaris user is prompted to log in.  Again, depending upon whether a windowing system is configured or not, this may represent the time it takes to be prompted with a login screen or console login: prompt respectively.  This represents the segment of time that we can influence.
We can conclude from these definitions that:

    Total Boot Time = POST Time + Solaris Boot Time

Results

Embedded Profile: iGoLogic i84810 system

    OS                           POST Time   Solaris Boot Time   Total Boot Time
    Solaris 10 08/07             13 sec      26 sec              39 sec
    OpenSolaris Nevada Build 93  13 sec      18 sec              31 sec

Modern Profile: Panasonic Toughbook CF-30

    OS                           POST Time   Solaris Boot Time   Total Boot Time
    Solaris 10 08/07             6 sec       18 sec              24 sec
    OpenSolaris Nevada Build 93  6 sec       9 sec               15 sec

Conclusions/Notes

1. These times were taken by hand with the stopwatch feature on my Timex.  If anything, the times might be slightly shorter than actually recorded, as there is a human delay in reacting to seeing the necessary prompts.  I ran the tests several times, and the same numbers consistently appear.

2. The version of the OS appears to matter a lot, as OpenSolaris nvx_93 boots up nearly twice as fast as Solaris 10 U4 on the same hardware.

3. The type of I/O device subsystem seems to be a big factor too.  For example, by switching out the IDE flash drive with a 5400 RPM IDE hard disk, i84810 total boot time decreased by about 6 seconds for both Solaris 10 and OpenSolaris.

4. The minimal Solaris environment is currently only available in 32-bit mode.

5. With relative ease, Solaris can be configured to boot in less than 10 seconds on modern x86 hardware.  My unofficial record stands at 9 seconds (or slightly less).  No doubt it could boot faster on more robust hardware (eliminating POST time).  Any takers?
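The identity above is simple enough to check mechanically against the measured rows (a trivial sketch; the table data is from the results above):

```java
public class BootTime {
    // Total Boot Time = POST Time + Solaris Boot Time, per the definitions above.
    static int totalBootTime(int postSec, int solarisBootSec) {
        return postSec + solarisBootSec;
    }

    public static void main(String[] args) {
        // Rows from the results tables: {POST, Solaris boot, reported total}
        int[][] rows = {
            {13, 26, 39},  // i84810, Solaris 10 08/07
            {13, 18, 31},  // i84810, OpenSolaris nv93
            { 6, 18, 24},  // CF-30,  Solaris 10 08/07
            { 6,  9, 15},  // CF-30,  OpenSolaris nv93
        };
        for (int[] r : rows) {
            System.out.println(totalBootTime(r[0], r[1]) == r[2]);  // prints true, four times
        }
    }
}
```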


Real-Time Java in a Zone

As is often the case, Sun's technologies and offerings are being applied in ways which we hadn't necessarily anticipated.  Yet another example has reared its head in the government/military space, where customers have expressed interest in using Sun's Java Real-Time System with Solaris Trusted Extensions.  As it stands right now, Java RTS will neither operate nor install within the confines of such an environment.  Let's investigate why this is so, and see what current opportunities there are for working around this shortcoming.

So what is it that causes Trusted Extensions and Java RTS not to play together nicely?  It happens to revolve around Trusted Extensions' extensive usage of Solaris zones to limit access between differing security levels.  Solaris packages must be specifically configured to accommodate zones, which has yet to formally take place with Java RTS.  As zones are a core component of Solaris, we can, for the sake of simplicity, just use standard Solaris to demonstrate how we can work around this temporary limitation.  These modifications should apply to Trusted Extensions just as well.

To get Java RTS to work within a zone, follow these steps:

1. Install the Java RTS cyclic driver (only) in the global zone.

    global# pkgadd -d . SUNWrtjc

    Processing package instance <SUNWrtjc> from </cyclic/src/packages_i386>
    ## Installing package <SUNWrtjc> in global zone

    Java Real-Time System cyclic driver
    (i386) 2.1.0,REV=2008.06.13.16.10
    Copyright 2008 Sun Microsystems, Inc.  All rights reserved.
    Use is subject to license terms.
    Using </> as the package base directory.
    ## Processing package information.
    ## Processing system information.
       5 package pathnames are already properly installed.
    ## Verifying package dependencies.
    ## Verifying disk space requirements.
    ## Checking for conflicts with packages already installed.
    ## Checking for setuid/setgid programs.

    This package contains scripts which will be executed with super-user
    permission during the process of installing this package.
    Do you want to continue with the installation of <SUNWrtjc> [y,n,?] y

    Installing Java Real-Time System cyclic driver as <SUNWrtjc>

2. Create a zone called 'rtjzone':

    global# mkdir -p /zone
    bash-3.00# zonecfg -z rtjzone
    rtjzone: No such zone configured
    Use 'create' to begin configuring a new zone.
    zonecfg:rtjzone> create
    zonecfg:rtjzone> set zonepath=/zone/rtjzone
    zonecfg:rtjzone> verify
    zonecfg:rtjzone> commit
    zonecfg:rtjzone> exit

    global# zoneadm list -vc
      ID NAME             STATUS     PATH                           BRAND    IP
       0 global           running    /                              native   shared
       - rtjzone          configured /zone/rtjzone                  native   shared

    global# zoneadm -z rtjzone install
    Preparing to install zone <rtjzone>.
    Creating list of files to copy from the global zone.
    Copying <6984> files to the zone.
    Initializing zone product registry.
    Determining zone package initialization order.
    Preparing to initialize <1074> packages on the zone.
    Initialized <1074> packages on zone.
    Zone <rtjzone> is initialized.
    Installation of <1> packages was skipped.
    The file </zone/rtjzone/root/var/sadm/system/logs/install_log> contains a log of the zone installation.

3. Modify the zone to allow access to the cyclic device, and to allow additional privileges:

    global# zonecfg -z rtjzone
    zonecfg:rtjzone> set limitpriv=default,proc_priocntl,proc_lock_memory,proc_clock_highres
    zonecfg:rtjzone> add device
    zonecfg:rtjzone:device> set match=/dev/cyclic
    zonecfg:rtjzone:device> end
    zonecfg:rtjzone> verify
    zonecfg:rtjzone> commit
    zonecfg:rtjzone> exit
    global# zoneadm -z rtjzone reboot

Note: One privilege that is useful with Java RTS is sys_res_config.  This is typically used to assign a real-time process to a processor set.  Unfortunately zones cannot currently be given this privilege.  You can, however, from the global zone, assign a processor set to a zone, which might be a reasonable workaround.

4. Get a copy of the SUNWrtjv package and modify it so that it will install in a zone.
The postinstall and postremove scripts must be replaced with those provided by the hyperlinks just mentioned.

    rtjzone# cd /scratch
    rtjzone# ls
    SUNWrtjv
    postinstall
    postremove
    rtjzone# cp postinstall SUNWrtjv/install/
    rtjzone# cp postremove SUNWrtjv/install/

5. Modify the SUNWrtjv pkgmap file with the appropriate sizes, checksums and last access dates.  The source code for a sample C program, called pkgmap_info, which prints out the necessary information, can be found here.

    rtjzone# cd SUNWrtjv
    rtjzone# grep post pkgmap
    1 i postinstall 5402 42894 1194344457
    1 i postremove 2966 34854 1194344457
    rtjzone# cp pkgmap_info.c /tmp
    rtjzone# cc -o /tmp/pkgmap_info /tmp/pkgmap_info.c
    rtjzone# cd /scratch/SUNWrtjv/install/
    rtjzone# /tmp/pkgmap_info postinstall
    postinstall 5820 9488 1213727841
    rtjzone# /tmp/pkgmap_info postremove
    postremove 3092 45039 1213727538

Replace the postinstall and postremove entries in the pkgmap file with those produced by the pkgmap_info program.  You cannot simply use the example data above because the last access times will not match.  Doing so will cause the install to fail.

6. Install the Java RTS SUNWrtjv package inside the zone.

    rtjzone# cd /scratch
    rtjzone# pkgadd -d . SUNWrtjv

    Processing package instance <SUNWrtjv> from </scratch>

    Java Real-Time System runtime environment
    (i386) 1.5.0_13_Java-RTS-2.0_01-b08_RTSJ-1.0.2,REV=2007.11.06.12.47
    Copyright 2005 Sun Microsystems, Inc.  All rights reserved.
    Use is subject to license terms.

    Where should this package be installed? (/opt): /opt

    To achieve predictable temporal behavior, the Java Real-Time System
    must be granted access to a number of privileged Solaris resources.
    By default, access to these privileged resources is only granted to
    the superuser (root). They can also be granted to additional users
    by creating a rights profile, that is, a collection of authorizations,
    that can later be assigned to an administrative role or directly
    to a user.
    As part of this package installation, a local 'Java Real-Time System User'
    rights profile can be created on this machine.
    This rights profile should NOT be created if such an action conflicts
    with your computer security management policies. If unsure, contact
    your system administrator or your computer security manager.
    Also refer to the product's release notes for further details regarding
    the privileges required by the Java Real-Time System.

    Should a local 'Java Real-Time System User' rights profile be created? [y,n] (no): y

    Using </opt> as the package base directory.
    ## Processing package information.
    ## Processing system information.
    ## Verifying package dependencies.
    ## Verifying disk space requirements.
    ## Checking for conflicts with packages already installed.
    ## Checking for setuid/setgid programs.

    This package contains scripts which will be executed with super-user
    permission during the process of installing this package.

    Do you want to continue with the installation of <SUNWrtjv> [y,n,?] y

    ...

    ## Executing postinstall script.
    Creating the 'Java Real-Time System User' rights profile.

    Refer to the 'System Administration Guide: Security Services'
    documentation for further information regarding the way to assign the
    'Java Real-Time System User' rights profile to users, groups, or
    administrative roles using the Solaris Management Console, smc(1M) or
    the usermod(1M), groupmod(1M) and rolemod(1M) commands.

    Installation of <SUNWrtjv> was successful.

7. Try running a real-time Java application in the zone.

    rtjzone# /opt/SUNWrtjv/bin/java -version
    java version "1.5.0_13"
    Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_13_Java-RTS-2.0_01-b08_RTSJ-1.0.2)
    Java Real-Time System HotSpot(TM) Client VM (build 1.5.0_13-b08, mixed mode)

We hope to more formally support Solaris zones usage with Java RTS in the future.  In the interim, this workaround can help you get started.  Many thanks to Jim Clarke, who did the lion's share of the legwork to find this solution.
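The pkgmap_info helper in step 5 prints the name, size, checksum and time that pkgmap entries require.  Here's a rough Java sketch of what such a helper computes.  It assumes the cksum column uses the classic System V sum(1) algorithm (sum of all bytes with the carries folded back into 16 bits); verify against your own pkgmk/pkgmap output before relying on it, and note the class and method names are hypothetical:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class PkgmapInfo {
    // System V sum(1)-style checksum: add all bytes, then fold the carries
    // back into 16 bits.  Assumption: this matches the cksum column of a
    // pkgmap entry.
    static int sysvSum(byte[] data) {
        long sum = 0;
        for (byte b : data) {
            sum += (b & 0xff);
        }
        long folded = (sum & 0xffff) + ((sum & 0xffffffffL) >> 16);
        return (int) ((folded & 0xffff) + (folded >> 16));
    }

    public static void main(String[] args) throws Exception {
        // Print "name size cksum mtime-seconds" for each file, pkgmap-style.
        for (String name : args) {
            Path p = Paths.get(name);
            byte[] contents = Files.readAllBytes(p);
            long mtime = Files.getLastModifiedTime(p).toMillis() / 1000;
            System.out.println(name + " " + contents.length + " "
                    + sysvSum(contents) + " " + mtime);
        }
    }
}
```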

As is often the case, Sun's technologies and offerings are being applied in ways which we hadn't necessarily anticipated.  Yet another example has reared its head in the government/military...

Sun

Are Solaris RAM Disks Swappable?

As memory access is typically orders of magnitude faster than disk access, the idea of using part of RAM as an in-memory storage device was one of the earliest performance optimizations realized by the computer science community.  Even though today this type of optimization takes place transparently inside modern operating systems (via mechanisms like the disk buffer cache), there are still circumstances where manually creating a RAM disk might be quite useful.

A Solaris customer using the ramdiskadm(1M) utility to create a RAM disk device for a real-time application posed the following question: "Are RAM disks swappable?"  From a real-time perspective, a RAM disk not only yields significantly better seek performance but also provides more deterministic behavior than the traditional rotating disk platter.  The customer's concern here is that, under dire circumstances, the operating system might swap out the RAM disk memory.  Needless to say, if a swap out occurred, it would put a big crimp in the real-time application's predictability.

To get an idea of what's going on when a RAM disk is created, let's use the Solaris kstat(1M) kernel statistics utility to see how memory is being allocated.  First, let's see what memory looks like before creating a RAM disk:

# pagesize
4096
# kstat -n system_pages | grep pagesfree
        pagesfree                       96334
# kstat -n system_pages | grep pageslocked
        pageslocked                     26170

So, on this particular system, where a page is 4096 bytes, there are currently 96334 pages free and 26170 pages locked.

Now let's create a 50MB RAM disk:

# ramdiskadm -a rd 50m
/dev/ramdisk/rd
# ramdiskadm
Block Device                                                  Size  Removable
/dev/ramdisk/rd                                           52428800    Yes
# kstat -n system_pages | grep pagesfree
        pagesfree                       83507
# kstat -n system_pages | grep pageslocked
        pageslocked                     38988

Let's subtract the original number of pageslocked from the latest value and multiply by the pagesize:

# pagesize
4096
# bc
(38988-26170)*4096
52502528
^D

The increase in locked pages can be attributed to the creation of the RAM disk (50MB plus a small amount of overhead).  So yes, these pages are locked into memory.  But it would be nice to get a definitive statement on what pageslocked actually means.  According to McDougall, Mauro and Gregg's Solaris Performance and Tools: DTrace and MDB Techniques for Solaris 10 and OpenSolaris, pageslocked is the "Total number of pages locked into memory by the kernel and user processes".  Furthermore, the man page for plock(3C), a related library routine which enables programmers to lock memory segments, states that "Locked segments are immune to all routine swapping".  What's routine swapping?  Hmm.  Anybody care to shed some light on this?
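The bc arithmetic above can be scripted directly.  A minimal sketch using the sample values from this post (on a live Solaris system, the two pageslocked readings would come from kstat, e.g. `kstat -p unix:0:system_pages`):

```shell
# Sketch: compute how many bytes became locked after creating the RAM
# disk, from the two pageslocked samples shown above.
pagesize=4096
locked_before=26170   # pageslocked before ramdiskadm -a rd 50m
locked_after=38988    # pageslocked afterwards

delta_bytes=$(( (locked_after - locked_before) * pagesize ))
echo "$delta_bytes bytes locked"              # 52502528: ~50MB + overhead
echo "$(( delta_bytes / 1048576 )) MB locked"
```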


Working Around a Simple Directory Server Install Issue with Windows

While working with a customer evaluating Sun's Directory Server Enterprise Edition 6.3 for Windows, we came across a problem during very basic install and setup.  For posterity's sake, I thought it might make sense to document the issue and offer a few potential workarounds.

How to replicate

1. Install the directory server.  In this case, the downloadable image has been unzipped into the c:\tmp\dsee63\DSEE_ZIP_Distribution directory.

C:\tmp\dsee63\DSEE_ZIP_Distribution> dsee_deploy --no-inter -i /tmp/ds63
Unzipping sun-ldap-shared-l10n.zip ...
Unzipping sun-ldap-directory.zip ...
Unzipping sun-ldap-console-gui-help-l10n.zip ...
Configuring Cacao at /tmp/ds63/dsee6/cacao_2 ...
You can now start your Directory Server Instances
You can now start your Directory Proxy Server Instances

2. Create a new Directory Server instance and start the instance.

C:\tmp\dsee63\DSEE_ZIP_Distribution> cd \tmp\ds63
C:\tmp\ds63> set PATH=c:\tmp\ds63\ds6\bin;c:\tmp\ds63\dsrk6\bin;%PATH%
C:\tmp\ds63> dsadm create /tmp/instance
Choose the Directory Manager password:
Confirm the Directory Manager password:
Use 'dsadm start '/tmp/instance'' to start the instance
C:\tmp\ds63> dsadm start /tmp/instance
Waiting for Directory Server instance 'C:/tmp/instance' to start...
Directory Server instance 'C:/tmp/instance' started: pid=2144

3. Create a suffix.  This is where the installation fails:

C:\tmp\ds63> dsconf create-suffix -h localhost -p 1389 dc=example,dc=com
Enter "cn=Directory Manager" password:
Unable to bind securely on "localhost:1389".
The "create-suffix" operation failed on "localhost:1389".

The problem here is described in the Sun Java System Directory Server Enterprise Edition 6.3 Release Notes:

On Windows systems, Directory Server does not allow Start TLS by default.  This issue affects server instances on Windows systems only.  This issue is due to performance on Windows systems when Start TLS is used.  To work around this issue, consider using the -P option with the dsconf command to connect using the SSL port directly.  Alternatively, if your network connection is already secured, consider using the -e option with the dsconf command.  The option lets you connect to the standard port without requesting a secure connection.

Two Potential Workarounds

Workaround 1: Issue the 'dsconf create-suffix' command by directly connecting to the SSL port (-P 1636).

C:\tmp\ds63> dsconf create-suffix -h localhost -P 1636 dc=example,dc=com
Certificate "CN=TECRA-A1, CN=1636, CN=Directory Server, O=Sun Microsystems" presented by the server is not trusted.
Type "Y" to accept, "y" to accept just once, "n" to refuse, "d" for more details: Y
Enter "cn=Directory Manager" password:

Workaround 2: Modify the ds-start-tls-enabled attribute stored in the directory server configuration.

a. Create a file, say c:\tmp\modify.ldif, which looks like:

dn: cn=config
changetype: modify
replace: ds-start-tls-enabled
ds-start-tls-enabled: on

b. Issue an ldapmodify command something like this:

C:\tmp> ldapmodify -h localhost -p 1389 -D "cn=Directory Manager" -w password < c:\tmp\modify.ldif
modifying entry cn=config

c. Confirm the modification via the ldapsearch command:

C:\tmp> ldapsearch -b "cn=config" -h localhost -p 1389 -D "cn=Directory Manager" -w password "cn=config" ds-start-tls-enabled
version: 1
dn: cn=config
ds-start-tls-enabled: on
dn: cn=config,cn=chaining database,cn=plugins,cn=config
dn: cn=config,cn=ldbm database,cn=plugins,cn=config

d. Stop and restart the directory server instance:

C:\tmp\ds63> dsadm stop /tmp/instance
Directory Server instance 'C:/tmp/instance' stopped
C:\tmp\ds63> dsadm start /tmp/instance
Directory Server instance 'C:/tmp/instance' started: pid=3560

e. Try creating a suffix with the standard port (1389):

C:\tmp\ds63> dsconf create-suffix -h localhost -p 1389 dc=example1,dc=com
Enter "cn=Directory Manager" password:

Note: Directory Server Enterprise Edition 6.3 is supported on Windows Server 2003, but not on Windows XP.  Although not formally supported, it is possible to experiment with XP.
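If you're scripting Workaround 2 (for instance, from a Unix admin host), the modify.ldif from step (a) can be generated with a heredoc rather than hand-edited.  A small sketch; the ldapmodify line is the article's and is left commented since it needs a live server:

```shell
# Sketch: generate Workaround 2's LDIF, then apply it with ldapmodify.
cat > /tmp/modify.ldif <<'EOF'
dn: cn=config
changetype: modify
replace: ds-start-tls-enabled
ds-start-tls-enabled: on
EOF

# ldapmodify -h localhost -p 1389 -D "cn=Directory Manager" -w password < /tmp/modify.ldif
```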


Modifying and Respinning a Bootable Solaris ISO Image

As an adjunct to the previous blog post, a slightly customized boot environment capable of enabling serial console kernel debugging was required to diagnose Solaris install problems.  The post itself mentioned that a nice way to accomplish this was to set up PXE network boot via Solaris Jumpstart.  It is indeed flexible and enables one to experiment with modifications and quickly test whether they perform as expected.  The one downside to this environment is that an additional DHCP/TFTP boot server has to be configured.  To eliminate that service, you could, once the customizations are well understood, simply create a new version of the Solaris install DVD with your customizations.  Let's run through the process for a simple example.

1. Get a Solaris DVD image.  For this example, we'll use an interim build of the upcoming Solaris 10 Update 5.

2. Extract the entire contents of the DVD.

# lofiadm -a /export/iso/s10x_u5b10_dvd.iso
/dev/lofi/1
# mkdir -p /iso
# mount -r -F hsfs /dev/lofi/1 /iso
# cd /iso
# mkdir /export/modified-s10x_u5b10
# find . -depth -print | cpio -pudm /export/modified-s10x_u5b10
4516208 blocks

3. Modify the content contained in /export/modified-s10x_u5b10.  In this case, we'll change the boot/grub/menu.lst file found in this directory to look like:

#
#pragma ident "@(#)install_menu 1.1 05/09/01 SMI"
#
default=0
timeout=30
serial --unit=0 --speed=9600
terminal serial
title Solaris Serial Console ttya
    kernel /boot/multiboot kernel/unix -B install_media=cdrom,console=ttya
    module /boot/x86.miniroot

4. Issue the following magic incantation to create a bootable ISO image based on the contents of the /export/modified-s10x_u5b10 directory.

# mkisofs -R -b boot/grub/stage2_eltorito -no-emul-boot -boot-load-size 4 -R -L -r -D -U \
    -joliet-long -max-iso9660-filenames -boot-info-table -o \
    /export/iso/modified-s10x_u5b10.iso /export/modified-s10x_u5b10
...
Size of boot image is 4 sectors -> No emulation
  1.80% done, estimate finish Fri Apr 18 15:55:13 2008
  2.25% done, estimate finish Fri Apr 18 15:59:07 2008
...
 99.33% done, estimate finish Fri Apr 18 15:59:36 2008
 99.78% done, estimate finish Fri Apr 18 15:59:36 2008
Total translation table size: 2048
Total rockridge attributes bytes: 3075577
Total directory bytes: 18921472
Path table size(bytes): 148014
Max brk space used 1b24000
1112471 extents written (2172 MB)

Voila!  Acknowledgements to Tom Haynes and this blog post, which served as an excellent guide.
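If you respin images often, the incantation in step 4 can be wrapped in a function so only the output image and source tree change between runs.  A sketch with flags copied from this post — it prints the command rather than running it, since mkisofs and GRUB's stage2_eltorito must exist on the build host:

```shell
# Sketch: parameterize the mkisofs invocation from step 4.
# Prints the command; pipe to sh (or drop the echo) on a host that
# actually has mkisofs installed.
respin_cmd() {
    iso=$1; tree=$2
    echo mkisofs -R -b boot/grub/stage2_eltorito -no-emul-boot \
         -boot-load-size 4 -L -r -D -U -joliet-long \
         -max-iso9660-filenames -boot-info-table -o "$iso" "$tree"
}

respin_cmd /export/iso/modified-s10x_u5b10.iso /export/modified-s10x_u5b10
```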


Enabling Remote Console Debugging of Solaris x86 Boot/Install

Our partners do a fair amount of business supplying ruggedized Solaris-powered Panasonic Toughbook computers to their US government/military customers.  As a regular part of the product cycle, Sun usually works with both the integrators and Panasonic to assure that, as new models become available, Solaris runs on these systems properly.  Furthermore, when we can get our grubby little hands on the systems, we'll run them through our certification suite of tests and formally place them on the Solaris Hardware Compatibility List.  As an example, here's the certification report for one of the versions of the Panasonic Toughbook CF-29.

Panasonic recently introduced a new version of the Toughbook CF-30 (referred to as revision F) which tweaks some of the computer subsystems, resulting in an all-too-familiar scenario: namely, these differences cause the current version of Solaris to fail to install.  Note: Solaris is not alone here; all operating systems must continually play this cat and mouse game to support the latest hardware/firmware advances.

Our initial hypothesis led us to believe that the problem was related to the introduction of the Intel ICH8 SATA chipset.  So we called on some of our Solaris I/O experts, based out of Sun's Beijing office, to take a peek at what was going on.  As the laptop is currently in New York, we needed a way for folks halfway around the world to have access to this system.  There are lots of mechanisms available to remotely diagnose systems; what's somewhat unique here is the following: (1) the diagnosis takes place very early in the boot stage, way before any windowing or networking is set up, and (2) the system in question is a laptop, not a server, where things like Lights Out Management (LOM) service processors are non-existent.

The solution here was to utilize decades-old RS-232 technology combined with some features of the GRUB bootloader.
Here are the two requirements:

1. A serial connection must be established between the system to be diagnosed, which will be referred to henceforth as the target, and the system which accesses the target, which we'll refer to as the remote host.  Unfortunately, most laptop manufacturers have seen fit to eliminate serial port connectors in lieu of USB-to-serial converters as a replacement technology.  At the early stages of boot, USB/serial capability is not yet available, so these systems are not good candidates for this type of diagnosis.  Thankfully the target in question here, the Panasonic CF-30 Toughbook, still comes with a serial port.

2. A Jumpstart environment capable of installing Solaris on x86/x64 systems is strongly recommended.  As part of the process described below, we'll be modifying the target's GRUB environment (menu.lst).  If you chose to use a DVD boot/install environment instead, you'd need to modify and burn a new DVD for each change made to the target's boot environment.  It took a bit of time to find the right incantations in the menu.lst file to get what was needed here; continually re-burning DVDs would have been excruciating.  This exercise is left to the reader; here's a good start to understanding the Jumpstart setup process.

Here's how to set up the remote console environment:

1. A null modem cable must be physically connected between the remote host and the target.  The most common cable required will be a DB-9 female-to-female connector.  Your configuration may vary.

2. Check the BIOS of the remote host and target and make sure serial ports are enabled.

3. Running Solaris on the remote host, we'll be using the tip(1) command to access the target via serial port.  Edit the /etc/remote file and look for the hardwire: entry.  Modify it to look like this:

hardwire:\
    :dv=/dev/term/a:br#9600:el=^C^S^Q^U^D:ie=%$:oe=^D:

4. As part of setting up a Jumpstart install for the target, a series of files are created in the /tftpboot directory of the Jumpstart server.  Under /tftpboot, there should be a custom menu.lst file for each managed install, suffixed by the unique network MAC address of the system in question.  For example, the network MAC address for the CF-30 in question is 0:b:97:db:c0:97.  The related /tftpboot file for the CF-30 turns out to be /tftpboot/menu.lst.01000B97DBC097.  As your target will have a different MAC address, its menu.lst file will have a different suffix in the /tftpboot directory.  Edit that custom menu.lst file (for example, /tftpboot/menu.lst.01000B97DBC097) to look as follows:

default=0
timeout=30
serial --unit=0 --speed=9600
terminal serial
title Solaris_10 s10x_u5b10
    kernel /I86PC.Solaris_10-1/multiboot kernel/unix -B console=ttya,install_media=192.168.1.5:/export/s10x_u5b10 -vkd
    module /I86PC.Solaris_10-1/x86.miniroot

The key modifications here involve (1) inclusion of the serial --unit=0 --speed=9600 and terminal serial lines, plus (2) additional arguments added to the kernel directive.  GRUB is very fussy about the order and placement of arguments; playing around with these will likely change GRUB's behavior.

5. From the remote host, access the serial console of the target by issuing:

$ tip hardwire

6. Inside a terminal window, here's what the serial console looks like after the system has been power cycled and runs through the POST sequence.  After the miniroot is loaded, you'll be presented with an mdb prompt.  You can now issue mdb commands to diagnose.  In this scenario, you should also be able to reboot the system without any other manual intervention.  Here's what issuing the mdb commands ':c' and '$c' looks like in this environment.  From this simple trace we can ascertain that the SATA drivers were never even loaded.  Turns out this is likely a VM problem.  Here's the filed bug report.


Reduced Footprint Java SE: Bringing Java Standard Edition Down to Size

A previous blog post demonstrated how you can, with minimal effort, lessen the disk footprint of a typical Java SE 5.0 runtime environment by about a third without violating the Java Standard Edition licensing agreement.  That post focused primarily on removing optional files and compressing class library jar files.  It turns out that with a little more engineering, there is significant opportunity for further space optimization.

These additional savings involve more intimate knowledge of the inner workings of Java SE.  Sun performs this engineering work and provides a series of reduced footprint versions of Java SE, in binary form, for some of the most common embedded platforms.  They include some of these enhancements [1]:

Headless configuration - The inclusion of graphics subsystems like AWT and Swing comprises a large chunk of the Java SE footprint.  If your device has no graphics capability (i.e. is headless), why would you need to include this functionality?  Headless configurations:

  - Do not support keyboard or mouse input
  - Cannot create windows or display graphics
  - Throw a HeadlessException when a graphics API is called
  - Still support a functional Java 2D API for printing and off-screen rendering
  - Are still 100% Java SE compatible

Eliminate client or server compiler - Sun's Java SE implementations include two HotSpot compilers, tuned and designed for specific environments.  The client compiler focuses on things like fast user interactivity and quick startup, while the server compiler's priority is on optimizing large, long-lived server applications.  The Java SE VM can only use one of these compilers at a time, so removing the unused one can save considerable space.

Minimized memory consumption - Standard Java SE allocates RAM to house things like the JIT code cache and the object heap.  By default, each one of these areas (and others) can be assigned 64MB.  In an embedded platform where the total amount of RAM might only be 64MB, Java SE will simply not have enough memory to run.  Java SE Embedded, on the other hand, will automatically adapt memory usage to the amount of available RAM.  Consequently, 64MB is a reasonable amount of RAM for a large set of embedded Java applications.

Space vs. speed tradeoffs - (1) Java SE implements a thread lookup table which, in layman's terms, helps save a few instructions when switching between Java threads.  By eliminating this table, a couple of MBs of RAM can be spared from your application's working set.  (2) Java SE also sets aside an area (mmap'ed) for loading jar files into random access memory which, as was explained to me, may help performance, but may also result in duplicate copies of jar files in memory.  Removal of this area further reduces the resident set size.

An Example

Following through with the example in the last post, let's start with an unmodified version of Java SE 5.0 Update 13 for Linux/x86.  By default, the static footprint is approximately 88MB.

jimc@jalopy:/tmp> du -sk ./jre1.5.0_13/
88185   ./jre1.5.0_13/

After following the directions in the previous post, we can pare it down to roughly 60MB.

jimc@jalopy:/tmp> du -sk /tmp/jre1.5.0_13/
59358   /tmp/jre1.5.0_13/

Downloading Sun's reduced footprint version of Java SE for x86/Linux yields:

jimc@jalopy:/tmp> du -sk /tmp/jre1.5.0_10/
31003   /tmp/jre1.5.0_10/

This version of the JRE is about one-third its original size, and furthermore has been modified to use significantly less memory than the standard Java SE offerings.  Note: we are comparing slightly different updates of Java SE 1.5 (Update 13 vs. Update 10).  They are indeed not identical, but their differences should be considered, for the sake of argument, negligible.

[1] Many thanks to Bob Vandette, who, through presentation and conversation, supplied this information.  One of many sources comes from Bob's Java ONE 2007 session called Deploying Java Platform Standard Edition (Java SE Platform) in Today's Embedded Devices (TS-2602).
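The relative savings implied by the three du -sk figures above work out as follows (sizes in KB, integer percentages):

```shell
# Sketch: savings computed from the du -sk figures quoted in this post.
full=88185       # stock JRE 1.5.0_13
trimmed=59358    # after the previous post's trimming
embedded=31003   # Sun's reduced footprint JRE 1.5.0_10

echo "trimming saves $(( (full - trimmed) * 100 / full ))%"      # 32%
echo "embedded build is $(( embedded * 100 / full ))% of stock"  # 35%
```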


Reducing Your Java SE Runtime Environment Footprint (Legally)

Because the resource requirements for Java Standard Edition are by no means insignificant, developers interested in utilizing Java on smaller platforms have traditionally opted for one of the Java Micro Edition configurations.  It should therefore come as no surprise that some of the Standard Edition functionality has to be sacrificed in these constrained Java ME environments.  However, as the trend towards more capable devices continues, it becomes reasonable to once again consider the advantages of utilizing Java SE.  Unfortunately, with a static footprint that could easily exceed 100MB, Java SE may still be too large to swallow for a lot of applications.

It is duly noted that the criticism leveled at Sun for exacting too much control over the Java platform has been widespread.  Like it or not though, one benefit of Sun's initial stewardship has been that Java SE has remained a standard platform, and threats to splinter it have thus far been reasonably thwarted.  Accordingly, in order to avoid incompatibilities, there are restrictions spelled out in the Java SE licensing agreement which prohibit modification of the Java SE binary distribution.

That being said, there is a list of optional files, specified by the Java Runtime Environment's README file, which can be removed without ramification to lessen the footprint.  They include:

  - Deployment tools (e.g. Java Web Start, Java plugin)
  - IDL and RMI tools (e.g. rmid, tnameserv)
  - Security tools (e.g. policytool, keytool)
  - orbd
  - Localization (charsets.jar)

In addition, further space optimization can be achieved by compressing the class library files contained in the rt.jar file.  By default, Java SE ships this jar file uncompressed.  The tradeoff here is space vs. performance, i.e. the classloader must expend cycles to uncompress the Java classes as they are being loaded.

An Example

So let's download a sample JRE from java.sun.com and see how its footprint can be minimized.  For this example, we'll use Java SE 1.5.0 Update 13 for Linux x86.

After installation, the JRE is approximately 88MB:

jimc@jalopy:/tmp> du -sk ./jre1.5.0_13/
88185   ./jre1.5.0_13/

Here's what it looks like after removal of the optional files:

jimc@jalopy:/tmp> cd jre1.5.0_13/
jimc@jalopy:/tmp/jre1.5.0_13> /bin/rm -rf lib/charsets.jar lib/ext/sunjce_provider.jar \
    lib/ext/localedata.jar lib/ext/ldapsec.jar lib/ext/dnsns.jar bin/rmid \
    bin/rmiregistry bin/tnameserv bin/keytool bin/kinit bin/klist bin/ktab \
    bin/policytool bin/orbd bin/servertool bin/javaws lib/javaws/ lib/javaws.jar
jimc@jalopy:/tmp/jre1.5.0_13> cd ..
jimc@jalopy:/tmp> du -sk ./jre1.5.0_13/
77227   ./jre1.5.0_13/

And after rt.jar has been compressed:

jimc@jalopy:/tmp> mkdir rtjar
jimc@jalopy:/tmp> cd rtjar/
jimc@jalopy:/tmp/rtjar> jar -xf /tmp/jre1.5.0_13/lib/rt.jar
jimc@jalopy:/tmp/rtjar> zip -q -r /tmp/rt .
jimc@jalopy:/tmp/rtjar> mv /tmp/rt.zip /tmp/jre1.5.0_13/lib/rt.jar
jimc@jalopy:/tmp/rtjar> du -sk /tmp/jre1.5.0_13/
59358   /tmp/jre1.5.0_13/

Conclusion

In many cases, you can lop off about a third of the Java Runtime Environment footprint with no ill effects.  In a future post, we'll discuss how Sun has further reduced Java SE significantly, not only from the point of view of static footprint, but also from a RAM usage perspective.  For a preview, you can check out Sun's Java SE Embedded technology.
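The optional-file removal above can be driven from a list, so the long rm command becomes repeatable across JRE installs.  A sketch (the file list is this post's; the JRE root is a parameter; the rt.jar recompression still requires the jar/zip steps shown):

```shell
# Sketch: remove the optional JRE files named in the README, given a
# JRE root directory.  Removal only; rt.jar compression is separate.
trim_jre() {
    jre=$1
    for f in lib/charsets.jar lib/ext/sunjce_provider.jar \
             lib/ext/localedata.jar lib/ext/ldapsec.jar lib/ext/dnsns.jar \
             bin/rmid bin/rmiregistry bin/tnameserv bin/keytool bin/kinit \
             bin/klist bin/ktab bin/policytool bin/orbd bin/servertool \
             bin/javaws lib/javaws lib/javaws.jar; do
        rm -rf "$jre/$f"
    done
}

# Usage: trim_jre /tmp/jre1.5.0_13
```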


Good Things Come To Those Who Wait

Java RTS 2.1 EA (Early Access) marks the arrival of a commitment made some time back, namely that Sun would provide a version of the Java Real-Time System for Linux.  Perhaps to some, it was a long time in the making, but in fact there are at least two good reasons why a Linux version wasn't available until now:

  - Until recently, there was no standard Linux release/kernel which had true real-time support.  Typically the versions available were non-standard and did not constitute any considerable volume.  Mainstream Linux distributions are only now incorporating the necessary real-time underpinnings.
  - Porting the Java Real-Time System VM to additional platforms is non-trivial.

Support and testing for Java RTS 2.1 EA at this time is limited to the currently shipping SUSE Linux Enterprise Real Time 10 platform and the upcoming Red Hat Enterprise MRG 1.0 release.  It is, however, possible that other versions of Linux could run Java RTS 2.1 EA, as it utilizes the real-time POSIX programming interface.  At minimum, they would require a 2.6.21 kernel or greater and a glibc of 2.5 or greater.  In addition, the latest RT patch would also be needed.

This announcement pertains only to Linux, but of course a 2.1 EA version for both Solaris x86/x64 and Sparc will be shortly forthcoming.  In the interim, a version of Java RTS 2.0 Update 1 is readily available.  Documentation for both Java RTS 2.0 and 2.1 EA can be found here.  Regardless of platform, an evaluation version of the software is available for download at the Java RTS download page.
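A quick preflight for the Linux minimums named above (kernel >= 2.6.21, glibc >= 2.5) might look like the sketch below.  Note the version comparison relies on `sort -V`, a GNU coreutils extension, and this checks only version numbers, not the RT patch:

```shell
# Sketch: check the running kernel against the Java RTS 2.1 EA minimum.
version_ge() {   # version_ge HAVE NEED -> true if HAVE >= NEED
    [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

kernel=$(uname -r | cut -d- -f1)
if version_ge "$kernel" 2.6.21; then
    echo "kernel $kernel: OK"
else
    echo "kernel $kernel: too old for Java RTS 2.1 EA"
fi
```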


General Purpose Always Wins ... Why Not for the Real-Time Market Too?

The brief but brilliant era of computing has seen companies come and go, many relegated to the ash heap of history because they failed to heed this simple rule: in the long run, general purpose solutions will always win out over specialized proprietary ones.

As a long-time employee of Sun Microsystems, I've witnessed firsthand the effects, both positive and negative, that this law has had on my company.  Sun can attribute its initial meteoric rise to the fact that it offered a viable alternative to the popular minicomputer platform of the day.  The first Sun workstations were built from commercial off-the-shelf components, and although nearly equal in performance to the minicomputer, they were so much more affordable that they became good enough.  And of course over time, as Moore's Law might dictate, those initial deficiencies quickly dissipated, and in fact surpassed traditional minicomputer capabilities.

Somewhere along the line, Sun lost sight of this ideal and began incorporating more proprietary technology into its products.  At first the strategy appeared to be successful, as Sun was well positioned to reap huge benefits during the Internet bubble.  Meanwhile, low-cost general purpose servers were continuously improving.  When the bubble burst, they in turn became good enough alternatives to the powerful but costly Sun servers.  The resulting decline of Sun was rapid, and it's taken the better part of a decade for us to recover.  This story has been told -- and will continue to be told again and again -- for those refusing to learn this lesson.  A professor of mine once told me, "If there's anything we've learned from history, it's that we haven't learned anything from history."

When markets mature, even those where technology is paramount, economic considerations dominate.  General purpose systems scale on every dimension (unit, management, training and integration costs) whereas proprietary systems do not.  A switch to more standard components should in no way be construed as stifling innovation.  Rather, general purpose systems help create new innovation by building from common elements economically appealing in their own right, and presumably even more economically beneficial once combined. [1]

Real-time industrial controllers could be the next market ripe for disruption.  Admittedly, this is an entrenched and conservative lot.  But the economics of general purpose computing cannot be denied.  As organizations strive to further eliminate cost from their systems, revisiting the usage and deployment of industrial controllers, typically via custom proprietary PLCs, is falling under review.  Consequently, at the behest of one of the world's largest industrial corporations, Sun has partnered with iGoLogic, a systems integrator, and Axiomtek, an industrial PC board manufacturer, to create the real-time controller platform pictured below.

Among others, here are some of the compelling benefits of this platform:

  - It's based on standard x86 hardware.  The motherboards aren't much larger than a postcard, are energy efficient, and yet have PC-class processing power.  The number of competing manufacturers in this space eliminates vendor lock-in and ensures price sensitivity and further rapid innovation.
  - It runs on standard Solaris.  No custom proprietary real-time operating system is required.  One of the best kept secrets of Solaris is its real-time capabilities.
  - Real-time applications are developed using Sun's Java Real-Time System, enabling you to leverage the largest development community on the planet.  Obscure development languages and highly specialized environments are no longer needed.
  - Industrial networking protocols (e.g. PROFIBUS, EtherCAT) are easily migrated to this platform, partly because of the wealth of development tools available.
  - The system utilizes an IDE flash drive from Apacer.  In addition to eliminating the moving parts associated with a traditional disk drive, it consumes less power and makes the system more resistant to shock and vibration.  Overcoming the longevity limitations of flash memory, Apacer has done some interesting work on wear-leveling algorithms, effectively extending the lifetime of the flash device well past the expected lifetime of the industrial controller.

Let this be our first salvo fired over the proprietary industrial encampment.  We believe the opportunity is immense, but also understand that to achieve any measure of success, partnering with organizations who are truly experts in this arena is critical.  If you think you can add further value, we'd love to talk.

[1] The above paragraph was taken in bits and pieces from an email exchange with Dave Hofert, Java Embedded and Real-Time Marketing Manager.  His points were so compelling I couldn't help myself.


Real-Time Java and High Resolution Timers

Any modern x86/x64 processor worth its salt comes equipped with an Advanced Programmable Interrupt Controller, or APIC.  Among the features that an APIC provides is access to high resolution timers.  Without such capability, the default interrupt source for timer and cyclic operations has a precision on the order of 10 milliseconds -- hardly fine-grained enough for any serious real-time work.The cyclic subsystem, introduced in Solaris 8, gives Solaris the capability to support high resolution timers.  The Sun Java Real-Time System version 2.0, available for Solaris on both x86 and Sparc platforms, includes an additional package called SUNWrtjc:  Java Real-Time System cyclic driver.  This package exposes an interface to the cyclic subsystem, normally only available to the kernel.  For those already familiar with Sun's Java RTS, you've no doubt noticed that you either need to run as superuser or assign a set of fine-grained privileges to an ordinary user account. (sys_res_config, proc_priocntl, proc_lock_memory and proc_clock_highres).  The proc_clock_highres privilege gives access to those timers.Originally developed on an AMD Athlon-based PC, I recently moved a Real-Time Java project over to my Toshiba Tecra A1 laptop running the same version of Solaris and Java RTS.  With the goal of getting in a little development time during a long flight, that migration suddenly casued timings to be all wrong.  Why, might you ask, would moving an application from one seemingly identical Solaris environment to another cause this unusual behavior?  Turns out that even though the laptop, a Pentium 4M based system, has an APIC, it was never exposed by the laptop BIOS.  In this scenario, regardless what you do from a Solaris perspective, you'll never get access to the high-res timers.  This phenomenon appears to be most prevelant in laptops.  As they account for about 50% (or more?) of PCs sold today, developers have a realistic chance of running into this situation.  
You can issue this magic Solaris incantation to determine if your system has high-res timer support:

   # echo "psm_timer_reprogram::print" | mdb -k

If anything but 'apic_timer_reprogram' is displayed, your machine has no exposed APIC, and is probably unsuitable for Java RTS.  In some cases the BIOS may be configurable with regard to APIC support; in many others it is simply not available.

In the absence of an APIC, there is the potential to improve high-resolution timing by setting the following tunable in /etc/system:

   set hires_tick=1

Following a reboot, this changes the clock tick frequency from 100 to 1000 ticks per second.  The frequency can then be further tuned by setting the hires_hz tunable to the desired value:

   set hires_hz=10000

The default value is 1000 ticks per second; higher values are not officially supported.

Note that tuning your machine in this way does not come without cost.  It is likely to degrade overall performance, as the system will need to spend a larger proportion of its time handling the higher frequency of clock interrupts.[1]

[1] Thank you Christophe Lizzi for your explanation of the problem and potential workaround.
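If you'd rather not (or can't) poke at the kernel with mdb, you can get a rough feel for your platform's effective timer granularity from plain Java.  The sketch below (class name and numbers are my own, not part of Java RTS) times a series of 1 ms sleeps: on a box stuck at the default 100 Hz clock tick, a 1 ms sleep typically stretches to roughly 10 ms, while a system with working high-res timers lands much closer to the requested value.

```java
// TimerProbe: an illustrative probe of effective sleep granularity.
// A 1 ms sleep that consistently takes ~10 ms suggests the coarse
// default clock tick; ~1-2 ms suggests high-res timers are in play.
public class TimerProbe {

    /** Returns the average observed duration, in milliseconds, of a 1 ms sleep. */
    public static double averageSleepMillis(int iterations) {
        long totalNanos = 0;
        for (int i = 0; i < iterations; i++) {
            long start = System.nanoTime();
            try {
                Thread.sleep(1);                 // request a 1 ms sleep
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            totalNanos += System.nanoTime() - start;
        }
        return totalNanos / (iterations * 1_000_000.0);
    }

    public static void main(String[] args) {
        System.out.printf("average 1 ms sleep took %.2f ms%n", averageSleepMillis(50));
    }
}
```

This measures only what the JVM's timed-wait path delivers, so it is a hint rather than proof of APIC exposure, but it is a quick sanity check before digging into the BIOS.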


Sun

Real-time Java Meets Wall Street

Jim Clarke, Eric Bruno and I have been selected to present this year at JavaOne.  Our session is titled "TS-1205 The Sun Java Real-Time System Meets Wall Street".  We're scheduled to present on Wednesday, May 9 at 10:55AM - 11:55AM Pacific in the Moscone Convention Center in San Francisco.  If you plan on attending, please stop by and see us.  For those that can't make the event, we'll be sure to post the slides once we're permitted.

To get an idea of what we'll discuss, take a look at the abstract below:

ABSTRACT

Today's financial institutions use program trading systems to automatically execute split-second trades based on sophisticated algorithms that assess current market conditions.  Trade decisions and execution must occur in a timely fashion to capitalize on the market.  Missing a trade opportunity, even by a few seconds, can lead to significant losses.

With improvements in Java technology performance, firms are starting to use Java technology to implement these algorithms and thereby realize productivity gains over more-traditional C/C++ development.  However, in these time-critical systems, there is still no guarantee that at any instant the process will not be interrupted by the Java platform garbage collector.  Although trade execution occurs most of the time within an acceptable response time, the trade execution is delayed every once in a while due to garbage collection or some other system event.

Sun's implementation of the Real-Time Specification for Java (JSR 001), the Sun Java Real-Time System, enables real-time processing through techniques that protect important threads from garbage collection and other system interrupts.  This means that trading systems can confidently monitor the market and take action consistently within a calculated window of opportunity.

To demonstrate the impact of these techniques, Sun's OEM Software Systems Engineering technical team has written a demonstration of a trading system that uses real trade data.
The demonstration compares a regular Java virtual machine against the Sun Java Real-Time System.  For each run, a graph shows the difference between the actual trade price and the price at which the trade should have executed.  Running this trading system with the Sun Java Real-Time System shows that no money is lost due to garbage collection latencies.  The results will be contrasted with the same application run with the standard (non-real-time) Java virtual machine.

This presentation demonstrates a working system and proves real-time Java technology's ability to satisfy hard real-time trading requirements.
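The kind of measurement the demo performs can be sketched in a few lines of plain Java.  The class below (my own simplified illustration, not the actual TS-1205 demo code) runs a tight periodic loop under allocation pressure and records the iterations whose latency blows past a threshold; on a standard JVM, those outliers are typically garbage collection pauses.

```java
import java.util.ArrayList;
import java.util.List;

// JitterSketch: a simplified stand-in for the demo's measurement.
// Generate garbage while timing loop iterations, and flag any pass
// whose latency exceeds the threshold (usually a GC pause on a
// standard JVM; ideally absent on a real-time JVM).
public class JitterSketch {

    /** Runs timed loop passes; returns latencies (ns) exceeding thresholdNanos. */
    public static List<Long> findOutliers(int iterations, long thresholdNanos) {
        List<Long> outliers = new ArrayList<>();
        byte[] garbage;
        long last = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            garbage = new byte[64 * 1024];   // allocation pressure to provoke GC
            long now = System.nanoTime();
            long delta = now - last;
            if (delta > thresholdNanos) {
                outliers.add(delta);
            }
            last = now;
        }
        return outliers;
    }

    public static void main(String[] args) {
        List<Long> outliers = findOutliers(1_000_000, 1_000_000L); // flag pauses over 1 ms
        System.out.println("iterations delayed more than 1 ms: " + outliers.size());
    }
}
```

In a trading context, each flagged outlier corresponds to a window in which a quote could have moved against you before your order went out.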


Sun

Crosstool Environment in a Solaris Zone

Background

The task of building Java ME CDC-HI binaries and their associated cross development environments tends to be very Linux-centric.  Utilities like Crosstool, which make this process much more tolerable, also make various Linux and GNU assumptions that differ from standard Solaris.  By introducing new paths and GNU versions of applications, it is possible to mimic this environment in Solaris.  Furthermore, by creating a new zone with this environment, we can isolate these changes without affecting other Solaris system settings.

1. Create a Solaris zone called 'toolzone'

Note: This step is system specific.  For example, the hostname and IP address for 'toolzone' were predefined.  In addition, the network interface (i.e. 'set physical=iprb0') is likely to be different also.

Here's the command-line session for creating the zone:

phoenix://# mkdir /zone
phoenix://# zonecfg -z toolzone
toolzone: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:toolzone> create
zonecfg:toolzone> set zonepath=/zone/toolzone
zonecfg:toolzone> set autoboot=true
zonecfg:toolzone> add net
zonecfg:toolzone:net> set address=toolzone
zonecfg:toolzone:net> set physical=iprb0
zonecfg:toolzone:net> end
zonecfg:toolzone> info
zonepath: /zone/toolzone
autoboot: true
pool:
inherit-pkg-dir:
        dir: /lib
inherit-pkg-dir:
        dir: /platform
inherit-pkg-dir:
        dir: /sbin
inherit-pkg-dir:
        dir: /usr
net:
        address: toolzone
        physical: iprb0
zonecfg:toolzone> verify
zonecfg:toolzone> commit
zonecfg:toolzone> ^D
phoenix://# zoneadm list -vc
  ID NAME       STATUS      PATH
   0 global     running     /
   - toolzone   configured  /zone/toolzone
phoenix://# zoneadm -z toolzone install
Preparing to install zone <toolzone>.
Creating list of files to copy from the global zone.
Copying <6666> files to the zone.
Initializing zone product registry.
Determining zone package initialization order.
Preparing to initialize <945> packages on the zone.
Initialized <945> packages on zone.
Zone <toolzone> is initialized.
Installation of <2> packages was skipped.
Installation of these packages generated warnings: <SUNWcsu SUNWsogm>
The file </zone/toolzone/root/var/sadm/system/logs/install_log> contains a log of the zone installation.

2. Configure the zone.

2a. Create a zone-specific /usr/bin directory and copy the contents of the global zone's /usr/bin to this new directory.

phoenix://# mkdir -p /zone/toolzone/usr/bin
phoenix://# cd /usr/bin
phoenix://# tar cf - . | (cd /zone/toolzone/usr/bin; tar xfp -)

2b. Create a loopback mount for toolzone to this new version of /usr/bin:

phoenix://# zonecfg -z toolzone
zonecfg:toolzone> add fs
zonecfg:toolzone:fs> set dir=/usr/bin
zonecfg:toolzone:fs> set special=/zone/toolzone/usr/bin
zonecfg:toolzone:fs> set type=lofs
zonecfg:toolzone:fs> end
zonecfg:toolzone> ^D

2c. Boot and configure the newly created zone:

phoenix://# zoneadm -z toolzone boot
phoenix://# zlogin -C toolzone

2d. Change the path of the default shell to /usr/bin/bash.  This is required for crosstool to operate correctly.

toolzone://# cd /usr/bin
toolzone:bin/# mv sh sh.ORIG
toolzone:bin/# ln -s /usr/bin/bash sh

2e. Create a new user in toolzone.  For this example, we'll use 'cdc'.  This exercise is left to the user.

toolzone://# grep cdc /etc/passwd
cdc:x:600:10:CDC-HI build user:/export/home/cdc:/usr/bin/bash

2f. Create a directory called /opt/gnulinks and make sure the 'cdc' user owns it.  This directory will house versions of, and links to, GNU utilities which differ from their Solaris counterparts.

toolzone://# mkdir /opt/gnulinks
toolzone://# chown cdc:staff /opt/gnulinks
toolzone://# ls -ld /opt/gnulinks
drwxr-xr-x   2 cdc   staff   512 Jun 19 13:14 /opt/gnulinks

3. As the newly created 'cdc' user, configure the zone to build both the crosstool environment and CDC-HI.

3a. Login to the zone as the 'cdc' user.

phoenix://# zlogin -l cdc toolzone
[Connected to zone 'toolzone' pts/6]
Last login: Mon Jun 19 12:57:05 on pts/6
Sun Microsystems Inc.   SunOS 5.10      Generic January 2005
toolzone:~/$

3b. To get a zone-specific prompt, you might want to have an entry in ~cdc/.bash_profile like:

export PS1='\h:\W/\$ '

3c. Create links in /opt/gnulinks/bin which point to GNU versions of Unix utilities.  This can be accomplished by executing the gnulinks.sh script, which looks like:

#!/bin/sh
GNULINKS_DIR=/opt/gnulinks/bin
GNU_PROGS="ar as egrep grep m4 make nm objcopy objdump strings strip tar thumb"
mkdir -p ${GNULINKS_DIR}
cd ${GNULINKS_DIR}
for PROG in ${GNU_PROGS}
do
    ln -s /usr/sfw/bin/g${PROG} ${PROG}
done

toolzone:~/$ sh gnulinks.sh

3d. Modify the PATH of the 'cdc' user by putting the following line in ~cdc/.bash_profile:

export PATH=/opt/gnulinks/bin:/usr/sfw/bin:$PATH

toolzone:~/$ which tar
/opt/gnulinks/bin/tar

3e. Build and install GNU-specific utilities required for crosstool which differ from those provided by Solaris.  They include binutils, fileutils, gawk(1), patch(1) and sed(1).  A build-gnu-bin.sh script is furnished to automate this, and looks like:

#!/bin/sh
GNULINKS_DIR=/opt/gnulinks
mkdir -p ${HOME}/GNU
for PROG in $*
do
    cd ${HOME}/GNU
    PREFIX=`echo ${PROG} | sed 's/-.*//'`
    wget ftp://ftp.gnu.org/gnu/${PREFIX}/${PROG}.tar.gz
    tar xzf ${PROG}.tar.gz
    cd ${PROG}
    ./configure --prefix=${GNULINKS_DIR}
    make
    make install
done

toolzone:~/$ sh build-gnu-bin.sh fileutils-4.1 gawk-3.1.5 patch-2.5.4 sed-4.1.4

3f. Create a native version of gcc(1).  The Solaris 10 version of gcc found in /usr/sfw/bin uses the stock Solaris linker, ld(1), found in /usr/ccs/bin.  Later versions of glibc will only accept the GNU version of ld(1).

toolzone:~/$ cd ~/GNU
toolzone:GNU/$ wget ftp://ftp.gnu.org/gnu/gcc/gcc-3.4.3/gcc-3.4.3.tar.bz2
toolzone:GNU/$ bunzip2 gcc-3.4.3.tar.bz2
toolzone:GNU/$ tar -xf gcc-3.4.3.tar
toolzone:GNU/$ cd gcc-3.4.3
toolzone:gcc-3.4.3/$ ./configure --prefix=/opt/gnulinks
toolzone:gcc-3.4.3/$ make
toolzone:gcc-3.4.3/$ make install

3g. Build and install a GNU version of binutils using the aforementioned build-gnu-bin.sh script.

toolzone:~/$ sh build-gnu-bin.sh binutils-2.16

4a. As the 'cdc' user, build a cross development environment using crosstool.  These instructions are taken from http://www.kegel.com/crosstool/crosstool-0.43/doc/crosstool-howto.html#quick

toolzone:~/$ cd
toolzone:~/$ wget http://kegel.com/crosstool/crosstool-0.43.tar.gz
toolzone:~/$ tar -xzvf crosstool-0.43.tar.gz
toolzone:~/$ su -
toolzone://# mkdir /opt/crosstool
toolzone://# chown cdc:staff /opt/crosstool
toolzone://# exit
toolzone:~/$

4b. As an example, use these configuration files to build a crosstool for a Sharp Zaurus SL-5000D running OpenZaurus 3.5.1:

toolzone:~/$ cd ~/crosstool-0.43
toolzone:crosstool-0.43/$ cat oz-3.5.1.sh
#!/bin/sh
set -ex
TARBALLS_DIR=$HOME/downloads
RESULT_TOP=/opt/crosstool
export TARBALLS_DIR RESULT_TOP
GCC_LANGUAGES="c,c++"
export GCC_LANGUAGES
# Really, you should do the mkdir before running this,
# and chown /opt/crosstool to yourself so you don't need to run as root.
mkdir -p $RESULT_TOP
# Build the toolchain.  Takes a couple hours and a couple gigabytes.
eval `cat arm-softfloat.dat oz-3.5.1.dat` sh all.sh --notest
echo Done.
toolzone:crosstool-0.43/$ cat oz-3.5.1.dat
BINUTILS_DIR=binutils-2.15
GCC_DIR=gcc-3.4.2
GLIBC_DIR=glibc-2.3.2
LINUX_DIR=linux-2.4.18
GLIBCTHREADS_FILENAME=glibc-linuxthreads-2.3.2
GDB_DIR=gdb-6.4

4c. Build the toolchain.

toolzone:crosstool-0.43/$ sh oz-3.5.1.sh > OUT.oz-3.5.1 2>&1 &


Sun

Solaris Was Real-time Before Real-time Was Cool

In the financial services market, there is a general trend to move key systems close to, or even right inside, the exchanges themselves -- the idea being that the nearer you are to the source, the less network infrastructure and latency you'll experience.  With this advantage, firms can potentially take on additional transaction loads at higher transaction rates.  These systems typically use the latest Intel or AMD processors and run a commercial distribution of Linux.[1]

[1] Thank you Eric Bruno for your brief description, and for unknowingly letting me (slightly) plagiarize your comments.

Indeed, these co-located systems perform as expected almost all the time.  But there are periodic intervals where the latency increases by several orders of magnitude, the ramifications of which could be financially disastrous.  After eliminating other components, the street seems to be focusing its wrath on commercial Linux distributions and their lack of real-time capabilities.

The Linux community is actively working to include underpinnings to support real-time, but as of yet these capabilities are not part of a standard major (i.e. Red Hat, SuSE) distribution.  Instead, an alternate version of Linux with real-time extensions is offered.  These extensions are in effect separate non-standard OS releases, and have not had the soak time required by many institutions.

Back in the early 90's, I volunteered to move over to Sun's newly formed SunSoft business unit.  One of its main charters was to push the concept of running Solaris on alternate, i.e. Intel, platforms.  (Don't get me started here; can you imagine where Solaris would be right now if Sun had actually taken this initiative seriously back then?)  As part of that transition, I had the opportunity to take a Solaris Internals course, and couldn't help but notice the effort put in architecturally to address short latencies.
I still have the course notebook; it is dated September 1993.

The point is Solaris already has the real-time capabilities claimed by these add-on Linux extensions.  It is built into the operating system, has been for quite some time, is rock solid and doesn't require any additional components.  A partial list of features includes:

Real-time Scheduling Class - In order to allow equal opportunity to system resources, traditional Unix schedulers transparently change process priorities to give competing processes a fair chance.  Although well suited for timesharing systems, this is unacceptable real-time behavior.  Since its outset, Solaris, through its SVR4 roots, has included the concept of alternate scheduling classes.  It includes a real-time scheduling class, which furnishes fixed-priority process scheduling at the highest priority levels in the system.

Fine-Grained Processor Control / Processor Sets - Solaris allows threads and applications to be bound to specific individual processors.  In addition, processors within a system can be grouped together as a processor set and dedicated to real-time tasks.[2]  Here's a nice article describing processor sets.  Dated June 2001, processor sets have been a part of Solaris since release 2.6.

Interrupt Sheltering - Available since Solaris 7, this feature enables CPUs to be sheltered from unbound interrupts.  It can be used in conjunction with processor sets to shelter all CPUs in a processor set.  Note: at least one processor in the system must be kept unsheltered.

Priority Inheritance - Priority inversion occurs when a high-priority thread blocks on a resource that is held by a lower-priority thread.  A runnable thread with a priority between the high- and low-priority threads creates a priority inversion because it can receive processor resources ahead of the high-priority thread.  To avoid priority inversions with kernel synchronization primitives, Solaris employs a transient priority inheritance protocol.
The protocol enables the low-priority thread holding the resource to "inherit" the higher priority of the blocked high-priority thread.  This approach gives the blocking low-priority thread the CPU resources it needs to complete its task as soon as possible so that it can release the synchronization primitive.  Upon completion, all threads are returned to their respective priorities by the kernel.[3]

High Resolution Timers - Solaris 8 introduced the cyclic subsystem; this allows for timers of much better granularity -- in the microsecond and nanosecond range -- without burdening the system with a high interrupt rate.

Memory Locking - The paging in and out of data from disk to memory may be considered normal behavior for virtual memory systems, but it is unacceptable for real-time applications.  Solaris addresses this problem by allowing the locking down of a process' pages into memory, using the mlock(3C) or mlockall(3C) system calls.

Early Binding - By default, linking of dynamic libraries in Solaris is done on an as-needed basis.  The runtime binding for a dynamically linked function isn't determined until its first invocation.  Though flexible, this behavior can induce indeterminism and unpredictable jitter in the timing of a real-time application.  To avoid jitter, Solaris provides for early binding of dynamic libraries.  By setting the LD_BIND_NOW environment variable to "1", libraries are bound at application startup time.  Using early binding together with memory locking is a very effective approach to avoiding jitter.[4]

[2,3,4] Shameful plagiarism from Scalable Real-Time Computing in the Solaris™ Operating Environment, A Technical White Paper.  To further prove the maturity of Solaris' real-time features, this document was written in the Solaris 8 time frame.  It was copyrighted in 2000.

So why not give Solaris more consideration?  It's way more mature.
And in the unlikely event (chuckle, chuckle) that a lower-latency OS might not solve all your performance problems, I'd put my money on Solaris and DTrace over anything Linux could offer in finding the real problem.
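The early-binding point above concerns native dynamic linking, but the same "pay on first use" jitter shows up on the Java side with lazy class loading and JIT compilation.  The little sketch below is my own analogy, not a Solaris feature: it times the first use of a code path against a warmed-up use.  Warming critical paths at startup plays much the same role for a Java application as LD_BIND_NOW does for native libraries.

```java
import java.util.Random;

// FirstUseCost: illustrates first-invocation cost in Java terms.
// The first call through a code path may pay for class loading and
// JIT compilation; subsequent calls are typically far cheaper --
// the Java analogue of lazy vs. early dynamic-library binding.
public class FirstUseCost {

    /** Times a single call to Random.nextInt(), in nanoseconds. */
    public static long timeOneCall(Random r) {
        long start = System.nanoTime();
        r.nextInt();
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        Random r = new Random(42);
        long first = timeOneCall(r);   // may include class-loading/JIT cost
        long warm  = timeOneCall(r);   // typically far cheaper
        System.out.println("first call:  " + first + " ns");
        System.out.println("warmed call: " + warm + " ns");
    }
}
```

The absolute numbers vary wildly by machine and JVM; the point is the gap between the first and warmed calls, which is exactly the kind of one-off latency a real-time application cannot afford mid-run.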


Personal

The Verbal Regret Coefficient

I am convinced that scientists will some day find an explanation for my continual verbal ineptitude.  It will probably be identified as a sequence in our genome, and they'll call it something like The Verbal Regret Coefficient.  We are all born with it, and we can't escape how it influences us every day.  In advance of its discovery, I propose this definition: The Verbal Regret Coefficient is that innate ensemble of dispositions which guarantees that after uttering a number of words, you will regret having said a certain percentage of them.

My first choice for a name was Regret Coefficient, but incredibly a quick search seems to indicate that the insurance industry already uses that term.  If my theory catches on -- and you'll know that when someone dedicates a Wikipedia entry -- I don't want to have to deal with copyright and trademark infringement.  So Verbal Regret Coefficient it is.

The Verbal Regret Coefficient, or VRC, will ultimately be quantified, and like a cholesterol count, we'll all be assigned a VRC value.  Most will fall into an average range, but there will be outliers.  A high VRC manifests itself behaviorally in many ways: some high VRC'ers are purposely controversial, outspoken, brash or arrogant, while others simply fail to think before they speak.  In case you're wondering, I am one of those outliers with a dangerously high VRC.  Furthermore, I'm quite sure I don't fit into the arrogant and controversial group.  But it's not all good news for those with a low VRC.  Although low VRC'ers tend not to say anything regrettable, they tend also not to say anything of substance either.  We call these people politicians.

If the current social trend continues, we'll get away with blaming any verbal faux pas on our VRC.  There'll be VRC support groups and celebrity VRC rehabilitation clinics.
Or maybe we could avoid all this silliness and just heed the advice of Mark Twain, who once said:

It is better to keep your mouth closed and let people think you are a fool than to open it and remove all doubt.

But that is sooooo hard.


Sun

Framework to Help Create Small Footprint RAM Resident Solaris Configurations

As Sun continues to avail more of its intellectual property to the community, the advantages Sun employees have regarding access to internal resources almost disappear.  In fact, now when attempting to post questions to internal Sun mail aliases, I am oftentimes redirected to the community.  The ramifications of this change hit me square in the gut this summer.

Having stumbled upon an internal project investigating how Solaris might be minimized for embedded use, I thought an interesting offshoot of this effort might be to create a ZFS appliance.  This device would boot from flash entirely into RAM, and all state would be maintained by the ZFS volumes.  Turns out this may be a little more tricky than anticipated, and future ZFS enhancements to Solaris (ZFS boot) may make this idea moot.  Based on an OpenSolaris ZFS discussion I initiated, observers went off and wrote about this topic elsewhere, some predicting that Sun would be releasing embedded ZFS appliances.  Whoa, hold on there, not so fast.  We have no plans (at least that I know of) to do any such thing.  This was nothing more than a pet project of mine.  Serves me right for announcing that I was a Sun employee.

But there was some good that came out of this dialog.  In addition to learning the valuable lesson of being careful what you write, interest in the notion of using Solaris as an "embedded" OS was quite apparent.  As a consequence, I thought it might make sense to publish the basic framework used to create a custom Solaris miniroot.  Included below is the introduction section of the README file:

1.0 Introduction

With the advent of Solaris 10 Update 1 and its migration to the grub(5) bootloader, it becomes quite feasible and straightforward to consider creating small footprint "embedded" versions of Solaris which boot directly into RAM.  This project is based upon work done by Shudong Zhou to create a minimized Solaris for embedded use.
The doc/ directory contains some of the original documentation and scripts used to build such an environment.

It is expected that entities may want to provide further functionality and customizations to this environment.  In order to assist in this endeavor, the original work has been enhanced to utilize the Java-based ant(1) build tool.  For a further description of how a miniroot image is created, see the section on "Understanding the ant(1) build process".  You can download the framework here.

Miscellaneous notes:

This framework assumes availability of a standard Solaris distribution.  Although not confirmed, I suspect it may not be real hard to augment it for OpenSolaris.

The current framework produces a RAM-resident version of Solaris that is about 60MB in size.  Note: there is no windowing included.

These configs are system specific.  The reference implementation for the included framework is an iGoLogic i84810 motherboard.  Some key specs are:

1 GHz Celeron Processor
512 MB RAM on board
Compact Flash slot on bottom of motherboard
146mm x 104mm
Runs Solaris 10 U3 out of the box
For more info contact iGoLogic at http://www.igologic.com

Once an image is created, you'll need to set up grub(5) on your media.  Here's a pointer to a URL explaining Grub on a stick.  A copy of this URL is also included in the framework under the doc/ directory.

Here's a picture of the motherboard with a ruler included to give you a feel for its size.


Oracle
