Monday Aug 24, 2015

Moving Renewable Energy Embedded Systems Into the Cloud: Loopback, Look Ahead

NOTE: This is the fourth installment in my ongoing development of a fully cloud-connected, IoT-based Renewable Energy system. Please visit these earlier articles if you'd like to review what was discussed previously:

Welcome Back

Ever have one of those weeks where you accomplished a lot of very essential things, but none were exactly the stuff of spy novels? Last week was one of those for me, and while I have a great deal to report, you certainly won't mistake my actions for those of an international man of mystery. :) Next week promises to be fun and productive, but for now, put down the popcorn & grab your coffee, it was a bumpy week!

Rebuild and Update Node Concentrator OS/Config

As mentioned in an earlier installment, I encountered some issues with one of the sensor subcomponents I was using in my system. I'll share more details in the section below, but for now, I'll just say that I was thrilled to get the replacement part last week! In preparation for swapping out the old/bad for the new/good, I shut down the Raspberry Pi node concentrator in the powerhouse...and found it wouldn't restart. Argh...file system corruption. Even with the best quality SD cards and careful restart procedures, it's bound to happen eventually.

What followed were several good news/bad news moments. I'm pretty rigorous about keeping backups - a catastrophic loss at some point will do that to you - but I found my latest backup for that SD card was almost exactly a year old. For about three years, I've been running the Pi/node concentrator in my utility shed, through the worst summers and winters the midwestern US can dish out...and a periodic backup seemed sufficient. I lost all interim updates, but at least I had a solid starting point. Sigh.

After restoring the backup to a new SD card, I updated the OS (Raspbian) and JDK (now a sporty 8u60), reinstalled the MQTT broker, did some other miscellaneous tidying (copying/recreating a few scripts, symlinks, etc.), and generally got everything back to where it was before the failure...then performed an orderly shutdown and took a fresh backup. No more backups for another year! (Yes, I'm kidding.)  ;)

Sensor Swap, Test, Pop Champagne!

Now, about that sensor subcomponent I was replacing. After a quick and pleasant email exchange, the vendor rep offered to bench-test a replacement prior to sending it out - shout out to Tim @ Sparkfun for the EXCELLENT customer service - and after testing it at my desk and confirming all was working properly (which still seemed to be a sensible step), I swapped the one in the powerhouse for the new, proven module. After verifying the new sensor readings, I confirmed that all was indeed operating according to spec. Time for a brief, but well-deserved, celebration!

Refactor Node Concentrator Codebase

Well, THAT didn't last long. ;)

The next step in our "Journey to the Cloud" was to refactor the Java code running on the aforementioned node concentrator. You know, the code that had been running pretty smoothly for three years? It shouldn't take long, as I'd already updated it a bit at some point in there. Right?

Well, sort of. Turns out I *hadn't* replaced the underlying serial library like I thought I had. RXTX used to be a leading option for handling serial communication in Java, but in recent times, it seems to have become abandonware. Combining this with its list of (ahem) idiosyncrasies, a change was long overdue. An excellent replacement is the JSSC (Java Simple Serial Connector) library, which I had used in other, more recent development efforts. Incorporating JSSC gave me an excuse to make a few changes that better isolated the serial interface code, increasing its cohesiveness and maintainability.
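As an aside, the "isolate the serial interface" idea can be sketched roughly like this. The interface, frame format, and names below are my illustrative assumptions, not the system's actual protocol; in the real code, the link would be backed by a jssc.SerialPort rather than a lambda:

```java
// Sketch: hiding serial I/O behind a tiny interface so the JSSC-backed
// implementation can be swapped out (or faked in tests) without touching
// the parsing/processing code. The frame format here is an assumption.
import java.util.LinkedHashMap;
import java.util.Map;

interface SerialLink {
    String readLine();   // in production, backed by jssc.SerialPort reads
}

class SensorFrameParser {
    // Parses frames like "T:23.5,H:48.2,V:12.6" into name -> value pairs.
    static Map<String, Double> parse(String frame) {
        Map<String, Double> readings = new LinkedHashMap<>();
        for (String field : frame.trim().split(",")) {
            String[] kv = field.split(":");
            readings.put(kv[0], Double.parseDouble(kv[1]));
        }
        return readings;
    }
}

public class SerialDemo {
    public static void main(String[] args) {
        SerialLink fake = () -> "T:23.5,H:48.2,V:12.6"; // stand-in for the port
        Map<String, Double> r = SensorFrameParser.parse(fake.readLine());
        System.out.println(r);
    }
}
```

The payoff of this shape is exactly the maintainability win described above: the parser has no idea a serial port exists, so it can be exercised on the desk without any hardware attached.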

Next up was the WebSocket and related code. I had been guilty of a bit of over-engineering before, so I cut a wide swath through the code. The WebSocket code is now fairly lean, and there is much less of it to look at, another win.

One change I *had* made to the code at some point along the way was to add support for MQTT, publishing sensor readings to a designated MQTT broker. I updated the publisher code and verified it all still worked as intended...all good, no surprises.
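For the curious, publishing a reading mostly boils down to shaping a topic and payload and handing them to the client library. Here's a rough, hypothetical sketch of that shaping step - the topic layout and payload format are my assumptions, not the system's actual wire format, and the actual send would be delegated to an MQTT client such as Eclipse Paho's MqttClient:

```java
// Sketch: shaping a sensor reading for MQTT publication. Topic layout and
// JSON payload shape are illustrative assumptions. In a running system the
// resulting strings would be handed to an MQTT client library (e.g. Eclipse
// Paho) to publish to the broker.
public class MqttPayload {
    static String topicFor(String node, String sensor) {
        return "re/" + node + "/" + sensor;   // e.g. re/powerhouse/temperature
    }

    static String payloadFor(double value, long epochMillis) {
        return "{\"value\":" + value + ",\"ts\":" + epochMillis + "}";
    }

    public static void main(String[] args) {
        System.out.println(topicFor("powerhouse", "temperature"));
        System.out.println(payloadFor(23.5, 0L));
    }
}
```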

Finally, I turned my attention to the code changes I actually expected to make in the first place: incorporating the additional sensors' readings into the processing and communication, both serial (to/from the microcontrollers) and Cloud. With those code changes in place, the work on the node concentrator was complete...for now.  ;)

Overall, I was able to trim 30-40% of the code and increase its functionality. Win/win!

Begin Cloud Application

I've really only begun the Cloud Application itself, cloning the existing local server project (a Java EE web application), reviewing the code, and making some initial assessments. That will be the focus of next week's activities: refactoring/extending that code and laying the groundwork for it to be deployed into both Docker containers and production Oracle Java Cloud Service (JCS) environments. Stay tuned!

Onward and upward,

Just a reminder: I'll be presenting this and more in a JavaOne 2015 session entitled "Moving Renewable Energy Embedded Systems Into the Cloud", session CON10262, along with lessons learned and some other nuggets I pick up along the way. If you're there, come by and say "Hi!"

Monday Aug 17, 2015

Moving Renewable Energy Embedded Systems Into the Cloud: Activate Containment Field

NOTE: This is the third installment in my ongoing development of a fully cloud-connected, IoT-based Renewable Energy system. Please visit these earlier articles if you'd like to review what was discussed previously:

For the past few days, I've been focusing upon creating Docker containers that provide the foundations for a hosted application and its underlying database. What's that? Why Docker? I'm glad you asked!

Containers as Far as the Eye Can See

Many people hail containers in general - and Docker in particular - as being the "next big thing" in application deployment. There are good arguments for and against, and I'm not going to repeat those arguments here...but I will point out some things that make using Docker appealing for me personally:

Docker is great for development

While there are indisputable challenges for production environments (security, maturity, limited tooling, etc.), Docker generally makes setting up environments fast and fairly easy.

Repeatability/Infrastructure as Code (DevOps)

It's not only fast and easy to perform the initial environment setup; re-creating environments (or duplicating them) becomes almost trivial. Sure, it takes a bit of time to work through creating a Dockerfile (more on this later), but going from that to a running machine is mind-bogglingly fast. Even the first time. :)

Oracle embraces containers

This point will mean nothing to some and everything to others, but it's another data point: Oracle has offered WebLogic, MySQL, and Oracle Linux Docker images for some time already and recently announced upcoming support for containers in several product offerings (Cloud/Linux/Solaris).

And (too) many more reasons

I won't go into the minute details here, but please check back here or in my Twitter feed for future Docker discussions, good and bad. :)

Fabricating Containers

Three Docker concepts that are key to this discussion are those of Dockerfile, image, and container. The concepts are fairly straightforward once you understand how they relate to each other.

A Dockerfile is essentially a file containing directives you provide for Docker to use to create an image. In a Dockerfile, you can specify another image to use as the basis for this one, commands to run, and ports to expose, among other things. A Dockerfile is only necessary if you plan to modify an existing image in some way.

An image is an assembly of Operating System and application layers that can be used as a template for creating a running machine, i.e., a container. Images contain the Operating System plus any applications and configuration performed in preparation for running. This allows many containers to be created quickly and identically.

A container is one running instance of a particular image. You can run multiple instances of, for example, a MySQL database image, and each is a separate container.
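To make the Dockerfile-to-image relationship concrete, here is a minimal, purely illustrative Dockerfile - the base image, paths, and port are examples only, not anything from this project:

```dockerfile
# Illustrative only: layer an application on top of an existing base image.
# Build an image with `docker build -t myapp .`, then run containers from it.
FROM openjdk:8-jdk
# Copy the application into the image (a new layer)
COPY app.jar /opt/app/app.jar
# Port the application listens on inside the container
EXPOSE 8080
# Command run when a container is started from this image
CMD ["java", "-jar", "/opt/app/app.jar"]
```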

But back to our system! Part of the targeted Cloud Architecture must support the collection and storage of data: ongoing status of the Renewable Energy (RE) system and sensors, actual readings, etc. To accommodate this, I started with the official MySQL Docker image and Oracle's WebLogic Dockerfiles from GitHub.

These provide a great jumping-off point. Here is an overview of the steps I took to prepare them for what's in store.


The official MySQL image has most of what we need from a database container, but we need to specify a few additional parameters to lay the groundwork for our application; among other things, we must create a database! This is accomplished by executing the following command (explanation of parameters to follow):

docker run --name mymysql -d -e MYSQL_ROOT_PASSWORD=<password> -e MYSQL_DATABASE=<db_to_create> -e MYSQL_USER=<username> -e MYSQL_PASSWORD=<password> mysql:5.7.8
  • docker run : Runs a docker container
  • --name mymysql : Assigns the name 'mymysql' to the container (running instance)
  • -d : Runs the container as a daemon, in the background (non-interactive mode)
  • -e MYSQL_ROOT_PASSWORD=<password> : Assigns the specified password to MySQL's root user
  • -e MYSQL_DATABASE=<db_to_create> : Creates a database with the specified name
  • -e MYSQL_USER=<username> : Creates a non-privileged database user (e.g. application user)
  • -e MYSQL_PASSWORD=<password> : Password for MYSQL_USER specified above
  • mysql:5.7.8 : Image name:tag (version) upon which to base this container
We will create the necessary tables from our application, but creating the underlying database and the application's database user while initializing the container simplifies things greatly.
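As a sketch of what "create the necessary tables from our application" can look like, here's a hypothetical JDBC snippet. The database, table, and column names are illustrative inventions; the host name matches the alias we'll give the MySQL container via --link:

```java
// Sketch: application-side schema creation over JDBC at startup.
// All names (redb, readings, etc.) are illustrative assumptions; the host
// 'mymysql' assumes the container alias established with --link.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class SchemaInit {
    static String jdbcUrl(String host, int port, String db) {
        return "jdbc:mysql://" + host + ":" + port + "/" + db;
    }

    static final String CREATE_READINGS =
        "CREATE TABLE IF NOT EXISTS readings (" +
        " id BIGINT AUTO_INCREMENT PRIMARY KEY," +
        " sensor VARCHAR(32) NOT NULL," +
        " value DOUBLE NOT NULL," +
        " taken_at TIMESTAMP NOT NULL)";

    // Requires the MySQL JDBC driver on the classpath and a reachable database.
    static void init(String user, String password) throws Exception {
        try (Connection c = DriverManager.getConnection(
                 jdbcUrl("mymysql", 3306, "redb"), user, password);
             Statement s = c.createStatement()) {
            s.executeUpdate(CREATE_READINGS);
        }
    }

    public static void main(String[] args) {
        System.out.println(jdbcUrl("mymysql", 3306, "redb"));
    }
}
```

Because MYSQL_DATABASE and MYSQL_USER were handled at container creation, the application only ever needs DDL privileges on its own schema - one less thing to script later.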


The WebLogic setup was a bit more complicated than for MySQL, although part of that was due to changes I made to the original configuration in GitHub, updating the various files to point to WebLogic Server 12.1.3 update 3 and JDK 8 update 51 (pull request merged, thanks Bruno!).

Here are the key steps I followed (excluding those necessary to make changes to the above files, which won't be repeated):

  1. "git clone" the oracle/docker repository from GitHub
  2. Download the current JDK and WebLogic .zip distribution, placing them in the OracleWebLogic/dockerfiles/12.1.3 directory within the cloned repo, to be used by the build script
  3. From the OracleWebLogic/dockerfiles directory, run ./ -d to create a basic WebLogic Server (WLS) image
  4. Edit OracleWebLogic/samples/12c-domain/container_scripts/, uncommenting and providing appropriate values in the section to create a JDBC connection (in our case, for our MySQL database/schema)
  5. From the OracleWebLogic/samples/12c-domain/ directory, run docker build -t weblogicdomain . to create a new image derived from the basic WLS image above, this time with a created domain and some resources (JMS, JDBC)
  6. Finally, execute the following command to create and run a WebLogic container and link it to the MySQL container we've started:
docker run --name wlsdomain --link mymysql:mymysql -d -p 8001:8001 weblogicdomain:latest
  • docker run : Runs a docker container
  • --name wlsdomain : Assigns the name 'wlsdomain' to the container (running instance)
  • --link mymysql:mymysql : Links this container to the container named 'mymysql', identifying it within this container as 'mymysql'
  • -d : Runs the container as a daemon, in the background (non-interactive mode)
  • -p 8001:8001 : Publishes port 8001 from the container to port 8001 on the host
  • weblogicdomain:latest : Image name:tag (version) upon which to base this container
  • : Run this shell script when container starts

Status Update

Both containers are running smoothly and thus far, all appears to be working properly. The true test will be when I drop real code in there, create tables and (of course), let the application do its thing. Spoiler alert: that's coming up next. :)

Next Week's Episode

Thanks for reading! Tune in next week as we do a significant refactor/rewrite of the local enterprise Java application and deploy it (well, version 0.1) to its new home. Should be fun!

Onward and upward,

Just a reminder: I'll be presenting this and more in a JavaOne 2015 session entitled "Moving Renewable Energy Embedded Systems Into the Cloud", session CON10262, along with lessons learned and some other nuggets I pick up along the way. If you're there, come by and say "Hi!"

Sunday Aug 09, 2015

Moving Renewable Energy Embedded Systems Into the Cloud: The Devil is in the Details, er, Devices

NOTE: This is the second installment in my ongoing (re-)development of a fully cloud-connected, IoT-based Renewable Energy system. Please click here to read the initial article for more information about the overall vision, "legacy" system, and proposed architecture. Please keep in mind that these things may change to accommodate new requirements, constraints, etc. Agile development depends upon agility.  :)

Over the past week, my efforts on the system have focused primarily upon expanding the device/sensor capabilities in the IoT node in our utility shed (affectionately referred to as the powerhouse), so it's been a very hands-on week. While software architecture and development are my life - very happily so - I really enjoy building interfaces between the virtual and real worlds. There is something intensely gratifying about soldering and assembling components, installing and connecting actuators, and seeing it all *work*. Let's face it, sometimes you just have to interact with the physical world. ;)

Hardware in the Powerhouse Node

As depicted in the first post's Cloud Architecture diagram, there are a lot of things that could be going on in the shed/powerhouse at any point in time. To better understand this, let's take a deeper look at the hardware present.

Providing power to the IoT node are three photovoltaic (PV) panels and a small wind turbine. The solar panels are routed through a charge controller to ensure power feeds only one way (into the batteries); a blocking diode provides similar protection for the wind turbine, guarding against battery discharge.

An array of 12V deep cycle batteries acts as storage for energy generated by the aforementioned power sources; devices are connected to a wiring block from the batteries, which are wired in parallel to increase system storage capacity while maintaining voltage.

Sensors previously (and still) active in the system include temperature & humidity (environmental) and amperage & voltage (power). Amps/volts readings were monitored by wiring the sensor in series with the circuit (feeding the Arduino itself), due to the requirement that there be a load on the circuit being monitored. FUN FACT: Without some kind of load in the circuit, it's a short...and you can (very literally) smoke a power sensor that way. Trust me. Or if you don't, order a spare. ;)
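For illustration only, turning a series (shunt-style) current sensor's ADC reading into amps is just Ohm's law. The reference voltage, ADC resolution, and shunt resistance below are assumed values for the sketch, not the actual hardware's characteristics:

```java
// Illustrative only: converting an ADC count from a series current sensor
// into amps. vRef, adcMax, and shuntOhms are assumptions, not the real
// hardware's values.
public class ShuntCurrent {
    static double amps(int adcCounts, double vRef, int adcMax, double shuntOhms) {
        double volts = adcCounts * vRef / adcMax;  // voltage across the shunt
        return volts / shuntOhms;                  // Ohm's law: I = V / R
    }

    public static void main(String[] args) {
        System.out.println(amps(512, 5.0, 1023, 0.1));
    }
}
```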

Outputs for the "legacy" system include an LED light that indicated "system normal" condition and one of the two following power outputs, depending upon season:
  • A fan, used to circulate air over the devices during warmer months of the year
  • A heater, used to maintain a reasonable temperature within the powerhouse during colder months
The system has two modes - Automatic and Manual (override) - which will be re-implemented in the new system as well. In automatic mode currently, when temperatures within the powerhouse are fairly moderate (I chose a range of 1-31 degrees Celsius/34-88 degrees Fahrenheit, exclusive), the status light is enabled and the fan/heater (whichever applies at that time of year) is turned off. When temps break out of the defined moderate range, the status light is extinguished and power is fed to the appropriate output to provide cool air or heat to the components. When the temperature again falls within the acceptable range, the actions are reversed: status light on, fan/heater off. When the system is placed into Manual mode, any combination of settings can be selected.
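The automatic-mode rules above are simple enough to sketch directly. The class and method names here are mine, not the system's; only the 1-31 degrees Celsius (exclusive) band comes from the description above:

```java
// Sketch of the automatic-mode decision: inside the moderate band the status
// light is on and the seasonal output (fan or heater) is off; outside it,
// the light goes off and the output is powered. Names are illustrative.
public class ClimateControl {
    static final double LOW_C = 1.0, HIGH_C = 31.0;

    static boolean statusLightOn(double tempC) {
        return tempC > LOW_C && tempC < HIGH_C;  // exclusive bounds
    }

    static boolean outputPowered(double tempC) {
        return !statusLightOn(tempC);            // fan in summer, heater in winter
    }

    public static void main(String[] args) {
        System.out.println(statusLightOn(20.0));
        System.out.println(outputPowered(35.0));
    }
}
```

In practice, logic like this often gains a little hysteresis around the band edges so the outputs don't chatter when the temperature hovers near a threshold.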

Over the past 7-10 days, I've installed two automotive (2-wire) linear actuators on the windows in opposite sides of the shed and connected them to an 8-channel relay, putting to use four of the eight channels to open and close the windows as desired. I've also incorporated a weather subsystem that includes the following measurement/feedback capabilities:
  • Wind direction
  • Windspeed
  • Rainfall
  • Barometric pressure
  • Relative humidity (original legacy system source also remains for time being)
  • Temperature (two additional sources in addition to original system source)
  • Luminosity (ambient light level)
  • Reading status light (provides visual feedback when each new reading is taken)

Status Update

After some system re-configuration and device installation, all output devices (currently lights, fans, and actuators) are working flawlessly. All sensors are providing data, but readings from the new humidity sensor (humidity and temperature) are returning errant values. Since the other new weather components integrated with the humidity sensor are providing verifiably accurate data, there's a good chance it's due to a faulty sub-component; I've contacted the vendor for assistance and will provide an update here once we isolate and resolve the issue.

Having a Raspberry Pi as the IoT node concentrator/gateway provides more than just a very able platform for nearly anything that may be required for processing and communication of sensor data; it also allows us to "maintain a visual" of the powerhouse's power center. Onboard the Pi is a camera that provides a near real-time video feed so we can see firsthand what is happening as the system responds automatically to readings or to commands we send it when in Manual (override) mode.

A Sneak Peek

Below are some photos of the power panel and "brains" of the powerhouse IoT node, along with a couple shots of the actuators that open and close the windows. Just a hint of things to come. :)

Thanks for reading! Tune in next week as we take the first steps toward configuring a containerized cloud environment to serve as the foundation for the JCS/application server portion of the cloud architecture(s). Here comes the GOOD STUFF. B)

Onward and upward!

Just a reminder: I'll be presenting this and more in a JavaOne 2015 session titled "Moving Renewable Energy Embedded Systems Into the Cloud", session CON10262, along with lessons learned and some other nuggets I pick up along the way. If you're there, come by and say "Hi!"

Sunday Aug 02, 2015

Moving Renewable Energy Embedded Systems Into the Cloud: In the Beginning...

"It was a dark and stormy night." No, not that kind of beginning. Although (spoiler alert), there are clouds involved.  :)

I've been running a Renewable Energy (RE) system at my house for the past three years. It's comparatively small, providing power only to our 12'x16', two-story utility shed in a corner of our backyard. Although there are electrical connections running between house and shed, the RE system provides nearly all power consumed outside of our house...and intentionally so.

I wanted to get a better understanding of renewable energy systems as a whole, and I also wanted to be able to monitor everything (at all times!) and perhaps even make adjustments to system settings from the comfort of our home. No single system accomplished all I had in I wrote one.

Initially, Dr. Jose Pereda and I teamed up to write parallel systems - one in the US, one in Spain - and we used Java SE Embedded and Java EE (running on Raspberry Pis, no less!) to provide the backbone(s) of the systems. This was a fun and instructive exercise, as it encouraged us to work efficiently and leverage all we could within each platform without adding unnecessary baggage. And the system has proven its robustness, having run for over three years with impressive uptime and (lack of) maintenance. But systems evolve, and the ubiquity, reliability, and compelling (low!) price of the cloud opens up a great many possibilities for this system that I've been eager to explore.

Here is the existing architecture for my RE system:

Existing architecture

Click here for full-size diagram of Existing Architecture

And here is the target architecture I'll be implementing over the next few weeks:

Click here for full-size diagram of Cloud Architecture

As you can see, I'm refactoring and expanding a bit. And while I'm doing so, I'll be writing about it here. Once everything is finished, I'll be presenting this and more in a JavaOne 2015 session titled "Moving Renewable Energy Embedded Systems Into the Cloud", session CON10262, along with lessons learned and some other nuggets I pick up along the way. I'm agilely following agile principles (get it?), and DevOps principles and tools have a role to play here as well. So I'll be building as needed, deploying continuously (hardware and software), and reviewing and revising frequently. Think of it as a cloud- and IoT-centered microproject. :)

Looking forward to sharing it all with you here over the next few weeks.

Onward and upward!

Tuesday Oct 09, 2012

Seven Random Thoughts on JavaOne

As most people reading this blog may know, last week was JavaOne. There are a lot of summary/recap articles popping up now, and while I didn't want to just "add to the pile", I did want to share a few observations. Disclaimer: I am an Oracle employee, but most of these observations are either externally verifiable or based upon a collection of opinions from Oracle and non-Oracle attendees alike. Anyway, here are a few take-aways:

  1. The Java ecosystem is alive and well, with a breadth and depth that is impossible to adequately describe in a short post...or a long post, for that matter. If there is any one area within the Java language or JVM that you would like to - or need to - know more about, it's well-represented at J1.
  2. While there are several IDEs that are used to great effect by the developer community, NetBeans is on a roll. I lost count how many sessions mentioned or used NetBeans, but it was by far the dominant IDE in use at J1. As a recent re-convert to NetBeans, I wasn't surprised others liked it so well, only how many.
  3. OpenJDK, OpenJFX, etc. Many developers were understandably concerned with the change of sponsorship/leadership when Java creator and longtime steward Sun Microsystems was acquired by Oracle. The read I got from attendees regarding Oracle's stewardship was almost universally positive, and the push for "openness" is deep and wide within the current Java environs. Few would probably have imagined it to be this good, this soon. Someone observed that "Larry (Ellison) is competitive, and he wants to be the if he wants to have a community, it will be the best community on the planet." Like any company, Oracle is bound to make missteps, but leadership seems to be striking an excellent balance between embracing open efforts and innovating in competitive paid offerings.
  4. JavaFX (2.x) isn't perfect or comprehensive, but a great many people (myself included) see great potential, are developing for it, and are really excited about where it is and where it may be headed. This is another part of the Java ecosystem that has impressive depth for being so new (JavaFX 1.x aside). If you haven't kicked the tires yet, give it a try! You'll be surprised at how capable and versatile it is, and you'll probably catch yourself smiling while coding again.  :-)
  5. Java EE is everywhere. Not exactly a newsflash, but there is a lot of buzz around EE still/again/anew. Sessions ranged from updated component specs/technologies to WebSockets/HTML5, from frameworks to profiles and application servers. Programming "server-side" Java isn't confined to the server (as you no doubt realize), and if you still consider Java EE a cumbersome beast, you clearly haven't been using the last couple of versions. Download GlassFish or the WebLogic Zip distro (or another Java EE 6 implementation) and treat yourself.
  6. JavaOne is not inexpensive, but to paraphrase an old saying, "If you think that's expensive, you should try ignorance." :-) I suppose it's possible to attend J1 and learn nothing, but you'd have to really work at it! Attending even a single session is bound to expand your horizons and make you approach your code, your problem domain, differently...even if it's a session about something you already know quite well. The various presenters offer vastly different perspectives and challenge you to re-think your own approach(es).
  7. And finally, if you think the scheduled sessions are great - and make no mistake, most are clearly outstanding - wait until you see what you pick up from what I like to call the "hallway sessions". Between the presentations, people freely mingle in the hallways, go to lunch and dinner together, and talk. And talk. And talk. Ideas flow freely, sparking other ideas and the "crowdsourcing" of knowledge in a way that is hard to imagine outside of a conference of this magnitude. Consider this the "GO" part of a "BOGO" (Buy One, Get One) offer: you buy the ticket to the "structured" part of JavaOne and get the hallway sessions at no additional charge. They're really that good.

If you weren't able to make it to JavaOne this year, you can still watch/listen to the sessions online by visiting the JavaOne course catalog and clicking the media link(s) in the right column - another demonstration of Oracle's commitment to the Java community. But make plans to be there next year to get the full benefit! You'll be glad you did.


All the best,

P.S. - I didn't mention several other exciting developments in areas like the embedded space and the "internet of things" (M2M), robotics, optimization, and the cloud (among others), but I think you get the idea. JavaOne == brainExpansion;  Hope to see you there next year!


The Java Jungle addresses all things Java (of course!) and any other useful and interesting tools & platforms that help us GET IT DONE. "Artists ship," after all. :)

Your Java Jungle guide is Mark Heckler, a Software Architect/Engineer and Oracle Developer Evangelist with development experience in numerous environments. Mark's current pursuits and passions all revolve around Java, the Cloud, & the IoT, and leave little time to blog or tweet - but somehow, he finds time to do both anyway.

Mark lives with his very understanding wife in the St. Louis, MO area.
