Wednesday May 15, 2013

Series On Embedded Development (Part 6) - Power Efficiency

Let's not forget efficiency as it pertains to power...important for embedded devices, especially battery-powered ones. I mentioned avoiding idle tasks in the last installment, and the suggestions here build on that same theme (CPU conservation, and therefore power conservation):

Avoid inefficient polling - inefficient polling keeps the CPU busy and wastes power. Spend time thinking about your polling algorithm and optimize it to reduce CPU usage. Can you poll when the CPU isn't otherwise busy?

Avoid idling and blocking while waiting on asynchronous events - both of these also consume CPU. Instead of waiting, implement a listener pattern so your application is event-driven.
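A minimal sketch of the listener pattern in Java, assuming a hypothetical Sensor/SensorListener pair: the driver invokes the callback when data arrives, so no thread spins or blocks waiting for readings.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical callback interface; implemented by the application.
interface SensorListener {
    void onReading(double value);
}

// Hypothetical event source; the driver calls deliver() when data arrives,
// so the CPU can sleep between events instead of spinning in a poll loop.
class Sensor {
    private final List<SensorListener> listeners = new ArrayList<>();

    void addListener(SensorListener l) {
        listeners.add(l);
    }

    // Called by the driver/interrupt path on new data.
    void deliver(double value) {
        for (SensorListener l : listeners) {
            l.onReading(value);
        }
    }
}
```

The application registers a listener once at startup and then does nothing until an event fires.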

Provide quiet periods to allow the CPU to sleep - any time the CPU is asleep, power is being conserved.

Don't assume there are multiple CPU cores - code written to exploit multiple cores (extra threads and the synchronization between them) can place extra demands on a single-core machine. Try to code efficiently for the machine you'll actually be running on.

Tuesday Apr 16, 2013

Series On Embedded Development (Part 5) - Efficiency

In this 5th installment of Embedded Development, I'm going to talk about efficiency. Efficiency has to do with not being wasteful...not doing things which don't need to be done. It also has to do with doing things you wouldn't normally do in order to make your code more efficient.

A good example of this is moving fields to helper classes. If you have a class containing fields which aren't used often, consider moving them to a helper class which holds the sparsely-used fields. You'll save memory and CPU cycles whenever those fields aren't used; the cost is only incurred when they are. This is something you wouldn't normally do. I'm not talking about fields shared by multiple classes, which you do put into shared classes. This falls under the category of making your Java classes efficient (read: smaller, using less memory, and using less CPU in the typical case).
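Here's a sketch of the idea, with illustrative names: the rarely-used fields live in a nested helper that is only allocated on first use, so the common case pays for just one null reference.

```java
// Illustrative example: debug/error fields are rarely touched, so they
// live in a helper object that is only allocated when first needed.
class Connection {
    private final int id;       // hot field, always used
    private DebugInfo debug;    // rarely used; null until needed

    Connection(int id) {
        this.id = id;
    }

    int id() {
        return id;
    }

    // Holds the sparsely-used fields.
    private static class DebugInfo {
        long lastErrorTime;
        String lastErrorMessage;
    }

    void recordError(String msg) {
        if (debug == null) {
            debug = new DebugInfo();  // pay the allocation cost only now
        }
        debug.lastErrorTime = System.currentTimeMillis();
        debug.lastErrorMessage = msg;
    }

    String lastError() {
        return debug == null ? null : debug.lastErrorMessage;
    }
}
```

With thousands of live Connection instances and errors being rare, the per-instance saving is the size of the helper's fields minus one reference.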

Related to moving fields to helper classes is grouping like-used fields together in shared classes. If you put too few fields in too many classes, you'll end up burning CPU and memory instantiating too many objects. Put fields which are likely to be used by the same functionality in shared classes. This is especially useful when the fields are static.

Simply eliminate extra fields. This one should be obvious, but if you've removed a reference to a field, make sure the field is still needed. A good IDE can help with this.

Use smaller buffers. This is related to the cache discussion in 'Part 4 - Tunability'. When using buffers, make them small and grow them as needed. Growing your buffers will incur some CPU cycles *when they grow*, but only then. Take care not to make them too small or grow them in too-small chunks. Testing and profiling will help you find the sweet spot.
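A start-small, grow-as-needed buffer can be sketched like this; the initial size and doubling strategy here are illustrative, and are exactly the kind of numbers profiling should tune.

```java
import java.util.Arrays;

// Illustrative growable byte buffer: starts small to save memory, and
// pays the copy cost only on the (rare) growth events.
class GrowableBuffer {
    private byte[] data = new byte[32];  // small initial allocation
    private int size;

    void append(byte b) {
        if (size == data.length) {
            // Doubling keeps growth events logarithmic in total size.
            data = Arrays.copyOf(data, data.length * 2);
        }
        data[size++] = b;
    }

    int size() {
        return size;
    }

    int capacity() {
        return data.length;
    }
}
```

Growing by a fixed tiny increment instead of doubling would cause many more copies; doubling trades a little slack memory for far fewer growth events.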

Use the smallest data type you can for each field. Do you really need a long, or will an integer do just as well? Do you really need an integer, or will a short do just as well? I rarely see shorts used; most of the time I see ints.
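One caveat worth noting: the JVM may pad individual small fields out to alignment boundaries, so the most reliable savings come from arrays of primitives, where the element size is exact.

```java
// A short[] stores 2 bytes per element vs 4 for int[]: on a million
// sensor samples that's roughly 2 MB saved. (Single short fields may be
// padded by the JVM, so arrays are where the savings are most reliable.)
class Samples {
    private final short[] readings;  // half the size of an int[]

    Samples(int n) {
        readings = new short[n];
    }

    void set(int i, int value) {
        readings[i] = (short) value;  // caller guarantees value fits
    }

    int get(int i) {
        return readings[i];
    }
}
```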

Keep stack frames small. This is especially important in recursive calls.
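To illustrate why recursion matters here: a recursive traversal keeps one stack frame per element, while the equivalent loop uses constant stack, which matters on embedded JVMs configured with small thread stacks.

```java
// Same computation, very different stack behavior.
class Sum {
    // One frame per element: deep inputs risk StackOverflowError
    // on small embedded thread stacks.
    static long recursive(int[] a, int i) {
        return i == a.length ? 0 : a[i] + recursive(a, i + 1);
    }

    // Constant stack usage regardless of input size.
    static long iterative(int[] a) {
        long s = 0;
        for (int v : a) {
            s += v;
        }
        return s;
    }
}
```

When recursion is the natural fit, keeping each frame small (few locals and parameters) limits the damage; converting to iteration removes the risk entirely.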

Avoid idle tasks which prevent the CPU from sleeping or going into low-power mode.

Usually once you start thinking in terms of efficiency (size/memory, less code/CPU) rather than just functionality and performance, you'll start getting creative ideas for keeping data structures smaller. Don't forget to get creative with your data structure layouts as well!

Friday Jan 11, 2013

Series On Embedded Development (Part 4) - Tunability

Writing a tunable application means writing your application in such a way that you can change its behavior. This is different from having functionality available or not (optionality)...it's changing the behavior of your application based on various factors. There are really two types of tunability: build-time and runtime. Within runtime tunability, there are also two types: static and dynamic.

Build-time tunability is when you set certain parameters at build-time. For example, if you use a cache and set the cache size based on a constant in your Java code, you're setting the cache size at build-time. Changing a build-time tunable feature means you have to (obviously) re-build your application. If you want to allow the users of your application to change its behavior, they'll need to be able to build your application themselves.
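The cache-size example above looks like this in code (names illustrative): the size is a compile-time constant, so changing it means editing the source and rebuilding.

```java
// Build-time tunability: the cache size is baked in at compile time.
class ImageCache {
    static final int CACHE_SIZE = 64;  // change requires a rebuild

    private final Object[] entries = new Object[CACHE_SIZE];

    int capacity() {
        return entries.length;
    }
}
```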

Runtime tunability is when your application changes its behavior at runtime, so you don't have to re-build it to change its behavior. Runtime tunability can be static, set once when you run the JVM, or dynamic, changing based on certain criteria. The JVM uses runtime static tunability in several ways. One is the heap size: you can change the JVM heap size using a command-line parameter when you run the JVM. Another is garbage collection: you can change the JVM's garbage collector (GC) by specifying the GC on the command-line when starting the JVM. Both of these change the behavior of the JVM while it's running.

Runtime dynamic tunability is when your application changes its behavior based on certain criteria it detects while it's running. For example, if your application uses large caches by default, but reduces the cache size when memory gets low, you're using runtime dynamic tunability. Runtime tunability is more flexible than build-time, and runtime dynamic is even more flexible than runtime static. Runtime dynamic tunability allows your application to adapt as needed on the fly. However, it can be slower, as you have to detect the criteria that trigger changes and then change the behavior of your application appropriately. With runtime dynamic tunability, you can move your application from device to device without having to make code changes, rebuild your application, or re-start it.
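Your own application can get runtime static tunability the same way the JVM does, by reading a value once at startup. A sketch, assuming a hypothetical `cache.size` system property set on the command line with `-Dcache.size=128`:

```java
// Runtime static tunability: the cache size is read once per run from a
// system property (property name "cache.size" is illustrative), with a
// default when it isn't set. No rebuild needed to change it.
class TunableCache {
    private final Object[] entries;

    TunableCache() {
        int size = Integer.getInteger("cache.size", 64);
        entries = new Object[size];
    }

    int capacity() {
        return entries.length;
    }
}
```

Making this dynamic would mean re-checking some criterion (say, free memory) while running and resizing the cache accordingly, rather than fixing the size at startup.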

Tuesday Nov 27, 2012

Series On Embedded Development (Part 3) - Runtime Optionality

What is runtime optionality? Runtime optionality means writing and packaging your code in such a way that all of the features are available at runtime, but aren't loaded if they aren't used. The code is separate, and you can even remove the code to save persistent storage if you know the feature will not be used.

In native programming terms, it's splitting your application into separate shared libraries so you only have to load what you're using, which means it only impacts volatile memory when enabled at runtime. All the functionality is there, but if it's not used at runtime, it's not loaded. A good example of this in Java is JVMTI, the JVM Tool Interface. On smaller, embedded platforms, the JVMTI libraries may not be there. If they're not, there's no effect on the runtime as long as you don't try to use JVMTI features.

There is a trade-off between size/performance and flexibility here. Putting code in separate libraries means loading that code will take longer, and it will typically take up more persistent space. However, if the code is rarely used, you can save volatile memory by putting it in a separate library. You can also use this method in Java by putting rarely-used code into one or more separate JARs. Loading a JAR and parsing it takes CPU cycles and volatile memory. Putting all of your application's code into a single JAR means more processing for that JAR. Consider putting rarely-used code in a separate library/JAR.
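In Java, keeping the optional code out of memory is mostly a matter of never referencing its classes directly; a reflective probe lets the main application discover at runtime whether the optional JAR is present. The class name below is hypothetical.

```java
// Runtime optionality sketch: the optional feature's class is referenced
// only by name, so its JAR is never loaded unless the feature is used.
// "com.example.report.PdfExporter" is a hypothetical class in an
// optional JAR that may or may not ship with the application.
class ExportFeature {
    static boolean available() {
        try {
            Class.forName("com.example.report.PdfExporter");
            return true;
        } catch (ClassNotFoundException e) {
            return false;  // optional JAR not on the classpath; carry on
        }
    }
}
```

If `available()` returns false, the application simply hides or disables the feature; nothing from the optional JAR ever touches volatile memory.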

Thursday Nov 01, 2012

Series On Embedded Development (Part 2) - Build-Time Optionality

In this entry on embedded development, I'm going to discuss build-time optionality (BTO). BTO is the ability to subset your software at build-time so you only use what is needed. BTO typically pertains more to software providers than to developers of final products. For example, software providers ship source products, frameworks, or platforms which are used by developers to build other products.

If you provide a source product, you probably don't have to do anything to support BTO, as the developers using your source will only use the source they need to build their product. If you provide a framework, then there are some things you can do to support BTO. Say you provide a Java framework which supports audio and video. If you provide this framework in a single JAR, then developers who only want audio are forced to ship their product with the video portion of your framework even though they aren't using it. In this case, provide the framework as separate JARs...break the framework into an audio JAR and a video JAR and let the users of your framework decide which JARs to include in their product. Sometimes this is as simple as packaging, but if, for example, the video functionality depends on the audio functionality, it may require coding work to cleanly separate the two.

BTO can also work at install-time, and this is sometimes overlooked. Let's say you're building a phone application which can use Near Field Communications (NFC) if it's available on the phone, but doesn't require NFC to work. Typically you'd write one app for all phones (saving you time)...both those that have NFC and those that don't...and just use NFC if it's there. However, for better efficiency, you can detect at install-time whether the phone supports NFC and skip installing the NFC portion of your app if it doesn't. This requires that you write the app so it can run without the optional NFC code, and that you write your installer so it can detect NFC and do the right thing at install-time. Supporting install-time optionality saves persistent footprint on the phone, something your customers will appreciate, your app "neighbors" will appreciate, and you'll appreciate too.

In the next article, I'll write about runtime optionality.

Wednesday Oct 24, 2012

Series On Embedded Development (Part 1)

This is the first in a series of entries on developing applications for the embedded environment. Most of this information is relevant to any type of embedded development (and even for desktop and server too), not just Java. This information is based on a talk Hinkmond Wong and I gave at JavaOne 2012 entitled Reducing Dynamic Memory in Java Embedded Applications.

One thing to remember when developing embedded applications is that memory matters. Yes, memory matters in desktop and server environments as well, but there's just plain less of it in embedded devices. So I'm going to write about saving this precious resource as well as another precious resource, CPU cycles...and a bit about power too. CPU matters too, and again, in embedded devices, there's just plain less of it. What you'll find, no surprise, is that there's a trade-off between performance and memory. To get better performance, you need to use more memory, and to save memory, you need to use more CPU cycles.

I'll be discussing three Memory Reduction Categories:

- Optionality, both build-time and runtime. Optionality is about providing options so you can get rid of the stuff you don't need and include the stuff you do need.

- Tunability, which is about providing options so you can tune your application by trading performance for size, and vice-versa.

- Efficiency, which is about balancing size savings with performance.


Darryl Mocek

