Monday Jun 02, 2014

Tweaking Hudson memory usage

Hudson 3.1 includes performance optimizations that greatly reduce its memory footprint. Prior to this, Hudson always held the entire data model (all jobs and all builds) in memory, which limited scalability. Some installations configured heap sizes in excess of 1GB to counteract this. Hudson 3.1.x maintains an MRU cache and loads jobs and builds only as they are required. Because existing APIs could not be changed while remaining backward compatible with plugins, there were limits to how far we could go with this approach.

Memory optimizations almost always come with a related cost; in this case it is the additional I/O that has to be performed to load data on request. On a small site with frequent traffic this is usually not noticeable, since the MRU cache will usually hold on to all the data. A large site with infrequent traffic might experience some delays when the first request hits the server after a long gap. If you have a large heap and are able to allocate more memory, the cache settings can be adjusted to take advantage of this, and even to restore pre-3.1 behavior.

All the cache settings can be passed as options to the JVM container (Tomcat or the default Jetty container) using the -D option. There are two caches, independent of each other: one for Jobs and the other for Builds.

For the jobs cache:

hudson.jobs.cache.evict_in_seconds ( default=60 )

Seconds since last access (which could be from a servlet request or a background cron thread) after which a job is purged from the cache. Set this to 0 to never purge based on time.

hudson.jobs.cache.initial_capacity ( default=1024 )

Initial number of jobs the cache can accommodate. Setting this to the number of jobs you typically display on your Hudson landing page or home page will speed up consecutive access to that page. If the default is too large you may consider downsizing and using that memory for the Builds cache instead.

hudson.jobs.cache.max_entries ( default=1024 )

Maximum number of jobs in the cache. The default is large enough for most installations, but if you always see I/O activity when accessing the Hudson home page you might consider increasing this. First verify whether the I/O is caused by frequent eviction (see above) rather than by the cache not being large enough.
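The interplay of initial capacity, maximum entries, and recency-based eviction can be illustrated with a small sketch. This is not Hudson's actual cache implementation, just a minimal LinkedHashMap-based MRU cache showing the eviction semantics that the max_entries setting controls:

```java
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Illustrative sketch (not Hudson's code) of an MRU cache with an
 * initial capacity and a hard upper bound on the number of entries.
 */
class MruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    MruCache(int initialCapacity, int maxEntries) {
        // accessOrder=true: iteration order is least-recently-accessed first
        super(initialCapacity, 0.75f, true);
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Evict the least-recently-used entry once the bound is exceeded
        return size() > maxEntries;
    }
}
```

A get() counts as an access, so a job that is read frequently (for example by the landing page) stays in the cache while colder entries are evicted first.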

For the builds cache:

The builds cache is used to store Build objects as they are read from storage. Typically this happens when a user drills down into the details of a particular Job from the Hudson home page. The cache is shared among builds of different jobs: in most installations not all jobs are accessed with the same frequency, so a per-job builds cache would waste memory.

hudson.job.builds.cache.evict_in_seconds ( default=60 )

Same as the equivalent Job cache, applied to Build.

hudson.job.builds.cache.initial_capacity ( default=512 )

Same as the equivalent Job cache setting. Note the smaller initial size. If your site stores a large number of builds and frequently accesses many of them, you might consider bumping this up.

hudson.job.builds.cache.max_entries ( default=10240 )

The default max is large enough for most installations. The builds cache holds larger objects than the jobs cache, so be careful about increasing its upper limit. See the section on monitoring below.

Sample usage (the values here are illustrative):

java -Dhudson.job.builds.cache.evict_in_seconds=300 \
     -Dhudson.job.builds.cache.max_entries=20480 \
     -jar hudson-war-3.1.2-SNAPSHOT.war
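If Hudson runs inside Tomcat rather than the bundled Jetty, the same -D options can be supplied through Tomcat's CATALINA_OPTS environment variable; the values here are again illustrative, not recommendations:

```shell
# In setenv.sh (or equivalent): pass the builds cache settings to Tomcat's JVM
export CATALINA_OPTS="$CATALINA_OPTS \
  -Dhudson.job.builds.cache.evict_in_seconds=300 \
  -Dhudson.job.builds.cache.max_entries=20480"
```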

Monitoring cache usage

The 'jmap' tool that comes with the JDK can be used to monitor cache performance indirectly, by looking at the number of Job and Build objects in each cache. Find the PID of the Hudson instance and run:
$ jmap -histo:live <pid> | grep 'hudson.model.*Lazy.*Key$'
Here's a sample output:
 num     #instances         #bytes  class name
 523:            28            896  hudson.model.RunMap$LazyRunValue$Key
1200:             3             96  hudson.model.LazyTopLevelItem$Key

These are the keys to the Jobs (LazyTopLevelItem$Key) and Builds (RunMap$LazyRunValue$Key) in the caches, so counting the keys is a good indicator of the number of items in each cache at any given moment. The sizes in bytes can be ignored: they are the sizes of the keys, not of the objects they point to; those sizes can only be obtained with a profiler. From the output above we can conclude that there are 3 jobs and 28 builds in memory. The 28 builds could all belong to 1 job or be spread across all 3. Over time on an idle system these entries should get evicted and the caches should empty out, but in practice, because of background cron threads and triggers, the job count rarely falls to zero. Access to a job or a build by a cron thread resets its eviction timer.
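To track the totals over time, the histogram can be reduced with a small pipeline; the second column of jmap's output is the instance count, so summing it gives the current cache population. This is a sketch that reads the histogram on stdin:

```shell
#!/bin/sh
# cache-count.sh: report cached Job/Build key counts from a jmap histogram.
# Feed it with: jmap -histo:live <pid> | ./cache-count.sh
grep 'hudson.model.*Lazy.*Key$' \
  | awk '{ print $4, $2; total += $2 } END { print "total", total }'
```

Running this against the sample output above would report 3 LazyTopLevelItem keys, 28 RunMap keys, and a total of 31.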

Wednesday Jul 31, 2013

Deployment Profiles explained

A Deployment Profile is a data structure in JDeveloper that describes how to put together the contents of a Project or Application for publishing to a remote target. The target can be the file system, an Application Server, a Mobile device, or anything else that JDeveloper can interface with.

The most common type of Deployment Profile is a JAR Profile, which lets you simply zip up all the contents of the project into a .jar file. A JAR Profile consists of one or more FileGroups, similar to Ant filesets but at a higher level of abstraction. A FileGroup has Contributors, which act as sources of files, and filter patterns that select from them. Finally, it has a target location within the jar where the selected files should end up.

For example, a "Project Source" contributor would provide all source files, a pattern of "Include **" would select all the files and a target of "src" would put all the source files under the "/src/" directory within the .jar archive.
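Conceptually, a FileGroup maps filtered contributor files to paths inside the archive. The sketch below uses the JDK's PathMatcher to mimic that selection; it illustrates the concept only, and is not JDeveloper's FileGroup API:

```java
import java.nio.file.FileSystems;
import java.nio.file.Path;
import java.nio.file.PathMatcher;
import java.nio.file.Paths;

// Conceptual sketch of a FileGroup: a contributor's file is matched against
// an include pattern, and matches are placed under a target directory.
class FileGroupSketch {
    /** Returns the archive entry for a source file, or null if filtered out. */
    static String targetEntry(Path source, String includeGlob, String target) {
        PathMatcher matcher =
            FileSystems.getDefault().getPathMatcher("glob:" + includeGlob);
        return matcher.matches(source) ? target + "/" + source : null;
    }
}
```

With the "Project Source" example above, `targetEntry(Paths.get("pkg/Foo.java"), "**", "src")` yields `src/pkg/Foo.java`.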

Similarly, a "Project Output" contributor provides all the compiled .class files. Arbitrary directories can also be added as contributors, but this is usually not a good idea; it's better to express the dependency on that directory in some other form and have one of the standard, pre-defined contributors bring it in.

So far we've been referring to a specific type of FileGroup called a Packaging file group. Another useful FileGroup that can be included in a JAR Profile is the Library file group. This file group gathers all the Libraries of a project and adds each one as a contributor source. Based on the definition of each library, it also makes some choices about which ones to omit; for example, libraries that are not marked 'Deployed by Default' are not added. Users have the option to override the default. Just like the Packaging file group, the Library file group lets you select the actual files from a library using patterns; by default all files from a given library are added (in JDeveloper each Library can have several jar files).

All these actions can be done via the UI or through the API. A default "JAR Profile" is an instance of the ArchiveProfile class. A ProfileFactory is provided as a facade to create any type of profile.

import oracle.ide.Context;
import oracle.jdeveloper.deploy.ProfileFactory;
import oracle.jdeveloper.deploy.jar.ArchiveProfile;

ProfileFactory factory = ProfileFactory.getInstance();

// Create a context for the profile.
// Here we have a pre-existing project.
Context context = new Context( project );

// Create a JAR profile with a single packaging file group and a single
// library file group
ArchiveProfile profile = factory.create( "name1", ArchiveProfile.class, context );

Next we'll see how to deploy this profile to disk.

Monday Mar 04, 2013

Integrating JDeveloper into build systems with OJServer

OJServer is a headless instance of JDeveloper.

You might ask what good is an IDE without a UI. Well if you think about it, an IDE is a collection of tools within a UI, but sometimes you want to use a tool without spending too much time interacting with its UI elements. Or you may want to use the same tool on multiple projects or files which would be cumbersome to open and close within an IDE.

Typically, most tools used within an IDE (like the java compiler) can also be run outside of it independently, but you may already have configured your IDE environment so that the compiler is launched with the correct options, and your preferences and settings may be configured to your liking. Although these can be duplicated, copied, or even referenced in some cases, it's not quite the same context. Some tools, like the Refactoring tool, may not have a command-line equivalent, but you may still want to use them in your headless build environment.

OJServer can help integrate the capabilities of JDeveloper with other build systems. OJServer starts JDeveloper in headless mode and also starts up an RMI listener that listens for service requests. Each service is a distinct unit of work that can be triggered by a remote client. The service gets a JDeveloper context and can execute any arbitrary API available within the IDE (excluding View APIs of course).

By default, OJServer comes with two services pre-defined, a simple "Ping" service and a "Deploy" service. Here's how the Ping service is written:

Ping Service

package oracle.example.ojserver;

import java.util.Date;

import oracle.jdeveloper.ojserver.spi.Server;
import oracle.jdeveloper.ojserver.spi.Service;
import oracle.jdeveloper.ojserver.spi.ServiceContext;

public class PingService implements Service {

    public void start(Server server) {
        // Setup code goes here...
        server.getLogger().info("Ping service started");
    }

    public void execute(ServiceContext serviceContext) {
        String greeting = (String) serviceContext.getProperty("greeting");
        serviceContext.getServer().getLogger().info("Server pinged! Text is " + greeting);
        serviceContext.setResult(new Date());
    }

    public void stop(Server server) {
        // Release resources..
        server.getLogger().info("Ping service stopped");
    }
}

Create a JDeveloper extension with this service and plug it in using a trigger hook.

Registering the Service


 <!-- The hook's namespace and the service element's remaining attributes were
      elided in the original; the class attribute below is an assumption. -->
 <ojserver-hook xmlns="">
   <service name="ping"
            class="oracle.example.ojserver.PingService"/>
 </ojserver-hook>


The Ping client

package oracle.example.client;

import java.net.MalformedURLException;
import java.rmi.NotBoundException;
import java.rmi.RemoteException;

import oracle.jdeveloper.ojserver.OjClient;
import oracle.jdeveloper.ojserver.rmi.DefaultClientContext;

public class OJPing {
    public static void main(String[] args) {
        // Accept greeting from args
        if ( args.length == 0 ) {
            printUsage();
            return;
        }
        // Optionally also accept OJServer host and port from args
        // For now, we assume defaults.
        String greeting = args[0];
        OjClient client = new OjClient("localhost", 2010);
        DefaultClientContext context = new DefaultClientContext();
        context.setProperty("greeting", greeting);

        try {
            client.invoke("PingService", context);
            Object result = context.getResult();
            System.out.println("OJServer returned " + result);
        } catch (NotBoundException e) {
            System.out.println("Invalid service name");
        } catch (MalformedURLException e) {
            System.out.println("Invalid server URL");
        } catch (RemoteException e) {
            System.out.println("Exception on Server: " + e.getMessage());
        }
    }

    private static void printUsage() {
        System.out.println("OJPing <greeting>");
    }
}

Tuesday Feb 26, 2013

Hello World - Code Sample

JDeveloper Project:


Complete JDeveloper Application with two Projects showing how to write a Deployment Extension and plug it into the IDE. The first extension adds the deployment code, the second extension surfaces a menu item labelled "Say Hello World" under the "Run" menu. For explanations see this blog entry.

Before running the project, go to Project Properties/Extension and select your target platform.

To run, right-click and select "Run Extension", or you can also "Deploy to Target Platform" and run the IDE manually.

Monday Feb 25, 2013

Hello World

JDeveloper's deployment framework predates Ant, Maven and other such build and project management tools, but you'll find similarities in the concept even if there are differences in terminology and implementation. For one, to change anything in deployment a JDeveloper Extension has to be plugged in. You can find many examples on how to write a JDeveloper Extension in the Extension SDK docs (ESDK) available with JDeveloper. The examples here will work with JDeveloper 11.1.2.x.

The deployment process is a series of steps, each step uniquely identified by a "Sequence". Sequences are analogous to targets in Ant. A sequence may consist of other child sequences that all have to be processed, just like a target can decompose into other targets. The actual processing or step is done by a "Deployer" that is tied to that sequence.

Let's write the canonical "Hello World" example using the deployment APIs. The deployment sequence for this example will just print the greeting and exit.

Create the following classes:

1. A simple Element: Typically JDeveloper deployment operates on a target within an IDE Context, like an Application or Workspace, a Project, or a Deployment Profile. Since this short example does not have a valid target, we'll just make one up and stick it in an empty context.

2. A Deployer: To print the message

3. A DeployerFactory: To plug in the Deployer at the correct point.

import oracle.ide.model.DefaultElement; 
public class MyElement extends DefaultElement {}

import oracle.jdeveloper.deploy.DeploymentManager;

public interface MySequences {
  // Any int that does not collide with the framework's pre-defined sequences.
  // The value used in the original was elided; 1000 is an arbitrary stand-in.
  final static int GREETING_SEQUENCE = 1000;
}

import oracle.jdeveloper.deploy.DeployShell;
import oracle.jdeveloper.deploy.common.AbstractDeployer;

public class GreetingDeployer extends AbstractDeployer {

    public GreetingDeployer(int currentSequence) {
        super( currentSequence );
    }

    @Override
    protected void deployImpl(int i, DeployShell deployShell) {
        deployShell.getLogger().info("Hello World!");
    }
}

import oracle.jdeveloper.deploy.DeployShell;
import oracle.jdeveloper.deploy.Deployer;
import oracle.jdeveloper.deploy.DeployerFactory;

import oracle.deploy.example.MySequences;

public class MyDeployerFactory implements DeployerFactory {

    public Deployer newDeployer(int sequence, DeployShell deployShell) {
        if ( sequence == MySequences.GREETING_SEQUENCE ) {
            return new GreetingDeployer(sequence);
        }
        return null;
    }
}

Register the DeployerFactory using "trigger-hooks" in the Extension Manifest ( extension.xml )

<trigger-hooks xmlns="">
  <triggers>
    <deployment-hook xmlns="">
      <deployer-factories>
        <deployer-factory>
          <deployable-class>oracle.deploy.example.MyElement</deployable-class>
          <factory-class>oracle.deploy.example.MyDeployerFactory</factory-class>
        </deployer-factory>
      </deployer-factories>
    </deployment-hook>
  </triggers>
</trigger-hooks>

Build and install the Extension and provide a way to trigger the deployment, usually via a menu or an IDE action. To run the deployment, get an instance of DeploymentManager and call it with the new sequence and an IDE Context. The following example shows this being triggered from a (menu) Controller:

import oracle.deploy.example.MyElement;
import oracle.deploy.example.MySequences;

import oracle.ide.Context;
import oracle.ide.controller.Controller;
import oracle.ide.controller.IdeAction;
import oracle.ide.dialogs.ExceptionDialog;

import oracle.jdeveloper.deploy.DeploymentManager;

public class MyController implements Controller {

    @Override
    public boolean handleEvent(IdeAction ideAction, Context context) {
        context.setElement(new MyElement());
        try {
            DeploymentManager.deploy(MySequences.GREETING_SEQUENCE, context);
        } catch (Exception e) {
            ExceptionDialog.showExceptionDialog(context, e);
        }
        return true;
    }

    @Override
    public boolean update(IdeAction ideAction, Context context) {
        return true;
    }
}

Trigger the controller and you should be rewarded with these messages in your Log window.
Feb 25, 2013 12:35:09 PM oracle.jdevimpl.deploy.fwk.TopLevelDeployer prepareImpl
INFO: ----  Deployment started.  ----
Feb 25, 2013 12:35:09 PM oracle.jdevimpl.deploy.fwk.TopLevelDeployer printTargetPlatform
INFO: Target platform is Standard Java EE.
Feb 25, 2013 12:35:09 PM oracle.deploy.example.GreetingDeployer deployImpl
INFO: Hello World!
Feb 25, 2013 12:35:09 PM oracle.jdevimpl.deploy.fwk.TopLevelDeployer finishImpl
INFO: Elapsed time for deployment:  less than one second
Feb 25, 2013 12:35:09 PM oracle.jdevimpl.deploy.fwk.TopLevelDeployer finishImpl
INFO: ----  Deployment finished.  ----

Tip: It's good practice to separate the View/Controller classes into a different extension from the deployer code, which should have no GUI dependencies. As we'll see later, this helps when running deployment from the command line using ojdeploy. The JDeveloper Deployment API extensions (which are OSGi bundles) follow this convention: for example, the "deploy.core" bundle is required for the above example, while the "deploy.core.dt" bundle is only required for accessing parts of the deployment UI within the IDE.

Thursday Feb 21, 2013

Deploying applications from JDeveloper

Oracle JDeveloper offers a one-stop deployment feature that lets you package your Application or Projects in various kinds of modules and deploy locally or to a remote server.

The deployment in JDeveloper is geared towards a development experience and is not intended for deploying directly to a production environment, although it is possible to do so. In this blog I will attempt to describe various aspects of the deployment feature and offer insights into how it can be customized to suit various requirements. Customization may be as simple as setting options in the UI, or may involve programmatic manipulation by JDeveloper extensions.

