Helidon: A Simple Cloud Native Framework

Create container-friendly microservices with a minimum of code running straight Java SE.

For a good portion of the internet’s early years, web applications were “monolithic.” That is, they were single, self-contained applications that encapsulated the entire API and front-end display code. Business logic, validation, data retrieval and manipulation, persistence, security, and the UI were all wrapped up in a single bundle and deployed on application or web servers, such as Tomcat, Apache, or Microsoft IIS. This approach worked, and still works, but it leads to challenges as your application grows in scope, among them the following:

  • Deployment: Checking out source code, compiling, testing, bundling, and deploying monoliths takes a long time.
  • Dependencies, frameworks, and language: Choices and versions are locked in for the entire application, which leads to difficulties in upgrading when new versions are released.
  • Single point of failure: Monoliths are brittle; if the web server goes down, the whole application is down.
  • Scaling: The application must be scaled in its entirety—even if only a single portion of the application is the cause of increased load.

There are certainly other challenges that come with monoliths, but these tend to be the ones that cause the most pain to developers, project managers, and operations-minded folks. And for a long time, everyone just dealt with them.


In part because of those limitations, we’re now in the microservice era. This approach, which uses individual services that typically serve a single, distinct purpose and usually are deployed in some sort of container, is growing in popularity and adoption. It’s easy to see why, too. Let’s briefly look at the issues I raised earlier and see how microservices address each of them.

  • Deployment: Each service can be tested, compiled, and deployed independently.
  • Dependencies, frameworks, and language: Each service is free to use the language, framework, dependencies, and versions as necessary and desired.
  • Single point of failure: Each service is typically deployed in a container and managed by an orchestration tool, which means outages can be isolated and do not affect the entire application.
  • Scaling: Services can be scaled independently of one another, which means the high-load services can scale up while the lower-demand services remain scaled down.

Microservices are not a silver bullet and they don’t solve all problems, but in many applications, they make a lot of sense. Now that I’ve established the “why” when it comes to microservices, let’s look at the “how.”

There are many microservice frameworks available right now, and although creating a new one might seem misguided, that’s what Oracle has done with Project Helidon. You don’t need to look much further than the framework name to understand Oracle’s reasons for creating it: Helidon is a Greek word for the swallow—a small, highly maneuverable bird that fits naturally in the clouds. With this in mind, Helidon’s creators strove to develop a lightweight set of libraries that didn’t require an application server and could be used in Java SE applications.

Helidon comes in two flavors: SE and MP. Helidon SE is simple, lightweight, functional, and reactive. It runs on an embedded Netty web server and falls into the microframework category. It can be compared with Javalin and Micronaut (both of which are covered in this issue of the magazine) or Spark Java. Helidon MP is a MicroProfile-based framework that uses familiar annotations and components that Java EE/Jakarta EE developers should be familiar with, such as JAX-RS/Jersey, JSON-P, and CDI. Think of Helidon MP in the same league as Open Liberty, Payara, and Thorntail (formerly WildFly Swarm). Let’s take a look at Helidon, starting with Helidon SE.

Getting Started with Helidon SE

There’s nothing worse than learning about a new framework and hitting a brick wall because of a lack of tooling or documentation to get you started. That’s not an issue here. To get started with Helidon SE, make sure you’ve got a few prerequisites installed and ready to go: JDK 8 or later and Maven 3.5 or later. If you’re using Docker and Kubernetes, you’ll get some handy files generated to help you create your containers and deploy them. To take advantage of that, make sure you also have Docker 18.02 or later and Kubernetes 1.7.4 or later. (You can use Minikube or the Kubernetes support in Docker Desktop to run Kubernetes on your desktop.)

Verify your versions like so:


$ java --version
$ mvn --version
$ docker --version
$ kubectl version --short

Once you’ve met the prerequisites, it’s time to generate a project using the Helidon quickstart Maven archetype. If you’re not familiar with archetypes, they are project templates that you can use to scaffold out a basic starter project so you can quickly begin working with a framework. Oracle provides two archetypes: one for Helidon SE and one for Helidon MP.

Here’s a basic example you can use from your favorite terminal to generate a Helidon SE project:


$ mvn archetype:generate -DinteractiveMode=false \
    -DarchetypeGroupId=io.helidon.archetypes \
    -DarchetypeArtifactId=helidon-quickstart-se \
    -DarchetypeVersion=0.10.2 \
    -DgroupId=[io.helidon.examples] \
    -DartifactId=[quickstart-se] \
    -Dpackage=[io.helidon.examples.quickstart.se]

The archetype is documented at Maven Central, which is where you can always check the latest released version to make sure it’s available to use. The items bracketed in the previous snippet are project-specific, and you can edit them to apply to your project. Here’s an example I put together for this article:


$ mvn archetype:generate -DinteractiveMode=false \
    -DarchetypeGroupId=io.helidon.archetypes \
    -DarchetypeArtifactId=helidon-quickstart-se \
    -DarchetypeVersion=0.10.2 \
    -DgroupId=codes.recursive \
    -DartifactId=helidon-se-demo \
    -Dpackage=codes.recursive.helidon.se.demo

Once the archetype has run, a fully scaffolded sample application is available in a new directory whose name matches the value used for the artifactId. The example is complete and ready to run, so to see it in action, you can compile the application with


$ mvn package

This command will run all the generated tests, build the application JAR file, and place it in the target directory (its dependencies are copied to target/libs). Because the framework includes an embedded web server, you can now run the application by using the following command:


$ java -jar target/helidon-se-demo.jar

You’ll see the application start up and confirm that it’s running on port 8080:


[DEBUG] (main) Using Console logging
2018.10.18 14:34:10 INFO io.netty.util.internal.PlatformDependent Thread[main,5,main]: Your platform does not provide complete low-level API for accessing direct buffers 
reliably. Unless explicitly requested, heap buffer will always be preferred to avoid 
potential system instability.
2018.10.18 14:34:10 INFO io.helidon.webserver.netty.NettyWebServer Thread[nioEventLoopGroup-2-1,10,main]: Channel '@default' started: 
[id: 0x3002c88a, L:/0:0:0:0:0:0:0:0:8080]
WEB server is up! http://localhost:8080

But trying to view the root path will result in an error, because the archetype doesn’t declare a route for the root path. Instead, go to http://localhost:8080/greet, and you’ll see a simple “Hello World” message returned as JSON.

At this point, you’ve done nothing more than run a few Maven commands and launch the JAR file from the command line, and you’ve obtained a fully scaffolded, running application without touching a single line of code. Obviously, you will need to dig into the code at some point, but before that, let’s see what Helidon provides for Docker support.


Before moving forward, stop the application by pressing Ctrl+C. Now look inside the target directory, and you’ll notice a few extra files that were created when you ran mvn package. You’ll see that Helidon has created both a Dockerfile for building a Docker container from your application and an app.yaml file for creating a Kubernetes deployment. The files themselves are basic, but out of the box they give you all you need to run them.

Here’s the Dockerfile for the demo project (excluding the license information, for brevity):


FROM openjdk:8-jre-alpine

RUN mkdir /app
COPY libs /app/libs
COPY helidon-se-demo.jar /app

CMD ["java", "-jar", "/app/helidon-se-demo.jar"]

If this is the first time you’ve seen a Dockerfile: the first line declares a base image. In this case, I am using the openjdk image tagged with 8-jre-alpine, which includes the Java 8 JRE in a very lightweight image based on Alpine Linux. The Dockerfile then creates an /app directory to store the application, copies the build output from the libs directory into /app/libs, and copies the JAR file into /app. The final line tells Docker to run the java -jar command at startup, which launches the application.

Let’s test out this Dockerfile by running the following command from a terminal in the project root directory:


$ docker build -t helidon-se-demo target

This instructs Docker to build an image tagged with helidon-se-demo, using the Dockerfile located in the target directory. You should see output similar to the following after running the docker build command:


Sending build context to Docker daemon  5.231MB
Step 1/5 : FROM openjdk:8-jre-alpine
 ---> 0fe3f0d1ee48
Step 2/5 : RUN mkdir /app
 ---> Using cache
 ---> ab57483b1f76
Step 3/5 : COPY libs /app/libs
 ---> 6ac2b96f4b9b
Step 4/5 : COPY helidon-se-demo.jar /app
 ---> 7d2135433bcc
Step 5/5 : CMD ["java", "-jar", "/app/helidon-se-demo.jar"]
 ---> Running in 5ab71094a72f
Removing intermediate container 5ab71094a72f
 ---> 7e81289d5267
Successfully built 7e81289d5267
Successfully tagged helidon-se-demo:latest

To confirm all is well, run this command:


$ docker images helidon-se-demo

You’ll see the helidon-se-demo image listed; mine from this demo is 88.2 MB. To run this container, use the following command:


$ docker run -d -p 8080:8080 helidon-se-demo

The docker run command uses the -d switch to run the container in detached mode (in the background) and exposes the container port using -p. The final part of the docker run command tells Docker which image to run, which in this case is the image name helidon-se-demo that I used in the docker build command.

To view the running containers on your system, execute this command:


$ docker ps -a

Alternatively, you can use a GUI tool such as Kitematic or Portainer. I’m partial to Portainer, so I verified the running container with it, as shown in Figure 1.


Figure 1. The Helidon application running in a container (see starred entry)

Of course, you could simply go to http://localhost:8080/greet again to confirm that the application is running locally (only this time, it’s running via Docker).

Running on Kubernetes

Now that you’ve tested out Helidon’s Docker support, let’s see what the framework gives you for Kubernetes support. First, kill the running Docker container (via either the command line or the graphical interface of your choice). Then, take a look at the generated file located at target/app.yaml. It contains the following:


kind: Service
apiVersion: v1
metadata:
  name: helidon-se-demo
  labels:
    app: helidon-se-demo
spec:
  type: NodePort
  selector:
    app: helidon-se-demo
  ports:
  - port: 8080
    targetPort: 8080
    name: http
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: helidon-se-demo
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: helidon-se-demo
        version: v1
    spec:
      containers:
      - name: helidon-se-demo
        image: helidon-se-demo
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
---

I won’t go over the details of this configuration file, but it gives you the ability to quickly deploy the application to Kubernetes, which provides container management and orchestration. To deploy the application to a running Kubernetes cluster, enter the following command (again, from the root of the project; otherwise, modify the path to app.yaml accordingly):


$ kubectl create -f target/app.yaml

Assuming everything was created properly, you’ll get a reply like this:


service/helidon-se-demo created
deployment.extensions/helidon-se-demo created

You can confirm the deployment with kubectl get deployments, and you can check the service with kubectl get services:


NAME              TYPE     CLUSTER-IP     EXTERNAL-IP PORT(S)       
helidon-se-demo   NodePort 10.105.215.173 <none>      8080:32700/TCP

[This line is slightly truncated to fit here. —Ed.] As you can see, the service is running on port 32700, which you can verify by visiting the service in the browser.

So far, you have scaffolded an application, built it into a Docker container, and deployed that container to Kubernetes—and yet you have not written a single line of code.
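When you’re done experimenting, you can tear down the service and the deployment by using the same manifest that created them:

```
$ kubectl delete -f target/app.yaml
```

Because both objects are defined in app.yaml, this one command removes them both.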

Let’s switch gears and examine the code. Open up src/main/java/Main.java and take a look at the startServer() method to see how Helidon SE initializes the built-in Netty web server:


protected static WebServer startServer() throws IOException {

    // load logging configuration
    LogManager.getLogManager().readConfiguration(
        Main.class.getResourceAsStream("/logging.properties"));

    // By default this will pick up application.yaml from 
    // the classpath
    Config config = Config.create();

    // Get web server config from the "server" section of 
    // application.yaml
    ServerConfiguration serverConfig =
        ServerConfiguration.fromConfig(config.get("server"));

    WebServer server = 
        WebServer.create(serverConfig, createRouting());

    // Start the server and print some info.
    server.start().thenAccept(ws -> {
        System.out.println(
            "WEB server is up! http://localhost:" + ws.port());
    });

    // Server threads are not daemon threads. No need to block. Just react.
    server.whenShutdown().thenRun(()
        -> System.out.println("WEB server is DOWN. Goodbye!"));

    return server;
}

The comments included when this code was generated do a fairly good job of explaining what’s going on, but the following steps summarize what it’s doing:

  1. Initializing logging: Reading the logging configuration from the logging.properties file on the classpath
  2. Loading the application configuration: Picking up the generated application.yaml file from the classpath (additional application configuration variables can be added here)
  3. Creating an instance of ServerConfiguration and passing it the host/port info from the "server" section of the application configuration
  4. Creating and starting an instance of the WebServer and passing it the necessary routing info returned from createRouting()
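For reference, the application.yaml generated by the quickstart is only a few lines. The exact contents can vary by archetype version, but it looks something like this:

```
app:
  greeting: "Hello"
server:
  port: 8080
  host: 0.0.0.0
```

The app section holds your own application values (such as the greeting used by GreetService), while the server section is what ServerConfiguration reads in startServer().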

The createRouting() method registers any necessary services like this:


private static Routing createRouting() {
    return Routing.builder()
             .register(JsonSupport.get())
             .register("/greet", new GreetService())
             .build();
}

That’s where you register a single endpoint, "/greet", backed by the GreetService class, which I’ll break down here. You’ll notice a few class variables that use the Config class to obtain values from the application.yaml file I discussed earlier.


private static final Config CONFIG = 
    Config.create().get("app");
private static String greeting = 
    CONFIG.get("greeting").asString("Ciao");

The GreetService implements Service and overrides the update() method to define subpaths under the /greet endpoint like this:


@Override
public final void update(final Routing.Rules rules) {
    rules
        .get("/", this::getDefaultMessage)
        .get("/{name}", this::getMessage)
        .put("/greeting/{greeting}", this::updateGreeting);
}

In this code, update() receives an instance of Routing.Rules, which has methods corresponding to each HTTP verb—get(), post(), put(), head(), options(), and trace()—as well as some useful methods such as any(), which acts as a catchall and can be used for things such as logging and security.
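As an example of any(), a simple request-logging handler could be wired in ahead of the other routes in createRouting(). This is a sketch against the Helidon 0.10 API; the logging itself is purely illustrative:

```java
private static Routing createRouting() {
    return Routing.builder()
            // Log every incoming request, then hand off to the next handler.
            .any((req, res) -> {
                System.out.println(req.method() + " " + req.path());
                req.next();
            })
            .register(JsonSupport.get())
            .register("/greet", new GreetService())
            .build();
}
```

Because any() matches every request, forgetting to call req.next() would leave requests hanging, so the handoff at the end of the handler is essential.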

I have registered three endpoints: /greet/, /greet/{name}, and /greet/greeting/{greeting}. Each endpoint has a method reference pointing to a service method. Each service method registered as an endpoint receives two arguments: request and response. This design allows you to pull elements such as headers and parameters out of the request and to set elements such as headers and the body on the response. Here’s what the getDefaultMessage() method looks like:


private void getDefaultMessage(final ServerRequest request, 
                               final ServerResponse response) {
    String msg = String.format("%s %s!", greeting, "World");

    JsonObject returnObject = Json.createObjectBuilder()
            .add("message", msg)
            .build();
    response.send(returnObject);
}

It’s a bare-bones example, but it illustrates the basic structure of a service method. The getMessage() method shows an example of a dynamic path parameter (the {name} element within the path that was registered), which allows you to grab that value from the URL.


private void getMessage(final ServerRequest request, 
                        final ServerResponse response) {
    String name = request.path().param("name");
    String msg = String.format("%s %s!", greeting, name);

    JsonObject returnObject = Json.createObjectBuilder()
            .add("message", msg)
            .build();
    response.send(returnObject);
}

Calling http://localhost:8080/greet/todd would result in the expected output shown in Figure 2.


Figure 2. Expected output
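The {name} template matching that getMessage() relies on comes down to simple pattern matching. Here’s a small plain-Java sketch—not Helidon’s actual implementation—of how a path template can be compiled to a regular expression with named groups and a parameter pulled out of a URL path:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PathParamSketch {

    // Convert a template such as "/greet/{name}" into a regex
    // in which each {param} becomes a named capturing group.
    static Pattern compile(String template) {
        String regex = template.replaceAll("\\{([^/}]+)}", "(?<$1>[^/]+)");
        return Pattern.compile("^" + regex + "$");
    }

    // Extract the value of one parameter, or null if the path doesn't match.
    static String param(String template, String path, String name) {
        Matcher m = compile(template).matcher(path);
        return m.matches() ? m.group(name) : null;
    }

    public static void main(String[] args) {
        System.out.println(param("/greet/{name}", "/greet/todd", "name")); // prints "todd"
    }
}
```

The real framework does more (precedence, wildcards, and so on), but the idea of a compiled template plus named groups is the core of dynamic path parameters.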

The updateGreeting() method, shown next, isn’t much different from getMessage(), but note that it must be called with PUT instead of GET because I registered it that way in update().


private void updateGreeting(final ServerRequest request, final ServerResponse response) {
    greeting = request.path().param("greeting");

    JsonObject returnObject = Json.createObjectBuilder()
            .add("greeting", greeting)
            .build();
    response.send(returnObject);
}
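With the server running locally, you can exercise all three routes from a terminal. The responses shown below are approximate (the default greeting comes from application.yaml); note how the PUT changes the greeting used by subsequent GET requests:

```
$ curl http://localhost:8080/greet
{"message":"Hello World!"}
$ curl http://localhost:8080/greet/todd
{"message":"Hello todd!"}
$ curl -X PUT http://localhost:8080/greet/greeting/Hola
{"greeting":"Hola"}
$ curl http://localhost:8080/greet/todd
{"message":"Hola todd!"}
```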

There’s much more to Helidon SE, from error handling and static content to metrics and health support. I highly recommend reading the project documentation to learn about those features and others.

Getting Started with Helidon MP

Helidon MP is the MicroProfile variant of Helidon. If you’ve been working with Java EE for any amount of time, you’ll probably find that it looks pretty familiar. As I mentioned earlier, you’ll see the usual things such as JAX-RS/Jersey, JSON-P, and CDI.

To get started quickly, use the Helidon MP archetype just like I did earlier with Helidon SE:


$ mvn archetype:generate -DinteractiveMode=false \
    -DarchetypeGroupId=io.helidon.archetypes \
    -DarchetypeArtifactId=helidon-quickstart-mp \
    -DarchetypeVersion=0.10.2 \
    -DgroupId=codes.recursive \
    -DartifactId=helidon-mp-demo \
    -Dpackage=codes.recursive.helidon.mp.demo

Take a look at the Main.java class, and you’ll see that getting the embedded web server running is even easier than in Helidon SE:


protected static Server startServer() throws IOException {

    // load logging configuration
    LogManager.getLogManager().readConfiguration(
        Main.class.getResourceAsStream("/logging.properties"));

    // Server will automatically pick up configuration from
    // microprofile-config.properties
    Server server = Server.create();
    server.start();
    return server;
}
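Building and running the MP project works exactly as it did for the SE version; given the artifactId used above, the commands would be:

```
$ mvn package
$ java -jar target/helidon-mp-demo.jar
```

Once it’s up, the same http://localhost:8080/greet endpoint responds—this time served by JAX-RS.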

The application is defined in the GreetApplication class, which has a getClasses() method that is used to register resources that represent routes in the application:


@ApplicationScoped
@ApplicationPath("/")
public class GreetApplication extends Application {

    @Override
    public Set<Class<?>> getClasses() {
        Set<Class<?>> set = new HashSet<>();
        set.add(GreetResource.class);
        return Collections.unmodifiableSet(set);
    }
}

The GreetResource in Helidon MP performs the same tasks as the GreetService from Helidon SE, but instead of registering routes individually, you use annotations on the class and methods to represent the endpoints, HTTP verbs, and content-type headers:


@Path("/greet")
@RequestScoped
public class GreetResource {

    private static String greeting = null;

    @Inject
    public GreetResource(@ConfigProperty(name = "app.greeting") 
      final String greetingConfig) {
        if (this.greeting == null) {
            this.greeting = greetingConfig;
        }
    }

    @Path("/")
    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public JsonObject getDefaultMessage() {
        String msg = String.format("%s %s!", greeting, "World");

        JsonObject returnObject = Json.createObjectBuilder()
                .add("message", msg)
                .build();
        return returnObject;
    }

    @Path("/{name}")
    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public JsonObject getMessage(@PathParam("name") final String name){
        String msg = String.format("%s %s!", greeting, name);

        JsonObject returnObject = Json.createObjectBuilder()
                .add("message", msg)
                .build();
        return returnObject;
    }
    
    @Path("/greeting/{greeting}")
    @PUT
    @Produces(MediaType.APPLICATION_JSON)
    public JsonObject updateGreeting(@PathParam("greeting")
                                     final String newGreeting) {
        this.greeting = newGreeting;

        JsonObject returnObject = Json.createObjectBuilder()
                .add("greeting", this.greeting)
                .build();
        return returnObject;
    }
}

Conclusion

There are a few other differences between Helidon MP and Helidon SE, but both versions provide a low barrier to entry for teams looking to adopt a new microservices framework. Helidon is a versatile framework that will help your team quickly develop microservice applications. If containers aren’t your preference, you can choose to forgo them altogether and deploy the JAR as you would any traditional JAR. But if your team has adopted containers, the built-in support gives your team the ability to quickly deploy to any cloud-based or on-premises Kubernetes cluster. Because Helidon is being developed by Oracle, the Helidon team will continue developing the framework with some planned enhancements focused on integrating applications with Oracle Cloud. If you’re currently hosting your applications in Oracle Cloud, or you plan to migrate to it soon, Helidon might be the right framework for your next microservices application.


Todd Sharp

Todd Sharp is a developer advocate for Oracle focusing on Oracle Cloud. He has worked with dynamic JVM languages and various JavaScript frameworks for more than 14 years, originally with ColdFusion and more recently with Java/Groovy/Grails on the server side. He lives in the Appalachian mountains of north Georgia (in the United States) with his wife and two children.
