
Antony Reynolds' Blog

Building an FMW Cluster using Docker (Part III Running Docker Containers)

Antony Reynolds
Senior Director Integration Strategy







Oracle Fusion Middleware Deployments Using Docker Swarm Part III



Overview


This is the third in a series of blogs describing how to build a Fusion Middleware (FMW) cluster that runs as a number of Docker images executing in Docker containers.  These containers are coordinated using Docker Swarm and can be deployed to a single host machine or to multiple hosts.  This simplifies the task of building FMW clusters and also makes it easier to scale them in and out (adding or removing host machines) as well as up and down (using bigger or smaller host machines).  Using Docker also helps us avoid port conflicts when running multiple servers on the same physical machine.  When we use Swarm we will see that we also benefit from a built-in load balancer.


This blog uses Oracle Service Bus as an FMW product, but the principles are applicable to other FMW products.

In our previous blog we talked about how to build the required Docker images for running FMW on Docker Swarm and created a database container.

In this entry we will explain how to create an FMW domain image and how to run it in a Docker container.  The next blog will cover how to run this in Docker Swarm.



Key Steps in Creating a Service Bus Cluster


When creating a Service Bus cluster we need to do the following:

  1. Create the required schemas in a database.

     • Service Bus 12.1 adds a number of new features, such as re-sequencing, that require the use of SOA Suite schemas.  These are in addition to the database requirements for Web Services Security Manager that existed in Service Bus 11g.
     • The Repository Creation Utility (RCU) is used to create the schemas in a database.

  2. Create a Service Bus domain.

     • The Service Bus domain contains all the required Service Bus binaries and associated configuration.  Within the domain we will create a Service Bus cluster.
     • The domain can be created using the WebLogic Scripting Tool by applying the Service Bus domain template.

  3. Create a Service Bus cluster within the domain.

     • The Service Bus cluster allows us to have multiple Service Bus servers running the same proxy services and sharing the load.
     • The cluster can be created and servers assigned using the WebLogic Scripting Tool.
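
The schema-creation step above can be sketched as a silent RCU invocation.  This is a hypothetical illustration, not the exact command from the project's scripts; the ORACLE_HOME path, the DEV prefix, the component list and the connect string are all assumptions you would adjust for your own environment:

```shell
# Hypothetical sketch: assemble a silent RCU invocation to create the OSB
# schemas.  The paths, prefix and component list are assumptions, not taken
# from the project's scripts.
ORACLE_HOME=/u01/oracle
DB_HOST=osbdb
DB_PORT=1521
DB_SERVICE=OSB

rcu_cmd="$ORACLE_HOME/oracle_common/bin/rcu -silent -createRepository \
 -connectString $DB_HOST:$DB_PORT/$DB_SERVICE \
 -dbUser sys -dbRole sysdba \
 -schemaPrefix DEV \
 -component MDS -component STB -component OPSS -component SOAINFRA"

# Echo rather than execute, so the assembled command can be inspected first.
echo "$rcu_cmd"
```

Echoing the assembled command before running it makes it easy to verify the connect string against the database container started earlier.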


These steps need to be factored into the way we build our Docker images and containers, and ultimately into how we create Docker Swarm services.



Mapping the Service Bus Cluster onto Docker


There are a number of ways in which we could map a Service Bus cluster onto Docker.  We have chosen the following approach:



  • Create a Docker image that contains the Service Bus domain configuration.

     • This is layered on top of the OSB installation image, which allows us to modify and rebuild the scripts without having to reinstall the FMW software, speeding up the development cycle of the image.  Once the scripts are working they could be placed in the FMW binary image, reducing the number of layers.
     • When creating a container from the image we run the RCU to create the schemas in the database.  We also run scripts to create the domain and add servers to the domain as needed.

  • The same Docker image is used for both Admin and Managed Servers.

     • Depending on its parameters, the container decides whether it is an Admin Server or a member of a cluster.
     • All servers need access to the database.
     • All servers need access to the Admin Server.
     • The Admin Server requires access to all servers.



Container Summary


We effectively have two images, both built from multiple layers as explained previously.



  1. The database image holds the binaries for the database and scripts to create a database instance.
  2. The Fusion Middleware image holds the FMW binaries and scripts to create a domain or extend an existing domain.


We have a single Docker container to run the database from the database image.

We have multiple Docker containers, one per managed server, to run Fusion Middleware from the single domain image.

To simplify starting the containers, the git project includes run scripts (run.sh for the database and runNode.sh for FMW) that can be used to create containers.



Database Container


The database container runs from the database image and has the following characteristics:



  • When the container is created it creates and starts a database instance.
  • After starting the database we change the database password.
  • When the container is stopped it shuts down the database instance.
  • When the container is started it starts the database instance.
  • The database container exposes the database port (1521 in this case).
  • Only a single container runs a given database.


The database container is started using the following command:

docker run -d -it --name Oracle12cDB --hostname osbdb -p 1521:1521 -p 5500:5500 -e ORACLE_SID=ORCL -e ORACLE_PDB=OSB -v /opt/oracle/oradata/OracleDB oracle/database:12.1.0.2-ee

We expose ports 1521 (database) and 5500 (Enterprise Manager).
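
Before the Admin Server container runs the RCU, the database must actually be up.  A minimal readiness check is to poll the container's logs for the startup marker the Oracle 12c database image prints when it is ready; the helper below is a hypothetical sketch that assumes that marker text and the Oracle12cDB container name from the command above:

```shell
# Hypothetical readiness helper: succeeds when the given log text contains
# the startup marker printed by the Oracle 12c database image.
db_ready() {
  printf '%s\n' "$1" | grep -q "DATABASE IS READY TO USE"
}

# Typical use (assumes the container name from the run command above):
#   until db_ready "$(docker logs Oracle12cDB 2>&1)"; do sleep 5; done
```

Polling logs like this keeps the check outside the database container, so no extra tooling needs to be installed in the image.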



Admin Server Container


The Admin Server container runs from the FMW domain image, or just the FMW image if the layers have been collapsed.  It has the following characteristics:



  • When the container is created it runs the RCU to configure the database.
  • When the container is created it creates an FMW domain and cluster.
  • When the container is created it starts the Admin Server.
  • When the container is stopped it stops the Admin Server.
  • When the container is started it starts the Admin Server.
  • The Admin Server exposes the admin console port (7001 in this case).
  • Only a single Admin Server container runs in a given domain.
  • The same image is used for both Admin and Managed Servers.


We start the Admin Server using the following command:

runNode.sh admin

This translates to:

docker run -d -it \
        -e "MS_NAME=AdminServer" \
        -e "MACHINE_NAME=AdminServerMachine" \
        --name wlsadmin \
        --add-host osbdb:172.17.0.2 \
        --add-host wlsadmin:172.17.0.3 \
        --add-host wls1:172.17.0.4 \
        --add-host wls2:172.17.0.5 \
        --add-host wls3:172.17.0.6 \
        --add-host wls4:172.17.0.7 \
        --hostname wlsadmin \
        -p 7001:7001 \
        oracle/osb_domain:12.2.1.2 \
        /u01/oracle/container-scripts/createAndStartOSBDomain.sh


We need to add the hostnames of the managed servers and the database to the /etc/hosts file so that the Admin Server can access them.  We will show how to avoid doing this in the final blog post.
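
The long run of --add-host flags is easy to generate from a single host:ip list rather than typing each flag by hand.  A small hypothetical helper (not part of the project's runNode.sh) might look like this:

```shell
# Hypothetical helper: expand a space-separated "host:ip" list into the
# docker --add-host flags used in the run commands above.
HOSTS="osbdb:172.17.0.2 wlsadmin:172.17.0.3 wls1:172.17.0.4 wls2:172.17.0.5 wls3:172.17.0.6 wls4:172.17.0.7"

add_host_flags() {
  flags=""
  for entry in $1; do
    flags="$flags --add-host $entry"
  done
  echo "$flags"
}

# The generated flags can then be spliced into a docker run command line.
echo "docker run $(add_host_flags "$HOSTS") ..."
```

Keeping the host map in one variable means adding a managed server only requires one change.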



Managed Server Containers


The managed server containers run from the same FMW domain image as the Admin Server.  They have the following characteristics:



  • When the container is created it creates a new server in the domain and adds it to the FMW cluster.
  • When the container is created it creates a local copy of the domain files.
  • When the container is created it starts the Managed Server.
  • When the container is stopped it stops the Managed Server.
  • When the container is started it starts the Managed Server.
  • The Managed Server exposes its listen port (8011 in this case).
  • Multiple Managed Server containers may run in a given domain.
  • The same image is used for both Admin and Managed Servers.


We start the managed servers using the following command:

runNode.sh N

where N is the number of the managed server.  When N=2 this translates to:


docker run -d -it \
        -e "MS_NAME=osb_server2" \
        -e "MACHINE_NAME=OsbServer2Machine" \
        --name wls2 \
        --add-host osbdb:172.17.0.2 \
        --add-host wlsadmin:172.17.0.3 \
        --add-host wls1:172.17.0.4 \
        --add-host wls2:172.17.0.5 \
        --add-host wls3:172.17.0.6 \
        --add-host wls4:172.17.0.7 \
        --hostname wls2 \
        -p 8013:8011 \
        oracle/osb_domain:12.2.1.2 \
        /u01/oracle/container-scripts/createAndStartOSBDomain.sh



Note that all the managed servers listen on port 8011.  Because they each run in their own container there is no conflict in their port numbers, but we need to map them so that they can be accessed externally without conflicts.
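
One way to keep the external mappings unique while every container listens on 8011 internally is to derive the host port from the server number, as the 8013:8011 mapping for N=2 suggests.  This is a hypothetical sketch of that calculation; runNode.sh may compute its mappings differently:

```shell
# Hypothetical sketch: derive the host port for managed server N from the
# internal listen port (8011).  For N=2 this yields the 8013:8011 mapping
# shown above.
LISTEN_PORT=8011

host_port() {
  echo $(( LISTEN_PORT + $1 ))
}

N=2
printf -- '-p %s:%s\n' "$(host_port "$N")" "$LISTEN_PORT"
```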



Special Notes for Service Bus


The first managed server in a Service Bus cluster is special because it runs singleton tasks related to reporting: collecting performance information from the other nodes in the cluster, aggregating it, and making it available to the console.  Because of this we decided to always create a Service Bus domain with a pre-existing single Managed Server in the cluster with the correct singleton targeting.


Because this server already exists, if a container detects it is supposed to run Managed Server 1 then it does not create the server or associate it with a cluster; it just assigns it to a machine (see the next section for details) and creates the local managed server domain.
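
This special case can be sketched as a small decision helper.  The helper below is an illustration of the rule just described, not the project's actual script logic:

```shell
# Hypothetical sketch of the managed-server-1 special case: server 1 already
# exists in the domain template, so it is only assigned to a machine;
# higher-numbered servers must be created and added to the cluster first.
server_action() {
  if [ "$1" -eq 1 ]; then
    echo "assign-to-machine"
  else
    echo "create-server-and-assign"
  fi
}

server_action 1
server_action 3
```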



Containers and FMW Mapping


Each container maps to a single WebLogic server: either the Admin Server or a Managed Server in a cluster.


The Admin Server container is responsible for running the Repository Creation Utility, creating the domain and configuring it for that particular FMW product (in our case Service Bus).

The Managed Server containers are responsible for adding a new Managed Server to the cluster and creating a local managed server domain.


Both Admin and Managed Server containers need to figure out key facts about themselves:

  • Hostname - used to set the listen address for the server and also the address of the machine.
  • Type - admin or managed server, to decide what to do on creation.
  • Server identifier - managed servers need to make sure they have unique server names.
  • Associated Admin Server - managed servers must contact the Admin Server to obtain and update domain configuration.
  • Database server - admin servers must know about the database to be able to run the RCU and create the data sources required by the FMW product.
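
The "type" decision can be driven off the MS_NAME environment variable that the run commands above already pass in.  This is a hypothetical sketch; the project's createAndStartOSBDomain.sh may detect the server type differently:

```shell
# Hypothetical sketch: classify the container from the MS_NAME value that
# "docker run -e MS_NAME=..." supplies (AdminServer vs osb_serverN).
server_type() {
  case "$1" in
    AdminServer) echo "admin" ;;
    osb_server*) echo "managed" ;;
    *)           echo "unknown" ;;
  esac
}

server_type "AdminServer"
server_type "osb_server2"
```

Driving the decision from an environment variable is what lets a single image serve as both Admin and Managed Server.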



Starting the FMW Cluster


The FMW cluster is started as follows:

  1. A database container is created/started.
  2. An Admin Server container is created/started.
  3. One or more managed server containers are created/started.


Containers are created using the “docker run” command.  We use the “-d” flag to run them as daemon processes.  By default the CMD directive in the Dockerfile is used to choose the command or script to run on container startup; we use this for the database container.  For the admin and managed containers we identify which type of container they are and pass an appropriate script to the “docker run” command.

Containers are started using the “docker start” command, and the container uses the same command or script as when it was created.  That means we must detect whether this is a new container or a container being started after previously being shut down.  With the FMW containers we do this by looking for the existence of the domain directory: if it exists we have previously been started; if not, this must be our first run.
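
The first-run test just described can be sketched as a check on the domain directory.  DOMAIN_HOME is an assumed variable name here, not necessarily the one the project's scripts use:

```shell
# Hypothetical sketch of the first-run check: if the domain directory already
# exists, the container has run before and should just start its server;
# otherwise it must create the domain first.  DOMAIN_HOME is an assumed name.
first_run() {
  [ ! -d "$1" ]
}

DOMAIN_HOME="$(mktemp -d)/osb_domain"
if first_run "$DOMAIN_HOME"; then
  echo "first run: creating domain"
  mkdir -p "$DOMAIN_HOME"
fi
first_run "$DOMAIN_HOME" || echo "domain exists: starting server"
```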


Tools such as Docker Compose and Docker Swarm simplify the task of deploying our multiple-container FMW cluster, and we will look at these in the next blog entry.  One of the benefits we will find with Swarm is that it includes a load balancer.  The current multi-container approach would require either another container to run a load balancer or an external load balancer; we will see that Swarm removes this need.



Retrieving Docker Files for OSB Cluster


We are posting all the required files to go along with this blog on GitHub (https://github.com/Shuxuan/OSB-Docker-Swarm).  You are welcome to fork from this and improve it.  We cloned many of these files from the official Oracle Docker GitHub repository (https://github.com/oracle/docker-images).  We removed unused versions and added a simplified build.sh file to each product directory to make it easy to see how we actually built our environment.  We are still updating the files online and will shortly add details on building the swarm services.



Summary


In this entry we have explained how to create a Fusion Middleware domain and run it in Docker containers.  In our next entry we will simplify the deployment of our cluster by taking advantage of swarm mode, defining swarm services for the database, the WebLogic Admin Server and the WebLogic managed servers.
