Proactive insights, news and tips from Oracle WebLogic Server Support. Learn Oracle from Oracle.

  • October 30, 2017

Docker Volumes in WebLogic

Background Information

In the Docker world, containers are ephemeral; they can be destroyed and replaced. After a container is destroyed, it is gone and all the changes made to the container are gone. If you want to persist data which is independent of the container's lifecycle, you need to use volumes. Volumes are directories that exist outside of the container file system.

Docker Data Volume Introduction

This blog provides a generic introduction to Docker data volumes, demonstrated with a WebLogic Server image. You can build the image using the scripts on GitHub. In this blog, the base image is used only to demonstrate the usage of data volumes; no WebLogic Server instance is actually running. Instead, the container runs the 'sleep 3600' command, which keeps it alive for one hour (3,600 seconds) before it stops.

Local Data Volumes

Anonymous Data Volumes

For an anonymous data volume, a unique name is auto-generated internally.

Two ways to create anonymous data volumes are:

  • Create or run a container with '-v /container/fs/path' in docker create or docker run
  • Use the VOLUME instruction in Dockerfile: VOLUME ["/container/fs/path"]


$ docker run --name c1 -v /mydata -d weblogic- sleep 3600
$ docker inspect c1 | grep Mounts -A 10
        "Mounts": [
                "Name": "625672972c137c2e29b85f1e6ae70d6d4eb756002062792721a5ac0e9d0f1421",
                "Source": "/scratch/docker/volumes/625672972c137c2e29b85f1e6ae70d6d4eb756002062792721a5ac0e9d0f1421/_data",
                "Destination": "/mydata",
                "Driver": "local",
                "Mode": "",
                "RW": true,
                "Propagation": ""
# now we know that the volume has a random generated name 625672972c137c2e29b85f1e6ae70d6d4eb756002062792721a5ac0e9d0f1421
$ docker volume inspect 625672972c137c2e29b85f1e6ae70d6d4eb756002062792721a5ac0e9d0f1421
        "Name": "625672972c137c2e29b85f1e6ae70d6d4eb756002062792721a5ac0e9d0f1421",
        "Driver": "local",
        "Mountpoint": "/scratch/docker/volumes/625672972c137c2e29b85f1e6ae70d6d4eb756002062792721a5ac0e9d0f1421/_data",
        "Labels": null,
        "Scope": "local"

Named Data Volumes

Named data volumes are available in Docker 1.9.0 and later.

Two ways to create named data volumes are:

  • Create the volume explicitly with docker volume create --name <name>
  • Create or run a container with '-v name:/container/fs/path' in docker create or docker run

$ docker volume create --name testv1
$ docker volume inspect testv1
[
    {
        "Name": "testv1",
        "Driver": "local",
        "Mountpoint": "/scratch/docker/volumes/testv1/_data",
        "Labels": {},
        "Scope": "local"
    }
]

Mount Host Directory or File

You can mount an existing host directory into a container directly.

To mount a host directory when running a container, use '-v /host/path:/container/path' in docker run, as shown in the example below. You can mount an individual host file in the same way.

Note that a mounted host directory or file is not an actual data volume managed by Docker, so it is not shown when running docker volume ls. Also, you cannot mount a host directory or file in a Dockerfile.

$ docker run --name c2 -v /home/data:/mydata -d weblogic- sleep 3600
$ docker inspect c2 | grep Mounts -A 8
        "Mounts": [
                "Source": "/home/data",
                "Destination": "/mydata",
                "Mode": "",
                "RW": true,
                "Propagation": "rprivate"

Data Volume Containers

Data volume containers are data-only containers. After a data volume container is created, it doesn't need to be started. Other containers can access the shared data using --volumes-from.

# step 1: create a data volume container 'vdata' with two anonymous volumes
$ docker create -v /vv/v1 -v /vv/v2 --name vdata weblogic- 

# step 2: run two containers c3 and c4 with reference to the data volume container vdata
$ docker run --name c3 --volumes-from vdata -d weblogic- sleep 3600
$ docker run --name c4 --volumes-from vdata -d weblogic- sleep 3600
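To confirm that c3 and c4 really see the same data, you can write a file through one container and read it through the other. A sketch, assuming both containers are still running:

```
# write a file into the shared volume from c3
$ docker exec c3 sh -c 'echo hello > /vv/v1/test.txt'
# read it back from c4; both containers mount the same volumes from vdata
$ docker exec c4 cat /vv/v1/test.txt
hello
```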

Data Volume Plugins

Docker 1.8 and later support volume plugins, which extend Docker with new volume drivers. You can use a volume plugin to mount remote folders from a shared storage server, such as iSCSI, NFS, or FC, directly. The same storage can be accessed, in the same manner, from a container running on another host, so containers on different hosts can share the same data.

There are plugins available for different storage types. Refer to the Docker documentation for volume plugins: https://docs.docker.com/engine/extend/legacy_plugins/#volume-plugins
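As a minimal sketch, even the built-in local driver can mount an NFS export without an external plugin (the NFS server address, export path, and volume name below are hypothetical):

```
# create a volume backed by an NFS export
$ docker volume create --driver local \
    --opt type=nfs \
    --opt o=addr=nfs.example.com,rw \
    --opt device=:/export/wlsdata \
    wlsdata
# mount it like any other named volume
$ docker run --name c6 -v wlsdata:/shared -d weblogic- sleep 3600
```

Running the same docker volume create on a second host pointing at the same export gives containers on both hosts a view of the same data.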

WebLogic Persistence in Volumes

When running WebLogic Server in Docker, there are basically two use cases for using data volumes:

  1. To separate data from the WebLogic Server lifecycle, so you can reuse the data even after the WebLogic Server container is destroyed and later restarted or moved.
  2. To share data among different WebLogic Server instances, so they can recover each other's data, if needed (service migration).

The following WebLogic Server artifacts are candidates for using data volumes:

  • Domain home folders
  • Server logs
  • Persistent file stores for JMS, JTA, and such.
  • Application deployments

Refer to the following table for the data stored by WebLogic Server subsystems.

| Subsystem or Service | What It Stores | More Information |
| --- | --- | --- |
| Diagnostic Service | Log records, data events, and harvested metrics | Understanding WLDF Configuration in Configuring and Using the Diagnostics Framework for Oracle WebLogic Server |
| JMS Messages | Persistent messages and durable subscribers | Understanding the Messaging Models in Developing JMS Applications for Oracle WebLogic Server |
| JMS Paging Store | One per JMS server. Paged persistent and non-persistent messages. | Main Steps for Configuring Basic JMS System Resources in Administering JMS Resources for Oracle WebLogic Server |
| JTA Transaction Log (TLOG) | Information about committed transactions, coordinated by the server, that may not have been completed. TLOGs can be stored in the default persistent store or in a JDBC TLOG store. | |
| Path Service | The mapping of a group of messages to a messaging resource | Using the WebLogic Path Service in Administering JMS Resources for Oracle WebLogic Server |
| Store-and-Forward (SAF) Service Agents | Messages from a sending SAF agent for re-transmission to a receiving SAF agent | Understanding the Store-and-Forward Service in Administering the Store-and-Forward Service for Oracle WebLogic Server |
| Web Services | Request and response SOAP messages from an invocation of a reliable WebLogic Web Service | Using Reliable SOAP Messaging in Programming Advanced Features of JAX-RPC Web Services for Oracle WebLogic Server |
| EJB Timer Services | EJB Timer objects | Understanding Enterprise JavaBeans in Developing Enterprise JavaBeans, Version 2.1, for Oracle WebLogic Server |

A best practice is to run each WebLogic Server instance in its own container and share domain configuration in a data volume. This is the basic usage scenario for data volumes in WebLogic Server. When the domain home is in an external volume, server logs are also in the external volume, by default. But, you can explicitly configure server logs to be located in a different volume because server logs may contain more sensitive data than other files in the domain home and need more permission control. 
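A sketch of that layout, with one named volume for the domain home and a separate one for server logs (the volume names and container paths are hypothetical; the actual paths depend on how your image and domain are configured):

```
$ docker volume create --name wlsdomain
$ docker volume create --name wlslogs
# domain home and server logs live in separate volumes,
# so the log volume can be given stricter permissions
$ docker run --name wlsadmin \
    -v wlsdomain:/u01/oracle/user_projects \
    -v wlslogs:/u01/oracle/logs \
    -d weblogic- sleep 3600
```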

File stores for JMS, JTA, and other services should be located in an external volume and use shared directories; this is required for service migration to work. It’s fine for all default and custom stores in the same domain to use the same shared directory, because different instances automatically and uniquely decorate their file names. But different domains must never share the same directory location, as the file names can collide. Similarly, two running duplicates of the same domain must never share the same directory location. File collisions usually result in file locking errors, and may corrupt data.

File stores create a number of files for different purposes. Cache and paging files can be stored locally in the container file system. Refer to the following table for detailed information about the different files and locations.

| Store Type | Directory Configuration |
| --- | --- |
| default | The directory configured on a WebLogic Server default store. See Using the Default Persistent Store. |
| custom file | The directory configured on a custom file store. See Using Custom File Stores. |
| cache | The cache directory configured on a custom or default file store that has a DirectWriteWithCache synchronous write policy. See Tuning the WebLogic Persistent Store in Tuning Performance of Oracle WebLogic Server. |
| paging | The paging directory configured on a SAF agent or JMS server; by default, <domainRoot>/servers/<serverName>/tmp. See Paging Out Messages To Free Up Memory in Tuning Performance of Oracle WebLogic Server. |
In order to properly secure data in external volumes, it is the administrator's responsibility to set the appropriate permissions on those directories. To allow the WebLogic Server process to access data in a volume, the user running the container must have the proper permissions on the volume folder.
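For example, if the WebLogic Server process inside the container runs as a non-root user, the host directory backing the volume must be owned by, or writable for, that user's UID. A sketch, assuming a hypothetical volume named wlsdomain and UID/GID 1000 (check which user your image actually runs as):

```
# find the mount point of the volume on the host
$ docker volume inspect --format '{{ .Mountpoint }}' wlsdomain
# give ownership to the container user's UID/GID (1000 is an assumption)
$ sudo chown -R 1000:1000 /scratch/docker/volumes/wlsdomain/_data
$ sudo chmod -R 750 /scratch/docker/volumes/wlsdomain/_data
```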


In summary, our recommendations are as follows:

  • Use local data volumes:
    • For Docker 1.8.x and earlier, we recommend that you use data volume containers (with anonymous data volumes).
    • For Docker 1.9.0 and later, we recommend that you use named data volumes.
    • If you have multiple volumes to be shared among multiple containers, we recommend that you use a data volume container with named data volumes.
  • To share data among containers on different hosts, first mount the folder on a shared storage server, and then choose a volume plugin to mount it in Docker.

We recommend that the WebLogic Server domain home be externalized to a data volume. The externalized domain home must be shared by the Administration Server and the Managed Servers, each running in its own container. For high availability, all Managed Servers need to read and write the stores in the shared data volume. Choose the kind of data volume with the persistence of the stores, logs, and diagnostic files in mind.
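A sketch of that topology, with the Administration Server and one Managed Server each in its own container sharing the domain home volume (the volume name, container paths, and server startup commands are hypothetical and depend on your image):

```
$ docker volume create --name wlsdomain
# Administration Server container
$ docker run --name wlsadmin -v wlsdomain:/u01/oracle/user_projects -d weblogic- startWebLogic.sh
# Managed Server container sharing the same domain home
$ docker run --name wlsms1 -v wlsdomain:/u01/oracle/user_projects -d weblogic- startManagedWebLogic.sh
```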

Join the discussion

Comments ( 1 )
  • John Somers Friday, June 5, 2020
What about attaching a volume for autodeploy?

    I have a WebLogic 12.1.2 container but I can't use volumes to pick up the latest version of a packaged ear whenever it is built because autodeploy is set to root:root when added as a volume to the ear location and WebLogic cannot be installed or run as root on the container so autodeploy is unreadable.

    The only way I've gotten around this is to mount to $PWD which means I have to manually copy the ear to the container working directory every time I build it so the container can deploy it.