
Oracle Spatial and Graph – technical tips, best practices, and news from the product team

Cluster Deployment of Oracle Spatial Studio 20.1

Daniel Cuadra
Principal Member of Technical Staff

Among the new enhancements in the latest release of Oracle Spatial Studio, a self-service web tool for accessing the spatial features of Oracle Database, is the ability to deploy it to a web server cluster.

Cluster Key Features

Spatial Studio 20.1 is designed to run in large cluster deployments out of the box. Some of the key features are:

  • Cache-sync. In-memory cached Spatial Studio metadata is kept in sync across all Studio instances. For example, when a user updates the statistics of a dataset in Studio instance A, all other Studio instances sync their own cached dataset statistics shortly afterwards.
  • Distributed jobs. Background jobs are distributed seamlessly across all Studio instances. As instances are added to the cluster, they can immediately pick up pending jobs and run them. Similarly, when an instance is shut down, its pending jobs are redistributed across the remaining Studio instances. When a job cancellation is requested anywhere in the cluster, the job stops gracefully in the Studio instance running it. This means users can run more concurrent jobs, such as pre-caching and geocoding.
  • Server logs. Each Studio instance has its own independent server log, allowing the administrator to inspect log entries individually.
  • Global Settings. Modifications made to settings in the Administration Console are applied across all Studio instances. For example, adding Basemaps or updating the Geocoder Service URL in Studio instance A is reflected almost instantly in the rest of the instances.

Requirements for Cluster Deployment

The following requirements must be met to set up a Spatial Studio 20.1 cluster:

  1. All Studio instances must share the same sgtech_config.json file. Make sure to pass the oracle.sgtech.config JVM argument to your JEE container*, using the sgtech_config.json path as its value. For instance, you could share the sgtech_config.json file through a network directory and simply set the mounted path:
    -Doracle.sgtech.config=/net/sgtech/sgtech-config/sgtech_config.json

    * Achieving this varies by JEE container. In WebLogic Server, for example, it can be done in the setDomainEnv.sh or setDomainEnv.cmd file; one possible snippet is sketched after this list.

  2. All Studio instances must share the same working-directory path. Open the sgtech_config.json file for editing and set a valid value for the work_dir property. Let's say you want to use a mounted network directory to share the working directory (one possible mount setup is sketched after this list). Your sgtech_config.json file should then resemble this:
    {
      ...
      "work_dir": "/net/sgtech/sgtech-workdir",
      ...
    }
    
  3. Use the appropriate Spatial Studio 20.1 EAR/WAR packaged distribution file for your clustering JEE container. For example, WebLogic Server must deploy Studio from the EAR file, while Tomcat needs the WAR file. Both files can be found on the Spatial Studio downloads page.
  4. Sticky sessions must be enabled in the load-balancer, as sketched below. This keeps every session pinned to the Studio instance that serviced its first request.
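
How the oracle.sgtech.config JVM argument from requirement 1 is actually passed depends on the JEE container. As a minimal sketch, assuming the shared path used above, it could be appended to the server's JVM options in WebLogic's setDomainEnv.sh or in Tomcat's bin/setenv.sh:

    # WebLogic Server: setDomainEnv.sh on every managed server
    JAVA_OPTIONS="${JAVA_OPTIONS} -Doracle.sgtech.config=/net/sgtech/sgtech-config/sgtech_config.json"
    export JAVA_OPTIONS

    # Tomcat: bin/setenv.sh on every Tomcat instance
    export CATALINA_OPTS="${CATALINA_OPTS} -Doracle.sgtech.config=/net/sgtech/sgtech-config/sgtech_config.json"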
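
For the shared paths in requirements 1 and 2, every Studio host needs to mount the same network directory. As a purely illustrative sketch, assuming an NFS server named fileserver exporting /export/sgtech, an /etc/fstab entry on each Studio host could look like this:

    # /etc/fstab on every Studio host (hypothetical NFS server and export path)
    fileserver:/export/sgtech   /net/sgtech   nfs   defaults   0 0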
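
How sticky sessions are enabled depends on the load-balancer in front of the cluster. As a minimal sketch only, assuming Apache HTTP Server with mod_proxy_balancer in front of two Studio instances (host names, ports, and the /spatialstudio context path are placeholders), cookie-based stickiness could be configured like this:

    # Apache httpd: send all requests of an established session to the same instance
    <Proxy "balancer://studiocluster">
        BalancerMember "http://studio1.example.com:8080" route=node1
        BalancerMember "http://studio2.example.com:8080" route=node2
        ProxySet stickysession=JSESSIONID
    </Proxy>
    ProxyPass        "/spatialstudio" "balancer://studiocluster/spatialstudio"
    ProxyPassReverse "/spatialstudio" "balancer://studiocluster/spatialstudio"

For JSESSIONID-based routing to work, each backend container must append the matching route value to its session IDs (for example, via the jvmRoute attribute on Tomcat's Engine element); other load-balancers provide equivalent sticky-session settings.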

You are all set

To verify that the Studio instances are sharing both the config file and the working directory as intended, open the System Status screen on each instance and check that the workingDirectory entry points to your working-directory path:

[Studio System Status]

Now go and unleash the power of Spatial Studio!!
