Deployment of a Highly Available Memcached Cluster on Oracle Cloud Infrastructure using Terraform

Abhiram Annangi
Principal Product Manager - Cloud Marketplace

Caching is one of the most effective techniques for speeding up a website and has become a staple of modern web architectures. An effective caching strategy lets you get the most out of your website, eases the pressure on your database, and offers a better experience for users; it is perhaps the single biggest factor in creating an application that performs well at scale. Let's now look at building such a caching layer with the most widely adopted in-memory cache, Memcached.

Memcached is an open source, high-performance, distributed object caching system. It is simply a distributed key/value store that keeps its objects in memory, and it can be thought of as a standalone distributed hash table or dictionary. A typical Memcached system is composed of four elements:

  • Memcached client software, which resides on the application servers that make calls to the Memcached server. There are various libraries of client software available with polyglot support.
  • Memcached server, which runs the Memcached software and stores the key-value pairs.
  • Client-based hashing strategy to distribute the keys across servers.
  • Cache eviction strategy on the server.
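The client-based hashing in the third element can be as simple as hashing the key and taking it modulo the server count to pick a node. Here is a minimal sketch; the server addresses are made up:

```python
import zlib

# Hypothetical Memcached node addresses; the client, not the servers,
# decides which node owns each key.
SERVERS = ["10.0.3.2:11211", "10.0.3.3:11211"]

def server_for(key):
    # Hash the key and map it onto one of the servers. python-memcached
    # uses a crc32-based scheme much like this by default.
    index = zlib.crc32(key.encode("utf-8")) % len(SERVERS)
    return SERVERS[index]
```

With plain modulo hashing, adding a server remaps most keys; consistent hashing (covered in scenario 2) reduces that churn.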

In a typical Memcached setup, the servers are disconnected from each other and are usually unaware of each other. There is no communication between Memcached servers of any kind, such as synchronization or broadcasting. If you are running low on resources on a Memcached server, you can add another server, and you can continue adding servers as your data volume grows; this makes it easy to scale out Memcached servers. Cached items drop out of Memcached as the cache becomes full, which is called cache eviction. In Memcached, the Least Recently Used (LRU) objects are dropped from the cache to create room for newer entries. The most common way to use Memcached is as a demand-filled look-aside cache. For more information on caching strategies, refer to this white paper.
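The demand-filled look-aside pattern can be sketched as follows; `cache` and `db` stand in for a Memcached client and a database connection, and the key name and TTL are illustrative:

```python
def get_user(user_id, cache, db, ttl=300):
    """Demand-filled look-aside read: check the cache first; on a miss,
    read the database and back-fill the cache for later reads."""
    key = "user:%s" % user_id
    value = cache.get(key)
    if value is None:                  # cache miss
        value = db.query(user_id)      # expensive database lookup
        cache.set(key, value, ttl)     # back-fill; expires after ttl seconds
    return value
```

The cache is "demand-filled" because entries appear only when something actually asks for them, and "look-aside" because the application, not the database, manages the cache.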

In this blog post, we discuss how to deploy a simple LAMP stack involving Ubuntu Linux, the Apache 2 web server, a Python Flask application, and a MySQL database, with Memcached, on Oracle Cloud Infrastructure. Flask is a BSD-licensed microframework for Python, based on Werkzeug and Jinja 2.

Subsequently, we scale the application and Memcached instances across multiple availability domains. At each step, we'll give you the necessary Terraform code to automate the deployment. For the purposes of this blog post, we will use the following OCI instance shapes and services. For more information on instance shape selection for Memcached, please refer to this white paper.

  • Oracle Cloud Infrastructure instance shapes: VM.Standard2.4 (application servers), VM.Standard2.2 (Memcached), BM.Standard2.1 (database)
  • Oracle Cloud Infrastructure services: VCN, Internet Gateway, Route Tables, Security Lists, Public Load Balancer pair
  • Operating system: Ubuntu Linux 16.04
  • Application server: Python Flask
  • Memcached client library: python-memcached
  • Database: MySQL Community-edition version 5.7.x


Scenario 1: Single instance LAMP application in a single AD

In this scenario, we start by creating a simple LAMP stack with one instance each of the Apache 2 web server, Memcached server, and MySQL database, within separate subnets in a single availability domain. This is the simplest scenario to start with, suitable when the traffic volume is low and predictable or for a typical Dev/Test environment. This setup not only emphasizes the practice of starting with a simple design and not scaling prematurely, but also lays a good foundation for scaling out instances independently in each tier as traffic volumes grow. When traffic grows, you can add more Memcached instances to handle read-heavy data and more MySQL instances for write-heavy data, thereby scaling out each data tier independently. The high-level deployment architecture is illustrated below.

Let’s go ahead and set up this scenario.

  • Create a VCN with four subnets to house a bastion server, web server, Memcached server, and database server. Make sure the VCN CIDR is big enough to accommodate more subnets in the future. Refer to the VCN Overview and Deployment Guide for more information on how to create a VCN and the associated best practices. Create the bastion and web server in separate public subnets, and Memcached and the MySQL database in separate private subnets. This keeps the cache and database servers secure from public access; only the application and bastion servers are publicly reachable. Here is the VCN with four subnets:
  1. BastionSubnet (Public): a public subnet that can be used as a jump box to access the instances in the private subnets.
  2. AppSubnet (Public): a public subnet with access to the internet through an internet gateway.
  3. CacheSubnet (Private): a private subnet with no internet access, where cache instances reside.
  4. DBSubnet (Private): a private subnet with no internet access, where database instances reside.
  • Attach the following security lists to each subnet to restrict access further. Security list rules are stateful by default, so use the following stateful rules:
  1. Security list for Bastion subnet: Allow ingress access on TCP port 22 from the public internet, to allow SSH access to the Bastion host. Allow egress of all protocols. 
  2. Security list for App subnet: Allow ingress access on TCP port 80/443 for accessing the web application from the public internet. Also, allow ingress access on TCP port 22 to allow SSH access to the application server from BastionSubnet private IP address range only. Allow egress of all protocols. 
  3. Security list for Memcached subnet: Allow ingress access on TCP port 11211 for accessing the Memcached instance from the AppSubnet only. This is because no other instance has to directly access the cache. Also, allow ingress access on TCP port 22 for SSH access from the BastionSubnet private IP address range. Allow egress access to all protocols.
  4. Security list for DB subnet: Allow ingress access on TCP port 3306 for accessing the MySQL instance from the AppSubnet. Also, allow ingress access on TCP port 22 for SSH access from BastionSubnet private IP address range. Allow egress access to all protocols.


In this setup, since the private instances do not have internet access, commands that update apt-get repositories and download the Memcached and MySQL libraries will fail. To work around this, create NAT instances in a public subnet and route internet-bound traffic through them. Refer to this blog post for more information on how to set up NAT instances and automate NAT instance deployment using Terraform.

We will now configure the individual instances. Let’s start with configuring the web server instance.


Configuring web server instance:

Install the Apache2 web server: The server starts listening on port 80 soon after installation. If you would like Apache2 to also listen on 443, include it in the ports.conf file.

sudo apt-get -y update

sudo apt-get -y install apache2
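For reference, having Apache2 also listen on port 443 means adding a Listen directive to the ports.conf file mentioned above. On Ubuntu, the file looks roughly like this (the ssl_module guard ships with the default config):

```apache
# /etc/apache2/ports.conf
Listen 80

<IfModule ssl_module>
    Listen 443
</IfModule>
```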

Allow Apache2 (HTTP and HTTPS) through the instance firewall

sudo apt-get install firewalld -y

sudo firewall-cmd --permanent --add-port=80/tcp

sudo firewall-cmd --permanent --add-port=443/tcp

sudo firewall-cmd --reload

Next, install the Python client libraries for Memcached and MySQL so that the web server can interact with the Memcached and MySQL instances.

sudo apt-get -y install python-pip

sudo -H pip install --upgrade pip

sudo pip install python-memcached

sudo apt-get install python-mysqldb -y


Configuring Memcached instance

Let’s proceed with configuring the Memcached instance. Update the package manager and allow ingress connections through the instance firewall:

sudo apt-get -y update

sudo apt-get install firewalld -y

sudo firewall-cmd --permanent --add-port=11211/tcp

sudo firewall-cmd --permanent --add-port=11211/udp

sudo firewall-cmd --reload

Install the Memcached server. The service starts automatically after installation and listens on port 11211 by default. You can also run Memcached with multiple threads by specifying the "-t" parameter when starting Memcached.

sudo apt-get -y install memcached


Configuring MySQL database instance

Update the package manager and allow ingress connections through the instance firewall:

sudo apt-get -y update

sudo apt-get install firewalld -y

sudo firewall-cmd --permanent --add-port=3306/tcp

sudo firewall-cmd --reload

Let's go ahead with installing and starting the MySQL server. In this scenario, install MySQL version 5.7.x; the installation steps differ for versions 5.5.x and 5.6.x. For more information, refer to MySQL's official documentation.

sudo apt-get install mysql-server -y

The MySQL server automatically starts listening on port 3306 after installation. Next, run the security script provided by MySQL, which changes some of the less secure default options for things like remote logins and sample users. This can be done by running

sudo mysql_secure_installation

The entire deployment highlighted in this scenario can be automated using the following Terraform code.


The Terraform code also contains a sample application (a Python script named scenario-1.py) that can be used to interact with the Memcached and MySQL instances. Upon successful execution, the script should return:

Success! Connected to Memcached instances and MySQL DB.

Please note that since we did not set up NAT, the Terraform code demonstrated here deploys the instances in public subnets. As discussed earlier, you can change this behavior by installing NAT instances and configuring the subnets to be private-only. Here is a script snippet indicating the calls to the Memcached and MySQL instances.

Note: When you create a MySQL database, you can sign in to it as the root user. By default, and by design, remote access to the MySQL database is not permitted. To enable the web server to interact with the MySQL database remotely, create a separate user and grant it the right privileges. When making calls from the web server, use this same username. Here is the snippet to do it.
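A sketch of that user-creation step, run as root on the MySQL instance. The user name, host pattern, and password are placeholders; narrow the host pattern to your AppSubnet range and grant only the privileges your application actually needs:

```sql
-- '10.0.2.%' is a hypothetical AppSubnet range; substitute your own.
CREATE USER 'appuser'@'10.0.2.%' IDENTIFIED BY 'choose-a-strong-password';
GRANT ALL PRIVILEGES ON *.* TO 'appuser'@'10.0.2.%';
FLUSH PRIVILEGES;
```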


Scenario 2: Scaling Memcached - Application with multiple cache instances

Scenario 1 laid a good foundation for scaling the Memcached instances as the traffic volume grows for our web application. We do this by adding an extra Memcached instance in the Memcached subnet and updating the application server’s config file to locate the new cache instance. We can also configure the Memcached client library in the application server to partition the key space using consistent hashing. This setup also ensures load balancing across the cache instances and provides high availability of the cache to a certain extent: if any cache instance goes down, only a subset of the data is lost, which might temporarily put load on the back-end database. The situation can be quickly recovered by bringing up another cache instance. The high-level deployment architecture is illustrated below.


The discovery of the Memcached servers happens by adding the private IP of the second cache server in the scenario-2.py file. You can separate the config from your application code by using a separate config file to add the IPs of Memcached instances as you scale.

memc = memcache.Client(['<cache1-private-ip>:11211', '<cache2-private-ip>:11211'], debug=1)

By default, the hashing mechanism used to divide the keys among multiple servers is crc32. To change the function used, set the value of memcache.serverHashFunction to the alternate function to use. For example:

from zlib import adler32

memcache.serverHashFunction = adler32

If you are interested in using consistent hashing, install the Python module python_ketama or hash_ring. The deployment in this scenario can be automated using this Terraform script. Since we did not set up NAT, the Terraform code deploys the instances in public subnets. You can change this behavior by installing NAT instances and changing the deployment to private subnets.
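For intuition, here is a toy consistent-hashing sketch of the kind python_ketama and hash_ring implement properly: each server gets many points on a ring, and a key belongs to the first server point at or after the key's own hash, so adding or removing a server remaps only a fraction of the keys.

```python
import bisect
import hashlib

class HashRing:
    """Toy consistent-hash ring, for illustration only; real deployments
    should use a vetted library such as python_ketama or hash_ring."""

    def __init__(self, servers, replicas=100):
        # Each server contributes `replicas` points on the ring, which
        # smooths out the key distribution between servers.
        self.ring = sorted(
            (self._hash("%s#%d" % (server, i)), server)
            for server in servers
            for i in range(replicas)
        )
        self.points = [point for point, _ in self.ring]

    @staticmethod
    def _hash(value):
        return int(hashlib.md5(value.encode("utf-8")).hexdigest(), 16)

    def server_for(self, key):
        # First ring point at or after the key's hash, wrapping around.
        index = bisect.bisect(self.points, self._hash(key)) % len(self.points)
        return self.ring[index][1]
```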



Scenario 3: Highly Available LAMP application

In this deployment, we set up a highly available LAMP application by scaling out each tier of our stack across two availability domains. We install the Python Flask application on our web server instances, create two of them across two availability domains, and use Oracle Cloud Infrastructure’s public load balancer to spread inbound traffic across both application servers. In the data tiers, we scale out our Memcached instances across two availability domains and configure our application servers to consistently hash the keys across the Memcached servers in both. For the database, we configure a MySQL primary in one availability domain and a secondary in the other to act as an active/standby pair.

There is no extra configuration element required to enable communication of the instances across different availability domains. This happens out of the box leveraging Oracle Cloud Infrastructure’s built-in SDN network. You can also scale the instances across three availability domains and have the public load balancer spread the web traffic across application servers in all the three availability domains. This applies to the instances in cache and DB tier as well.


Since subnets cannot span availability domains, we create separate subnets for the application, cache, and database tiers in the second availability domain. Attach the same security lists we used in scenario 1 to these subnets.

The additional subnets in this setup are:

  1. AppSubnet2 (Public): a public subnet with access to the internet through an internet gateway.
  2. CacheSubnet2 (Private): a private subnet with no internet access, where cache instances reside.
  3. DBSubnet2 (Private): a private subnet with no internet access, where DB instances reside.
  4. BastionSubnet2 (Public): a public subnet that can be used as a jump box to access the instances in the private subnets.

Install and start the Python Flask server on both web instances in the application subnets:

sudo apt-get install -y python-setuptools

sudo easy_install pip

sudo pip install flask

Now, you can configure your own Flask application or use the Flask application given in this blog post. Deploy the public load balancer pair in BastionSubnet and BastionSubnet2 to load balance traffic to the application servers. We have configured the Flask application server to listen on port 8080. To allow port 8080 into our application instances, we need to edit the security list rules and the instance firewalls.

Security List for App subnet: Allow ingress access on TCP port 8080 for accessing the Flask application from the public internet.

sudo firewall-cmd --permanent --add-port=8080/tcp

sudo firewall-cmd --reload

The high-level deployment architecture is illustrated below.


The Flask application in this example runs on port 8080 using Flask's built-in web server. We are not using Apache2 to proxy traffic to Flask, as that adds extra configuration overhead; if you would like to do so, refer to this documentation. We also need to add a listener for TCP port 8080 on Oracle Cloud Infrastructure’s load balancer to route traffic to the Flask application instances.

The sample application in this scenario does the following:

  • When invoked the first time, it loads data from MySQL database and populates the cache. The MySQL database in this example holds a pre-populated collection of movies, which get loaded into the cache.
  • When invoked the second and subsequent times, the items are fetched directly out of cache, instead of doing a database lookup.

The deployment in this scenario, including running the Flask application server on web instances, can be automated using this Terraform script. The Flask application resides in scenario-3.py file.


There is one additional step required before we start testing our application and Memcached deployment. Let’s go ahead and download the sample dataset of movies needed to populate the MySQL database. Here are the steps:

Log in to your primary MySQL instance in AD1 and download the dataset:

curl -L http://downloads.mysql.com/docs/sakila-db.tar.gz | tar -xz

Once downloaded, you can feed the dataset into the MySQL instance by running

mysql -u root -p < sakila-schema.sql

mysql -u root -p < sakila-data.sql

Now run mysql -u root -p and enter your password. This takes you into the MySQL shell with the default database. To use the downloaded dataset (sakila), enter

use sakila;

To see the tables in the sakila dataset, enter

show tables;

Now, we are all set. We downloaded the movies dataset to our MySQL database and we can now proceed with testing. Upon accessing the Flask application from the internet for the first time using the Load Balancer’s Listener IP address, you should see the following displayed on your web browser.

Updated Memcached with MySQL data

This is when the data is loaded from the MySQL database and stored in the Memcached server.

Upon accessing the second time, you should see the items retrieved out of Memcached and the following should be displayed on the browser.

Loaded data from Memcached






This concludes our discussion on how to deploy and scale Memcached using a LAMP stack on Oracle Cloud Infrastructure.


Future extensions

We looked at how to deploy and scale Memcached instances on Oracle Cloud Infrastructure. There are many topics and services that were not covered in this blog post, which can be particularly helpful for a large-scale deployment.

  • Auto service discovery – Currently, when we create new Memcached instances, there is no way for the application servers to automatically detect the private IP addresses of the new cache instances and start sending traffic to them; the addresses have to be updated manually in the application servers’ config file. To alleviate this, we can use a centralized service for maintaining configuration information and providing distributed synchronization. Various open source services do this, such as ZooKeeper, etcd, and Consul. With such a centralized service, new Memcached instances register their IPs automatically, and the application servers no longer have to track them manually.
  • Containerization – We used Virtual Machines (VMs) as our fundamental unit of deployment. Instead, we can use Docker containers, deploying our application, Memcached, and MySQL instances as containers. This has many benefits:
    • Platform independence – build it once, run it anywhere
    • VM Resource efficiency and density
    • Improved development velocity
  • Container Orchestration – Once you have containerized Docker images of your LAMP application, you can leverage container orchestration services such as Kubernetes or Apache Mesos to deploy and manage the containers. This brings benefits like autoscaling of containers, dynamic resource scheduling, and centralized service discovery, and is the ideal way to build an application that is fully cloud native and easy to deploy and scale.
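The auto service discovery idea above can be sketched independently of any particular registry: the application periodically asks the registry for the current cache node list and rebuilds its Memcached client only when that list changes. The class and callable names here are illustrative, not a real ZooKeeper/etcd/Consul API; real registries push changes to watchers instead of being polled.

```python
class CacheClientManager:
    """Rebuilds the cache client whenever the registered node list changes."""

    def __init__(self, fetch_servers, make_client):
        self.fetch_servers = fetch_servers  # callable returning current node list
        self.make_client = make_client      # e.g. memcache.Client
        self.servers = None
        self.client = None

    def get_client(self):
        current = sorted(self.fetch_servers())
        if current != self.servers:         # node list changed: rebuild client
            self.servers = current
            self.client = self.make_client(current)
        return self.client
```

With this in place, scaling the cache tier means registering the new instance's IP; application servers pick it up on their next lookup.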


In my following blog posts, I will demonstrate how to deploy and scale Redis on OCI, and subsequently how to containerize your applications and orchestrate them automatically using Kubernetes.


