Are you interested in running a database in containers, but wondering whether it's reasonable, or even possible, to do so?
The answer is: yes, it is possible. And it can even be beneficial when done the right way. But how do you make sure you run your database in containers "the right way"? Containers in their most popular current form (Docker) have only been around since about 2013. The leading container orchestrator, Kubernetes, has only been open source since 2014, with its 1.0 release in 2015. With such a short history, it's hard to find solid guidance on best practices for running much of anything in containers, let alone something as sensitive as a database.
Not every database should go in a container, but there are plenty of cases where running a database in a container makes sense and can even improve the way your team works.
Great use cases for containerized databases include:
Test environments - say you need to test a new product or feature that pulls data from a database. Testing is much more streamlined if you can spin up a fresh database with test data for each run, rather than sharing one big test database, possibly with other teams.
Development environments - say your developers are working on a product or feature that needs to access a database. They may need to verify that their code can reach a database, but may not care exactly what the data is, so long as it has the right format; a small sample of data may be all they need. Imagine being able to efficiently spin up many small databases that your developers can use in their development process.
Demos - say you need to put together a demo that's easy to spin up and tear down, and the demo needs a database. In most cases it's more convenient to spin up everything together than to depend on access to a remote database. Containers provide a great way to package up that demo and deploy it.
Generally speaking, if your use case could be satisfied by a relatively small sample database that can be spun up quickly on-demand, a containerized database might be a good fit.
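To make the idea concrete, here is a minimal sketch of such a disposable database using Docker Compose. The credentials, database name, and seed-data directory are placeholders you would adapt to your own setup; the environment variables and the init directory are documented features of the official mysql image:

```yaml
# docker-compose.yml - a disposable MySQL instance for tests, dev, or demos
services:
  db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: example   # throwaway credential, local use only
      MYSQL_DATABASE: testdb         # created automatically on first start
    ports:
      - "3306:3306"
    volumes:
      # any .sql files in this directory are executed on first startup,
      # which is a convenient way to load a small sample dataset
      - ./seed-data:/docker-entrypoint-initdb.d:ro
```

Running `docker compose up` gives each tester or developer their own database seeded with sample data, and `docker compose down` throws it all away again.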
Here is a case where you should be careful before containerizing your database:
Production environments with strict performance requirements - this is primarily a bit of "If it ain't broke, don't fix it" wisdom. Containers are still a relatively young technology, and although there are use cases where they provide performance benefits, they may not be the best choice when a full-fledged production environment demands strict, extremely high performance and reliability.
For example, containers are a great way to spin up a pre-packaged application reliably. Container orchestrators, especially Kubernetes, expect containers to die unexpectedly (a possibility that should be accounted for in VM or bare metal environments too, to be fair). They account for that possibility by providing good tools for creating and managing many instances of the same thing (HA), and by automatically starting new containers to replace failing ones. But spinning up a new container, or failing over to another container in the cluster, does take some time. Getting used to the way container failures should be handled also takes time, and you'll want to make sure your team is ready before you take the leap into running your production databases in containers.
MySQL is popular with both developers and DevOps engineers, which makes it an excellent candidate for a containerized database. But running MySQL the right way requires knowledge of both MySQL and containerized infrastructure. The team at Oracle has worked to simplify running and managing MySQL in containers so that you can do it the right way, without having to learn everything there is to know about how containers and MySQL should work together.
The MySQL Operator for Kubernetes essentially teaches Kubernetes how to run MySQL the right way. It makes MySQL clusters a first-class resource type in the Kubernetes API, via a custom resource definition. It knows how many instances you need for a minimally HA cluster in your Kubernetes environment. It lets you create and manage backups of your MySQL containers more easily. It even teaches Kubernetes what to do if one of your MySQL database instances goes down: auto-heal by creating a new instance and adding it back into the cluster (though you may need to do some configuration around your backups to get fully back up and running).
By using the MySQL Operator, you can create an HA MySQL cluster, within your Kubernetes cluster, with ease.
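As a rough sketch of what that looks like in practice, an HA cluster is declared with the operator's InnoDBCluster custom resource. The cluster name, secret name, and instance counts below are placeholders, and the exact fields can vary between operator versions, so treat this as illustrative rather than authoritative:

```yaml
apiVersion: mysql.oracle.com/v2
kind: InnoDBCluster
metadata:
  name: mycluster            # placeholder cluster name
spec:
  secretName: mypwds         # Secret holding the MySQL root credentials
  tlsUseSelfSigned: true     # fine for experiments; use real certificates in production
  instances: 3               # three MySQL server instances for minimal HA
  router:
    instances: 1             # MySQL Router instance that fronts the cluster
```

After creating the referenced Secret with the root credentials, applying this manifest with `kubectl apply -f` asks the operator to create the cluster and keep it at the declared size, replacing instances that fail.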
So in a nutshell, the MySQL Operator teaches Kubernetes how to run MySQL in containers, the right way, with as little outside help as possible. Of course, that doesn't mean you won't have to learn anything new. Running a database in containers has its own quirks and differences from traditional database paradigms, and the team in charge will have to learn about them to manage it efficiently. But over time, you'll likely find that running databases in containers lets your team and your business do new things and solve new problems that you couldn't have tackled with more traditional deployment methods.
So why not spend a little time to try something new that could innovate the whole way your team handles databases?
You can learn about everything mentioned in this post and try out the MySQL Kubernetes Operator for yourself by following our quickstart guide. You'll not only set up a containerized MySQL database in a Kubernetes cluster, you'll also learn about the features and configuration options of the MySQL Operator that let you tailor your containerized MySQL database to your needs.
If you want to explore the MySQL Kubernetes Operator code and documentation on your own, check out the GitHub repo.