In this post, with guest co-writers Edwin Biemond (@biemond) and Joel Nation (@joelith), we will explore virtualization with Docker. You may have heard of Docker; it has been getting a lot of interest lately, especially with the recent announcement that Google is using it in its cloud service. Docker allows you to create reusable ‘containers’ with applications in them. These containers can be distributed, will run on several platforms, and are much smaller than the ‘equivalent’ virtual machine images. The virtualization approach used by Docker is also a lot more lightweight than the approach used by hypervisors like VMware and VirtualBox.
The motivation for looking at Docker is twofold. Firstly, a lot of virtual machine images are created for training purposes, e.g. for SOA Suite, and then distributed to Oracle folks and partners around the world. They tend to be 20-40 GB in size, which means that downloading them takes time, unzipping them takes time, and you need plenty of space to store them. Publishing updates to these images is hard, and in reality means you need to download them again. It would be nice to have a better way to distribute pre-built environments like this: a method that allows for much smaller downloads and easy publishing of updates, without sacrificing control over the configuration of the environment, so that you still know what you are going to end up with when you start up the ‘image’.
Secondly, as many of you know, I have a strong interest in Continuous Delivery and automation of the build-test-release lifecycle. Being able to quickly create environments that are in a known state, to use for testing automation, is a key capability we would want when building a delivery pipeline.
Docker provides some capabilities that could really help in both of these areas. In this post, we are just going to focus on the first one, and while we are exploring, let’s also look at how well Docker integrates with other tools we care about – like Vagrant, Chef and Puppet for example.
Docker is a virtualization technology that uses containers. Containers are built on isolation features added to the Linux kernel in recent years; Solaris has had containers (or ‘zones’) for a long time. A container is basically a virtual environment (like a VM) where you can run applications in isolation, protected from applications in other containers or on the ‘host’ system. Unlike a VM, it does not emulate a processor and run its own copy of the operating system, with its own memory and virtual devices. Instead, it shares the host operating system, but has its own file system, built by overlaying sparse file systems on top of each other in layers; you’ll see what this means in practice later on. When you are ‘in’ the container, it looks like you are on a real machine, just like when you are ‘in’ a VM. The difference is that the container approach uses far fewer system resources than the VM approach, since it is not running another copy of the operating system.
This means that more of your physical memory is available to run the actual application you care about, and less of it is consumed by the virtualization software and the virtualized operating system. When you are running VMs, this impact can be significant, especially if you need to run two or three of them.
Containers are pretty mainstream – as we said, Solaris has had them for years, and people have been using them to isolate production workloads for a long time.
You can use Linux containers without using Docker. Docker just makes the whole experience a lot more pleasant. Docker allows you to create a container from an ‘image’, and to save the changes that you make, or to throw them away when you are done with the container.
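The create/save/discard workflow just described maps onto a handful of Docker commands. Here is a minimal sketch of a session, assuming Docker is installed; the image and repository names are illustrative, and `<container-id>` is a placeholder for the id Docker assigns:

```shell
# Start an interactive container from an image
docker run -i -t oraclelinux /bin/bash

# ... make some changes inside the container, then exit ...

# List all containers (including stopped ones) to find the container id
docker ps -a

# Save the changes you made as a new image layer ...
docker commit <container-id> demo/customized

# ... or throw the changes away by removing the container
docker rm <container-id>
```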
These images are versioned, and they are layered on top of other images, so they are reusable. For example, suppose you had five demo/training environments you wanted to use, and they all contained SOA Suite, WebLogic, the JDK, etc. You could put SOA Suite into one image, and then create five more images, one for each of the five demo/training environments, each as a layer on top of the SOA image. Now, if you had one of those five ‘installed’ on your machine and you wanted to fire up one of the others, Docker allows you to just pull down that relatively small demo image and run it right on top of the relatively large SOA image you already have. Read the complete article here.
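In Docker terms, that layering is expressed with a Dockerfile. A minimal sketch of what one of the five demo images might look like; the base image name `mycompany/soa-suite` and the paths are hypothetical, not real published images:

```dockerfile
# Build one demo image as a thin layer on top of a (hypothetical)
# shared base image containing SOA Suite, WebLogic and the JDK
FROM mycompany/soa-suite

# Add only the artifacts specific to this demo environment;
# everything added here becomes a small layer on top of the large base image
ADD demo1-composites /u01/demos/demo1

# Command to run when a container is started from this image
CMD ["/u01/demos/demo1/start.sh"]
```

Anyone who already has the base image only needs to pull this small top layer to get the full environment.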
For regular information, become a member of the WebLogic Partner Community; please visit: http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.