Wednesday Dec 24, 2014

What is the Wonder Machine?

It’s been long in coming, but we are now ready to get into the meat of things over the next few posts.

The Problem

Wonder Machine is a design of a demo environment that is particularly suited to demonstrating end-to-end integrated scenarios. The problem the Wonder Machine is meant to solve is a large memory footprint of an integrated setup (two or more applications, middleware, databases) contrasted with limited memory in a demo laptop. While the obvious solution would be to build the integrated setup on a server with abundant memory, we keep running into situations where we do not have a reliable connection from the client site where we do the demo – either because we can't get a reliable Internet connection or because of the client's security policies. For those situations, we need to have a setup that we can carry to the demo.

We'll see shortly that the Wonder Machine design also helps us when running server-based demo environments – by making it possible to clone it and run multiple copies concurrently in a data center.

How is it going to work? Consider as an example the integration between the Oracle Utilities applications Customer Care and Billing (CC&B), Meter Data Management (MDM) and Operational Device Management (ODM). All the integration flows are routed through an instance of Oracle SOA Suite (SOA). We cannot fit an integrated setup that requires a total of 9 GB of memory into a laptop that has only 8 GB. What we may be able to do is split the setup so that it runs on two 8 GB laptops. If the integrated setup is made up of several discrete components, interacting with one another over a network, we can host some components in VM1 running on Laptop1, and others in VM2 running on Laptop2, like this:

Here the arrows indicate network connections between components, i.e. CC&B, MDM and ODM each connect to the SOA Suite server, but do not connect to one another directly.

Distributing the environment between two laptops solves the limited memory problem, but how do we specify the network connections in the integration points? We could run the two laptops on a private network (set up a dedicated portable router and assign static IP addresses) or we could use the DNS names of the two VMs. This will work for the components laid out as in the picture, but it will not be easy to change when we have different hardware available (laptops with more or less memory). For example, if we have to reduce the VMs to no more than 4 GB each, we would want to be able to run our integration on three VMs:

and once we upgrade to a laptop with 16 GB of memory, we would like to run everything in a single 12 GB VM:

In general, the URLs of the integration points are configured at installation time, and are not easy to change afterwards (at least, not easy for a functional sales consultant who did not perform the installation). Specifying URLs with explicit IP addresses or DNS names makes it harder to reconfigure the environment to match available hardware configuration.

It turns out there is a better way.

Key Design Principles

The Wonder Machine design boils down to these key principles:

  1. Modularity – each of the components of an integrated setup should be installed in a separate directory, and should not access files or otherwise depend on other modules. There is a special common module that holds data and programs shared by all modules – so module installations may assume that the common module exists, but may not rely on the existence of other, non-common modules. It is OK to connect to other components over the network, but an installation should not reference a file or directory in another, non-common module. To preserve modularity, we need to adhere to a specific directory structure.

  2. Logical server names – any connection URLs and hostnames specified as part of the installation should use logical server names instead of DNS names, hostnames, IP addresses, or anything else that is tied to the physical layout of the components or the networking landscape. We want to be network-independent! Logical server names are abstract names tied to the function of a component, rather than to any specific host on the network. For example, when specifying the URL to connect to the CC&B application server, we would use the logical server name ccbappserver; to connect to the SOA Suite server we could use soaserver, and so on – whether these components happen to be running on the same physical or virtual server, or on different servers. The logical server names are translated to IP addresses using the hosts file. Then, if the networking landscape changes, all we have to do is update the hosts file with the new IP addresses – no need to reconfigure the applications or integrations!

    Note that network independence is a very strong requirement – when implemented correctly, there should be no references to any IP addresses, hostnames, DNS names or any other names besides the logical server names anywhere in the installation of the applications or the integrations. All the integration/functional scenarios should work fine after the environment is moved to a different IP address, after the hostname or DNS name of the server is changed, or after the server is cloned on the same network. Any network-dependent names or addresses can cause problems when the environment is cloned, or when components are split apart onto separate servers. For example, after the components are split apart, a connection may no longer work because the target component is no longer running on the same VM or IP address as the calling component. Worse, a component of the cloned environment could be making connections to a component of the original environment. This type of situation can be very hard to test and troubleshoot (it all appears to work while you are building it), so you should take the utmost care when installing and configuring software in this fashion.

    There are situations where the hostname will slip into the configuration without your active participation – it may be defaulted for you or substituted without your knowledge by the installation program. When installing a module, it is good practice to set the hostname of the server to match the logical server name of the module (i.e. set the server's hostname to ccbappserver). This helps ensure that even configuration items that would be silently defaulted to the hostname are made to use the logical server name instead. After the installation and deployment of the component is completed, the server's hostname can be changed back to the original (or to the logical server name of another component that will be installed next).

    Using localhost in the configuration of the server is not allowed. Because localhost is always mapped to IP address 127.0.0.1, it implies that the target of the connection resides on the same server (same OS instance) as the program making the connection. This may be true when the environment is being built, but it may no longer be true after the environment is split among multiple VMs.

    Normally, if all the components are installed on a single VM, all the logical server names can be pointed to 127.0.0.1 in the hosts file – all the network connections between components are internal to the VM. This way, multiple copies of the environment can run on a single network with no need for reconfiguration and no hosts file updates. We only need to update the hosts file when splitting the environment to run on two or more VMs.
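In practice, splitting an environment then amounts to nothing more than a hosts file edit. Here is a minimal sketch of what that edit looks like, using GNU sed on Linux; it works on a local copy rather than the real /etc/hosts, and the logical names and IP addresses are the illustrative ones used in this post:

```shell
# Build a local copy of the hosts entries (on a real VM you would edit /etc/hosts).
cat > hosts.demo <<'EOF'
127.0.0.1    ccbappserver ccbdbserver
127.0.0.1    mdmappserver mdmdbserver
127.0.0.1    odmappserver odmdbserver
127.0.0.1    soaserver soadbserver
EOF

# Move the ODM and SOA Suite modules to a second VM at 192.168.0.5
# by repointing just their logical server names:
sed -i 's/^127\.0\.0\.1\(.*odmappserver\)/192.168.0.5\1/' hosts.demo
sed -i 's/^127\.0\.0\.1\(.*soaserver\)/192.168.0.5\1/' hosts.demo

cat hosts.demo
```

On a real split, the mirror-image edit is made on the second VM (pointing ccbappserver and mdmappserver at the first VM's address); the applications themselves are never reconfigured.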

Implementation Features

The above two points define the essence of the Wonder Machine. In addition, there are some design choices that we have made in the Wonder Machine implementation at Oracle Utilities Global Business Unit (UGBU). They are not absolute requirements, but they make the Wonder Machine easier to distribute to end users, easier to clone and deploy on hosted servers, and more practical to run on laptops with limited memory.

  1. Wonder Machine is meant to be a virtualized environment. This is driven by ease of distribution (non-technical end users can download a VM and run it); it also gives us an easy way to archive environments and build a library of environments. At Oracle, we use Oracle VM VirtualBox to build Wonder Machine environments and to run them on laptops and desktops, but the Wonder Machine itself is not dependent on a particular virtualization tool. Once an environment is built, we often transfer it to a server as a runtime copy (to be used in demos). We use virtualized servers that run on Oracle VM. In some cases it may be more convenient to deploy the Wonder Machine directly on a physical server or laptop, without a virtualization layer.

  2. All the components needed for an integrated scenario are installed on a single VM. It would be possible to create separate VMs for each application and for the integration layer, but this would have several practical drawbacks. We would usually need to run two or more VMs per laptop when demonstrating integration, and that would incur unnecessary overhead. We would be running multiple copies of the guest operating system – which would increase the memory and disk space footprint. An integrated setup that would just fit in a 16 GB laptop when built as a single VM, might require two or more laptops when each component is a separate VM.

    This makes the Wonder Machine VMs rather large. Fortunately, we are able to deliver slimmed down VMs to the end users by distributing subsets of the original master environment. Thanks to the modular design, we can simply remove the modules that are not needed in a particular situation and deliver only what is required – this reduces the disk space requirement and the download time for the end user. For example, we may build a CC&B - MDM - ODM integrated environment on our master Wonder Machine VM. From there we can create several end user packages: the full CC&B - MDM - ODM integration, the two-point CC&B - MDM and MDM - ODM integration packages, and the single application packages for CC&B, MDM and ODM. There is no need to build the single application VMs separately from the integrated environment – with the modular design we can reuse the work.

  3. All the components needed for the largest integrated scenario are installed on a single VM. This makes the Wonder Machine VM even larger. We want to be able to show the complete end to end business process. Although CC&B - MDM and MDM - ODM are two distinct integrations, we want to be showing them in a single environment, because that's what makes sense from the business point of view. Thus our demo environment needs to include all the applications and all the integrations that might be needed for the largest end to end process. If the solution relevant to a particular client does not include some applications, it is easy to cut the bigger integrated setup down to a subset (e.g. CC&B - MDM). It is much harder to combine two-point integrated environments that were built separately (e.g. CC&B - MDM and MDM - ODM) into a single environment (CC&B - MDM - ODM).

  4. Wonder Machine environments are built on Oracle Linux, in a VirtualBox VM. Although, with the right level of expertise and effort, a similar environment could be built on Windows, with Linux we get a universe of useful free tools and readily available online advice. One tool that turned out to be particularly important to us is rsync – we use it to clone and transfer the environments to hosted servers.

  5. Wonder Machine environments need to be reliable and easy to use. Most of the intended users of our demo environments are non-technical, and they are generally not familiar with UNIX or Linux environments. They use the environments under stressful conditions – the portable VM hosted on a laptop needs to be started at the client site, minutes before the demo is to start – when chances of human error are high. Therefore, making the Wonder Machine environments reliable, intuitive, simple and easy to use is critical for the success of the Wonder Machine project, and the user experience aspect is a very important part of building Wonder Machine environments. Here are some design features meant to support usability:
    • Graphical desktop. All Wonder Machine environments should provide the user with a graphical desktop environment (using GNOME) starting by default. Servers can be started and shut down using desktop shortcuts.
    • Server start/stop scripts. For those users who prefer a command line interface, and to facilitate management of the environments hosted in a data center, Wonder Machine environments provide simple shell scripts to start and stop the servers. Desktop shortcuts and automatic startup/shutdown call these scripts to ensure consistent results, regardless of the method used to start or stop a server.
    • Automatic startup and shutdown of the applications on system startup and shutdown. This is normally used for environments hosted in a data center, to minimize management workload and improve availability (the environments should start automatically following data center maintenance or reboot for other reasons, without manual intervention).
    • Supporting utilities. Wonder Machine environments come with a set of utilities (database autostart/stop, SQL*Developer, scripts for export/import of OUAF databases, migration scripts, Samba for file sharing, VNC for graphical remote access) that the end user can rely on.
    • Minimize size. We try to minimize the size of the complete Wonder Machine package – both the disk space required to run the VM on the user's laptop and the size of the compressed package that needs to be downloaded. To start, we clean up the installation and do not leave behind files that are no longer needed (such as the installation media). As part of the packaging process, we try to recover unused space by compacting the virtual disks in VirtualBox, and replace disks that do not contain valuable information (swap, tmp and work disks) with the original versions that have never been written to (and are therefore very small). Finally, we compress the VM using the "maximum" or "ultra" settings of the compression utility. We'll cover the packaging process in detail in a later post.
    • Consistent user experience. It is very important that the look and feel, and the ways users interact with the environment (e.g. start/stop scripts, supporting utilities, application URLs and login credentials), are consistent between different Wonder Machine packages and between different applications within a package. This includes the end user experience, and extends to administration and maintenance (e.g. applying patches). It implies following patterns in directory structure and scripting, and deviating from the patterns only when there is a good reason. There are often multiple ways to accomplish the same result, and they may be equally good from the technical perspective. It is often much easier, especially when the work is done by multiple people, to just make a choice and move on with the task; it takes more effort to make sure that the pattern is consistently followed – but following the pattern results in a higher quality product. Consistency makes the environments easier to learn and more intuitive to use. Consistency reduces human error. This is a point that cannot be stressed enough.
    • Managed change. The above does not mean that there must never be any change – perfect is the enemy of good. When making changes, we simply need to consider the impact on the users and roll them out so as to minimize disruption.
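To make the start/stop script idea concrete, here is a minimal sketch of such a wrapper; the file name, messages and structure are illustrative only, not the actual Wonder Machine scripts (which would launch and shut down real application servers). The point is that desktop shortcuts, command-line users and the automatic startup sequence all call the same script, so every method behaves identically:

```shell
# Create a hypothetical start/stop wrapper for one module (CC&B here).
cat > ccb.sh <<'EOF'
#!/bin/sh
case "$1" in
  start) echo "Starting CC&B application server..." ;;  # real script would launch the server
  stop)  echo "Stopping CC&B application server..." ;;  # real script would shut it down
  *)     echo "Usage: $0 {start|stop}"; exit 1 ;;
esac
EOF
chmod +x ccb.sh

./ccb.sh start    # prints "Starting CC&B application server..."
```

A desktop shortcut would simply invoke `./ccb.sh start`, and the system shutdown sequence `./ccb.sh stop`, giving consistent results regardless of how the server is managed.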

It follows that, although the concepts are broad and open, the guidelines for building a Wonder Machine environment need to be quite prescriptive.

How Does This Work?

Let us revisit the example of the CC&B - MDM - ODM integration, with the environment built as above. We have a single VM that contains the entire integrated setup: three applications and the integration layer. If we have a laptop with sufficient memory (16 GB), we can just run the whole VM on that single laptop, no changes required.

Here is the hosts file inside the VM:

VM1 – 192.168.0.4:

127.0.0.1    tugbu-olvm-1 localhost
127.0.0.1    ccbappserver ccbdbserver
127.0.0.1    mdmappserver mdmdbserver
127.0.0.1    odmappserver odmdbserver
127.0.0.1    soaserver soadbserver

If we don't have a laptop with 16 GB of memory, but have two laptops with 8 GB each, then we can make a copy of the Wonder Machine on each laptop. We decide that on the first laptop we'll run a VM with only CC&B and MDM, and on the second laptop we'll run a VM with only ODM and SOA Suite – so as not to exceed the available memory:

We connect the two VMs to the network, and update the hosts files within the VMs as follows:

VM1 – 192.168.0.4 (running CC&B, MDM):

127.0.0.1    tugbu-olvm-1 localhost
127.0.0.1    ccbappserver ccbdbserver
127.0.0.1    mdmappserver mdmdbserver
192.168.0.5  odmappserver odmdbserver
192.168.0.5  soaserver soadbserver

VM2 – 192.168.0.5 (running ODM, SOA Suite):

127.0.0.1    tugbu-olvm-1 localhost
192.168.0.4  ccbappserver ccbdbserver
192.168.0.4  mdmappserver mdmdbserver
127.0.0.1    odmappserver odmdbserver
127.0.0.1    soaserver soadbserver

Now, because we have built our VM using the logical server names from the beginning, all the integrations work as before, even though initially all connections were made inside a single VM, and now we are connecting between two networked servers, running on two separate pieces of hardware! And because we have built it in a modular way, we can simply delete the inactive modules (marked in light grey) to save disk space. In effect, we have an integrated setup that is a logical collection of modules, which can be run on an arbitrary collection of physical servers (with the constraint that each module must fit in some server, i.e. we cannot split a module that is too big). We did not have to plan for the specific hardware configuration when building the integrated environment, but we did have to plan for this flexibility by building it according to the Wonder Machine guidelines.
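The slim-down step can be sketched in a few lines of shell. The directory names below are illustrative only – they are not the actual Wonder Machine layout – but they show why modularity makes the deletion safe:

```shell
# Illustrative module layout: one directory per module, plus the shared "common" module.
mkdir -p wm_root/common wm_root/ccb wm_root/mdm wm_root/odm wm_root/soa

# This copy of the VM will run only ODM and SOA Suite, so the CC&B and
# MDM modules can simply be removed – modularity guarantees that no
# remaining module references their files.
rm -rf wm_root/ccb wm_root/mdm

ls wm_root    # lists: common odm soa
```

Because modules connect to one another only over the network (via logical server names), removing a module's directory can never break the modules that stay behind.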

Sunday Jun 16, 2013

Who Are We?

Before we dive (soon!) into design issues, I have to tell you about the organization where the Wonder Machine was developed. Oracle Utilities Global Business Unit (UGBU) is the part of Oracle that focuses on solutions for utilities – companies that provide electricity, gas, water, sewer, waste and other services to residential, commercial and industrial customers. Some of the products in our application portfolio are:

  • Oracle Utilities Customer Care and Billing (CC&B)
  • Oracle Utilities Meter Data Management (MDM)
  • Oracle Utilities Operational Device Management (ODM)
  • Oracle Utilities Smart Grid Gateway (SGG)
  • Oracle Utilities Mobile Workforce Management (MWM)
  • Oracle Utilities Work and Asset Management (WAM)
  • Oracle Utilities Customer Self-Service (OUCSS)
  • Oracle Utilities Network Management System (NMS)

Why is this important? If you have worked with us before, you already know us and what we offer. If not, and you would like to find out more, please visit our Oracle Utilities portal. If you have no particular interest in utility billing, meters, assets or distribution networks, the discussion in this blog will still be relevant. I will use the names and acronyms of existing UGBU products in the examples that follow. If you are not familiar with our applications, just substitute “Application A” for CC&B, “Application B” for MDM, and so on. The UGBU context will be useful to those who want to build installations of Oracle Utilities applications, but the Wonder Machine concept is just as applicable to any enterprise applications running on the Oracle technology stack.

Sunday Dec 09, 2012

What's My Problem? What's Your Problem?

Software installers are not made for building demo environments. I can say this much after 12 years (on and off) of supporting my fellow sales consultants with environments for software demonstrations.

When we release software, we include installation programs and procedures that are designed for use by our clients – to build a production environment and a limited number of testing, training and development environments.

Different Objectives

Your priorities when building an environment for client use vs. building a demo environment are very different. In a production environment, security, stability, and performance concerns are paramount. These environments are built on a specific server and rarely, if ever, moved to a different server or different network address. There is typically just one application running on a particular server (physical or virtual). Once built, the environment will be used for months or years at a time. Because of security considerations, the installation program wants to make these environments very specific to the organization using the software and the use case, encoding a fully qualified name of the server, or even the IP address on the network, in the configuration. So you either go through the installation procedure for each environment, or learn how to clone and reconfigure the software as a separate instance to build all your non-production environments.

This may not matter much if the installation is as simple as clicking on the Setup program. But for enterprise applications, you have a number of configuration settings that you need to get just right – so whether you are installing from scratch or reconfiguring an existing installation, this requires both time and expertise in the particular piece of software. If you need a setup of several applications that are integrated to talk to one another, it is a whole new level of complexity. Now you need the expertise in all of the applications involved (plus the supporting technology products), and in addition to making each application work, you also have to configure the integration endpoints. Each application needs the URLs and credentials to call the integration layer, and the integration must be able to call each application. Then you have to make sure that each app has the right data so a business process initiated in one application can continue in the next. And, you will need to check that each application has the correct version and patch level for the integration to work.

When building demo environments, your #1 concern is agility. If you can get away with a small number of long-running environments, you are lucky. More likely, you may get a request for a dedicated environment for a demonstration that is two weeks away: how quickly can you make this available so we still have the time to build the client-specific data? We are running a hands-on workshop next month, and we’ll need 15 instances of application X environment so each student can have a separate server for the exercises. We cannot connect to our data center from the client site, the client’s security policy won’t allow our VPN to go through – so we need a portable environment that we can bring with us. Our consultants need to be able to work at the hotel, airport, and the airplane, so we really want an environment that can run on a laptop. The client will need two playpen environments running in the cloud, accessible from their network, for a series of workshops that start two weeks from now. We have seen all of these scenarios and more. Here you would be much better served by a generic installation that would be easy to clone.

Welcome to the Wonder Machine

The reason I started this blog is to share a particular design of a demo environment, a special way to install software, that can address the above requirements, even for integrated setups. This design was created by a team at Oracle Utilities Global Business Unit, and we are using this setup for most of our demo environments. In a bout of modesty we called it the Wonder Machine. Over the next few posts – think of it as a novel in parts – I will tell you about the big idea, how it was implemented and what you can do with it. After we have laid down the groundwork, I would like to share some tips and tricks for users of our Wonder Machine implementation, as well as things I am learning about building portable, cloneable environments. The Wonder Machine is by no means a closed specification; it is under active development!

I am hoping this blog will be of interest to two groups of readers – the users of the Wonder Machine we have built at Oracle Utilities, who want to get the most out of their demo environments and be able to reconfigure them to their needs – and to people who need to build environments for demonstration, testing, training or development, and would like to make them cloneable and portable to maximize the reuse of their effort. Surely we are not the only ones facing this problem? If you can think of a better way to solve it, or if you can help us improve on our concept, I will appreciate your comments!

About

Wonder Machine is a way to build software demonstration environments that emphasizes portability, reuse of work, and ease of use. Wonder Machine is particularly well-suited to scenarios that have multiple applications with integration. It has been developed at Oracle Utilities Global Business Unit.
