Diving into OpenStack Network Architecture - Part 3 - Routing

In the previous posts we looked at the basic components of OpenStack networking and described three simple use cases that show how network connectivity is achieved. In this short post we will continue exploring the networking setup by looking at a more sophisticated (but still fairly basic) use case: routing between two isolated networks. Routing uses the same basic components to achieve inter-subnet connectivity, plus another namespace that acts as an isolated container for forwarding traffic from one subnet to the other.

As a reminder from the first post, this is just an example using the out-of-the-box OVS plugin. This is only one of the options for networking in OpenStack; there are many plugins that use different means.

Use case #4: Routing traffic between two isolated networks

In a real-world deployment we would like to create different networks for different purposes, and to be able to connect those networks as needed. Since the two networks have different IP ranges, we need a router to connect them. To explore this setup we will first create an additional network called net2, using 20.20.20.0/24 as its subnet. After creating the network we will launch an Oracle Linux instance and connect it to net2. This is how the setup looks in the network topology tab of the OpenStack GUI.
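For reference, this setup can also be reproduced from the command line. The sketch below uses the network name and CIDR from this walkthrough; the image and flavor names are placeholders for whatever your deployment provides:

```shell
# Create net2 and its 20.20.20.0/24 subnet, then boot an instance on it.
# Image and flavor names are placeholders; substitute your own.
neutron net-create net2
neutron subnet-create net2 20.20.20.0/24

# Attach the new instance to net2 by the network's UUID
# (printed by net-create, or found with "nova net-list").
nova boot --image OracleLinux --flavor m1.small \
    --nic net-id=63b7fcf2-e921-4011-8da9-5fc2444b42dd instance2
```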

If we explore further we can see that another namespace has appeared on the network node; this namespace serves the newly created network. Now we have two namespaces, one for each network:

# ip netns list
qdhcp-63b7fcf2-e921-4011-8da9-5fc2444b42dd
qdhcp-5f833617-6179-4797-b7c0-7d420d84040c
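The namespace names follow a simple convention: qdhcp- followed by the network's UUID, so a network's namespace name can be derived directly from its ID. A trivial sketch:

```shell
# A DHCP namespace name is "qdhcp-" plus the network UUID.
net2_id="63b7fcf2-e921-4011-8da9-5fc2444b42dd"   # net2's UUID
ns_name="qdhcp-${net2_id}"
echo "$ns_name"   # qdhcp-63b7fcf2-e921-4011-8da9-5fc2444b42dd
```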

To match each network with its ID we can use nova net-list or simply look at the network information in the UI:

# nova net-list
+--------------------------------------+-------+------+
| ID                                   | Label | CIDR |
+--------------------------------------+-------+------+
| 5f833617-6179-4797-b7c0-7d420d84040c | net1  | None |
| 63b7fcf2-e921-4011-8da9-5fc2444b42dd | net2  | None |
+--------------------------------------+-------+------+
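If you need this mapping in a script, the table can be parsed with awk. The net_id_for helper below is a hypothetical sketch, demonstrated on the table rows shown above; against a live cloud you would pipe the nova net-list output into it instead:

```shell
# Extract a network's ID from the "nova net-list" table by its label.
# Fields are pipe-separated: $2 is the ID column, $3 the Label column.
net_id_for() {
  awk -F'|' -v label="$1" '$3 ~ label { gsub(/ /, "", $2); print $2 }'
}

# Demonstration with the rows from the table above:
printf '%s\n' \
'| 5f833617-6179-4797-b7c0-7d420d84040c | net1  | None |' \
'| 63b7fcf2-e921-4011-8da9-5fc2444b42dd | net2  | None |' \
| net_id_for net2   # prints net2's UUID
```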

Our newly created network, net2, has its own namespace, separate from net1's. Looking into the namespace we see that it has two interfaces: a loopback and a tap interface with an IP address, which will also serve DHCP requests:

# ip netns exec qdhcp-63b7fcf2-e921-4011-8da9-5fc2444b42dd ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
19: tap16630347-45: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:bd:94:42 brd ff:ff:ff:ff:ff:ff
    inet 20.20.20.3/24 brd 20.20.20.255 scope global tap16630347-45
    inet6 fe80::f816:3eff:febd:9442/64 scope link
       valid_lft forever preferred_lft forever

Those two networks, net1 and net2, are not connected at this point. To connect them we need to add a router and attach both networks to it. OpenStack Neutron provides users with the capability to create a router connecting two or more networks; this router is simply an additional namespace.

Creating a router with Neutron can be done from the GUI or from command line:

# neutron router-create my-router
Created a new router:
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| external_gateway_info |                                      |
| id                    | fce64ebe-47f0-4846-b3af-9cf764f1ff11 |
| name                  | my-router                            |
| status                | ACTIVE                               |
| tenant_id             | 9796e5145ee546508939cd49ad59d51f     |
+-----------------------+--------------------------------------+

We now connect the router to the two networks. First we check which subnets are available:

# neutron subnet-list
+--------------------------------------+------+---------------+------------------------------------------------+
| id                                   | name | cidr          | allocation_pools                               |
+--------------------------------------+------+---------------+------------------------------------------------+
| 2d7a0a58-0674-439a-ad23-d6471aaae9bc |      | 10.10.10.0/24 | {"start": "10.10.10.2", "end": "10.10.10.254"} |
| 4a176b4e-a9b2-4bd8-a2e3-2dbe1aeaf890 |      | 20.20.20.0/24 | {"start": "20.20.20.2", "end": "20.20.20.254"} |
+--------------------------------------+------+---------------+------------------------------------------------+

Adding the 10.10.10.0/24 subnet to the router:

# neutron router-interface-add fce64ebe-47f0-4846-b3af-9cf764f1ff11 subnet=2d7a0a58-0674-439a-ad23-d6471aaae9bc
Added interface 0b7b0b40-f952-41dd-ad74-2c15a063243a to router fce64ebe-47f0-4846-b3af-9cf764f1ff11.

Adding the 20.20.20.0/24 subnet to the router:

# neutron router-interface-add fce64ebe-47f0-4846-b3af-9cf764f1ff11 subnet=4a176b4e-a9b2-4bd8-a2e3-2dbe1aeaf890
Added interface dc290da0-0aa4-4d96-9085-1f894cf5b160 to router fce64ebe-47f0-4846-b3af-9cf764f1ff11.
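The subnet lookup in these two steps can also be scripted by filtering the subnet-list table on the CIDR. The subnet_id_for_cidr helper below is a hypothetical sketch, demonstrated on the table rows from above; against a live cloud you would pipe neutron subnet-list into it and feed the result to router-interface-add:

```shell
# Find a subnet's ID in the "neutron subnet-list" table by its CIDR.
# Fields are pipe-separated: $2 is the id column, $4 the cidr column.
subnet_id_for_cidr() {
  awk -F'|' -v cidr="$1" 'index($4, cidr) { gsub(/ /, "", $2); print $2 }'
}

# Live usage would look like (shown commented out here):
#   sid=$(neutron subnet-list | subnet_id_for_cidr 20.20.20.0/24)
#   neutron router-interface-add my-router subnet=$sid

# Demonstration with the rows from the table above:
printf '%s\n' \
'| 2d7a0a58-0674-439a-ad23-d6471aaae9bc |      | 10.10.10.0/24 | {"start": "10.10.10.2", "end": "10.10.10.254"} |' \
'| 4a176b4e-a9b2-4bd8-a2e3-2dbe1aeaf890 |      | 20.20.20.0/24 | {"start": "20.20.20.2", "end": "20.20.20.254"} |' \
| subnet_id_for_cidr 20.20.20.0/24
```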

At this stage the network topology view shows the two networks connected to the router.

We can also see that the interfaces connected to the router are the interfaces we have defined as gateways for the subnets.

Another namespace was also created, this time for the router:

# ip netns list
qrouter-fce64ebe-47f0-4846-b3af-9cf764f1ff11
qdhcp-63b7fcf2-e921-4011-8da9-5fc2444b42dd
qdhcp-5f833617-6179-4797-b7c0-7d420d84040c

When looking into the namespace we see the following:

# ip netns exec qrouter-fce64ebe-47f0-4846-b3af-9cf764f1ff11 ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
20: qr-0b7b0b40-f9: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:82:47:a6 brd ff:ff:ff:ff:ff:ff
    inet 10.10.10.1/24 brd 10.10.10.255 scope global qr-0b7b0b40-f9
    inet6 fe80::f816:3eff:fe82:47a6/64 scope link
       valid_lft forever preferred_lft forever
21: qr-dc290da0-0a: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:c7:7c:9c brd ff:ff:ff:ff:ff:ff
    inet 20.20.20.1/24 brd 20.20.20.255 scope global qr-dc290da0-0a
    inet6 fe80::f816:3eff:fec7:7c9c/64 scope link
       valid_lft forever preferred_lft forever
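A quick way to pull out just the router's gateway addresses is to filter this output for the qr- interfaces. The qr_addresses helper below is a hypothetical sketch, demonstrated on the two inet lines from above; on the network node you would pipe the actual ip netns exec ... ip addr output into it:

```shell
# Print the IPv4 addresses assigned to the router's qr- interfaces.
# Matches "inet" lines whose interface name starts with "qr-".
qr_addresses() {
  awk '/inet / && /qr-/ { print $2 }'
}

# Demonstration with the two address lines from the output above:
printf '%s\n' \
'    inet 10.10.10.1/24 brd 10.10.10.255 scope global qr-0b7b0b40-f9' \
'    inet 20.20.20.1/24 brd 20.20.20.255 scope global qr-dc290da0-0a' \
| qr_addresses   # prints the two gateway addresses
```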

We see two interfaces, “qr-dc290da0-0a” and “qr-0b7b0b40-f9”. These interfaces use the IP addresses that were defined as gateways when we created the networks and subnets, and they are connected to OVS:

# ovs-vsctl show
8a069c7c-ea05-4375-93e2-b9fc9e4b3ca1
    Bridge "br-eth2"
        Port "br-eth2"
            Interface "br-eth2"
                type: internal
        Port "eth2"
            Interface "eth2"
        Port "phy-br-eth2"
            Interface "phy-br-eth2"
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-int
        Port "int-br-eth2"
            Interface "int-br-eth2"
        Port "qr-dc290da0-0a"
            tag: 2
            Interface "qr-dc290da0-0a"
                type: internal
        Port "tap26c9b807-7c"
            tag: 1
            Interface "tap26c9b807-7c"
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port "tap16630347-45"
            tag: 2
            Interface "tap16630347-45"
                type: internal
        Port "qr-0b7b0b40-f9"
            tag: 1
            Interface "qr-0b7b0b40-f9"
                type: internal
    ovs_version: "1.11.0"

As we can see, those interfaces are connected to “br-int” and tagged with the VLANs corresponding to their respective networks. At this point we should be able to successfully ping the router namespace using the gateway address (20.20.20.1 in this case).
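For example, the gateway can be pinged from within net2's DHCP namespace on the network node (the namespace ID below is from this walkthrough; this requires root on the node):

```shell
# Ping the router's net2 gateway (20.20.20.1) from net2's DHCP namespace.
ip netns exec qdhcp-63b7fcf2-e921-4011-8da9-5fc2444b42dd ping -c 3 20.20.20.1
```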

We can also see that the VM with IP 20.20.20.2 can ping the VM with IP 10.10.10.2, which shows that routing is actually taking place.

The two subnets are connected to the namespace through interfaces inside it. Within the namespace, Neutron enables forwarding by setting the net.ipv4.ip_forward parameter to 1, as we can see here:

# ip netns exec qrouter-fce64ebe-47f0-4846-b3af-9cf764f1ff11 sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1
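This check is easy to automate. The forward_value helper below is a hypothetical sketch that parses sysctl-style output; the sample line is inlined here, while on the node you would pipe the real sysctl output through it:

```shell
# Extract the value from a "net.ipv4.ip_forward = 1" style line.
forward_value() {
  awk -F' = ' '$1 == "net.ipv4.ip_forward" { print $2 }'
}

# Live usage would be (shown commented out here):
#   ip netns exec qrouter-<router id> sysctl net.ipv4.ip_forward | forward_value

val=$(echo 'net.ipv4.ip_forward = 1' | forward_value)
[ "$val" = "1" ] && echo "forwarding is enabled"   # prints: forwarding is enabled
```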

Note that net.ipv4.ip_forward is specific to the namespace: it is not affected by changing the same parameter outside the namespace.

Summary

When a router is created, Neutron creates a namespace called qrouter-<router id>. The subnets are connected to the router through interfaces on the OVS br-int bridge. The interfaces are tagged with the correct VLAN so they can connect to their respective networks. In the example above, the interface qr-0b7b0b40-f9 is assigned IP 10.10.10.1 and tagged with VLAN 1, which allows it to connect to “net1”. The routing action itself is enabled by the net.ipv4.ip_forward parameter, set to 1 inside the namespace.

This post showed how a router is created using just a network namespace. In the next post we will see how floating IPs work using iptables; that becomes a bit more sophisticated, but still uses the same basic components.

@RonenKofman

Ronen is Director of Product Development for Oracle OpenStack. You are welcome to follow Ronen on Twitter at @RonenKofman
