Niagara 2 in a high-performance, high-security deployment
By Martin Mueller on Oct 09, 2007
- Eight cores, each providing eight hardware threads (and a lot more); these translate to the CPUs in a conventional SMP system. Each core carries a floating-point unit and a crypto unit capable of symmetric block and asymmetric (public-key) cipher algorithms. All cores are connected to a crossbar to communicate with each other and with the other components.
- Eight banks of level-two cache, which translate to main memory in an SMP. These cache banks connect on one side to the same crossbar mentioned above and on the other to (chip-)external main memory.
- An x8 PCI-E root complex, and
- A "network interface unit" (NIU) providing two 10Gbit ports.
- Exploiting (undiscovered) security flaws
- Denial-of-service attacks by deliberately or accidentally overloading it
- Control Domain, the only domain that can change the hypervisor configuration
- Service or I/O Domains, which have access to physical I/O devices and provide I/O services to other guests
- "Usual" Logical Domains, which have only CPU resources and memory physically assigned; all I/O is virtual, via the I/O domains already mentioned
The picture (click it to enlarge) gives an overview of the configuration; from left to right one has:
- The control domain, which is also a service domain delivering the boot devices as virtual disks to all other LDoms in the system. It also runs a virtual switch, private to the LDoms inside the system, that provides their access to the outside world; the control domain runs the switch but has no access to the virtual network the switch provides. All administration is done through this domain.
- A regular logical domain in the middle, which is meant to host the application. There may be more logical domains of that kind, e.g. for a multi-part application or for test environments.
- A frontend domain, which is the central idea of the whole proposal: the on-chip NIU is assigned to this LDom, and all external traffic is handled by its two 10Gbit interfaces. The frontend domain routes or "firewalls" the external traffic to the application domains described above.
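Once the frontend domain owns the external interfaces, the routing and firewalling it performs is ordinary Solaris configuration. A minimal sketch, assuming the 10Gbit ports appear as `nxge` interfaces inside the frontend domain and the internal side is a `vnet` interface; all interface names and addresses are illustrative:

```shell
# Inside the frontend domain: external 10Gbit ports (nxge) and the
# internal virtual network (vnet) are plumbed as ordinary interfaces.
ifconfig nxge0 plumb 192.0.2.1/24 up   # external side (illustrative address)
ifconfig vnet0 plumb 10.0.0.1/24 up    # internal side (illustrative address)

# Forward between the two sides and let IP Filter police the traffic.
routeadm -u -e ipv4-forwarding
ipf -Fa -f /etc/ipf/ipf.conf           # flush and load the firewall rules
```

The design benefit is that a flood on the external interfaces is absorbed by CPU threads dedicated to this one domain, rather than by the hypervisor and a shared service domain.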
In a classical deployment, a service domain transports incoming traffic through the hypervisor to the LDom the traffic is meant for. If the incoming interface is a 10Gbit interface hit by a denial-of-service attack, the hypervisor could end up handling only the malicious traffic, and that traffic would impose a severe load on the service domain running the virtual switch infrastructure that drives the incoming interface.
The frontend domain will need quite a few cores, though how many of course depends on the load on the external interfaces.