
Move your VMware and KVM applications to the cloud without making any changes

Recent Posts

Ravello Community

Introducing VCN Peering on Oracle Cloud Infrastructure Ravello

We are excited to announce the availability of Virtual Cloud Network (VCN) peering on the Oracle Cloud Infrastructure Ravello Service. VCN peering enables internal connectivity between Ravello-based VMware VMs and OCI native PaaS and IaaS services.

Common Enterprise Scenario

Most enterprises run their multi-tier applications spread across VMware and physical hosts in their data centers. As enterprises look to move these applications to Oracle Cloud Infrastructure, they have multiple options:

- Keep all the application tiers (web, app, database) virtualized
- Keep the web and app tiers virtualized and utilize PaaS (e.g. Database Cloud Service) for the database
- Keep the web and app tiers virtualized and run the database on bare metal compute for higher performance

In all these scenarios, secure, low-latency and reliable connectivity between the web, app and database tiers is a key need.

VCN Peering to the Rescue

Ravello enables enterprises to move their VMware applications to OCI 'as-is' without any modifications, saving them time, effort and cost. With VCN peering now available on Ravello, enterprises can utilize OCI's high-throughput, low-latency network to connect their VMware VMs on Ravello to OCI native PaaS (DBCS) or IaaS (bare metal compute or virtual machine) services, choosing whichever flexible deployment option meets their unique technical and operational needs.

Interested in trying VCN peering? Sign up for a free Ravello trial, and follow these instructions to set it up on your account.


Ravello Community

Establishing Secure Connectivity Between Oracle Ravello and Oracle Database Cloud

Oracle Ravello is an overlay cloud service that enables enterprises to run their VMware and KVM applications, with data-center-like (Layer 2) networking, 'as-is' on public clouds without making any modifications. With Ravello, enterprises don't need to convert their VMs or change networking. This empowers businesses to rapidly develop and deploy existing data-center applications on the public cloud without the associated infrastructure and migration cost and overhead, for a variety of use-cases such as PoC, dev, test, staging, UAT, production, training, etc.

Application Architecture

Enterprises looking to move their VMware-based applications with large databases to the public cloud have multiple options. They can move the entire app onto Ravello, or use Ravello (for the web & app tiers) in conjunction with Oracle PaaS services such as DBCS on either Oracle Cloud Infrastructure or Oracle Cloud Infrastructure Classic. When used in the latter mode, secure connectivity between the web/app tier on Ravello and the Oracle Database Cloud Service instance is a key requirement.

There are multiple methods to establish secure connections between an application on Ravello and a database on Oracle DBCS. Click on the appropriate links to learn how to establish secure connections between Ravello and the Database Cloud, using Siebel CRM as an example:

1. Establishing secure connectivity between Oracle Ravello and Oracle Cloud Infrastructure Database Cloud
2. Establishing secure connectivity between Oracle Ravello and Oracle Cloud Infrastructure Classic Database Cloud


Ravello Community

Siebel CRM Applications on Oracle Ravello Cloud Service

Oracle Ravello is an overlay cloud that enables enterprises to run their VMware and KVM applications with data-center-like (Layer 2) networking 'as-is' on public cloud without any modifications. With Ravello, enterprises don't need to convert their VMs or change networking. This empowers the business to rapidly develop and deploy existing data-center applications on the public cloud without the associated infrastructure and migration cost and overhead, for a variety of use-cases such as dev-test, staging, UAT etc.

Siebel CRM Overview

Oracle's Siebel Customer Relationship Management (CRM), the world's most complete CRM solution, helps organizations achieve maximum top- and bottom-line growth and deliver great customer experiences across all channels, touchpoints, and devices. Siebel CRM delivers transactional, analytical, and engagement features to manage all customer-facing operations. With solutions tailored to more than 20 industries, Siebel CRM delivers comprehensive on-premise and on-demand CRM solutions that are tailored to industry requirements and offer role-based customer intelligence and pre-built integrations.

Siebel CRM core components

A Siebel CRM deployment consists of some core components, listed below with the function each provides:

- Siebel Gateway Name Server – Stores Siebel Enterprise Server and Siebel Server configuration and status information.
- Siebel Server – Application server software that provides both user services and batch-mode services to Siebel clients or other components.
- Siebel Web Server Extension (SWSE) – Identifies requests for Siebel data and forwards them to the Siebel Servers; receives data from the Siebel Servers and helps format it into Web pages for Siebel clients.
- Siebel Web Server – Software installed on a third-party Web server computer, where the virtual directories for Siebel applications are created.
- Siebel Web Client – Runs in a standard browser on the end user's client computer.
- Siebel Database – Stores database records.

The following diagram illustrates the relationship between the elements of the Siebel CRM deployment.

Why run Siebel CRM on Oracle Ravello?

Enterprises running the Siebel CRM application in their data center typically need many copies of their environment for various purposes. Typically, for every production instance of the Siebel environment in their data center, enterprises have 5-8 copies of the environment for pre-production use cases such as development, testing, staging, and running User Acceptance Tests. However, most of these pre-production environments are not needed 24x7, but only for a few hours at a time. For such ephemeral needs, it doesn't make economic sense to invest in a data center-based environment.

Ravello provides a great platform for use cases that need ephemeral environments by offering data center-like capabilities on public cloud (the ability to run VMware VMs with Layer 2 networking). This helps enterprises reduce their infrastructure costs for such ephemeral workloads. As an example, an enterprise running one production instance and 5 pre-production environments of such a Siebel deployment on-prem can benefit from 58% savings by running them on Ravello compared to running on-prem.

To read the rest of this article, please download the full white paper: "Siebel CRM Applications on Oracle Ravello Cloud Service."


Ravello Community

Introducing Ravello on Oracle Cloud Infrastructure

Ravello has always made it easy for enterprises to move their on-premise VMware applications to public cloud. Today, we are excited to announce the availability of Ravello on Oracle Cloud Infrastructure, offering significantly higher performance & scalability than before. This new offering makes "lift-and-shift" of performance-sensitive production enterprise applications to cloud a reality.

Accelerate Move of Data-Center Production Apps to Cloud

Enterprises typically run their production apps across virtualized and physical hosts on-prem. As they embark on the journey to move these apps to public cloud, they go through a long migration process: converting their physical hosts to virtual machines, re-platforming their VMware VMs to cloud images, and re-networking their application setup to leverage cloud-based networking constructs. This costs them time and money, and sometimes ends in failure despite the investment. Ravello utilizes its industry-leading nested virtualization and software-defined networking overlay technology to make the underlying public cloud look and feel like a data center, making this move easy. With a like-for-like environment on Ravello on Oracle Cloud Infrastructure, enterprises don't need to re-platform or re-network their VMware data-center-based apps: they are moved to Oracle Cloud Infrastructure "as-is," and the physical components are simply moved to bare metal servers. This unique capability to run VMware VMs and physical hosts on the same cloud makes the enterprise's journey quick, simple and predictable.

Data-Center-Like Capabilities on Public Cloud

Ravello enables data-center-like capabilities on Oracle Cloud Infrastructure with its next-generation nested hypervisor, HVX. HVX comprises three components:

- Nested virtualization engine – runs the VMware VMs on the underlying cloud without needing any modifications
- Networking overlay – offers a clean Layer 2 network to the guest VMs (including broadcast & multicast capabilities typically unsupported on public cloud)
- Storage overlay – abstracts the underlying cloud storage and exposes block devices to the VMware VMs

HVX's nested virtualization engine supports three modes to offer unparalleled performance when running VMware VMs on cloud. These nested virtualization modes are: hardware assisted, direct on bare metal, and software assisted.

Figure 1: HVX Nested Virtualization Modes

Hardware assisted nested virtualization – Oracle Cloud Infrastructure runs on the next generation of blazing-fast hardware that supports virtualization extensions. These extensions allow multiple guest operating systems to share the same underlying hardware in a safe and efficient manner. HVX utilizes these hardware-assist CPU instruction sets to perform its nested virtualization directly on the underlying cloud hardware, and offers significant performance improvements over the previous generation of HVX. Typically, cloud providers do not expose the hardware-assisted virtualization extensions to guest VMs, which limits the performance that customers can realize when operating in a nested virtualization mode. However, with Ravello running on Oracle Cloud Infrastructure, we now have complete access to these hardware-assisted virtualization extensions, and can make performance boosts a reality.

Directly on bare metal – On Oracle Cloud Infrastructure, HVX also supports the ability to run directly on top of bare metal servers. By eliminating a layer of hypervisor in the middle, HVX is able to provide near-native performance.

Software assisted nested virtualization – For underlying clouds where the hardware virtualization extensions are not available, HVX uses a software-based nested virtualization technology, binary translation with direct execution, to run the VMware VMs. This technology offers good performance that is acceptable for a wide variety of workloads.

Unleashed Performance & Scalability for Apps

Ravello on Oracle Cloud Infrastructure offers up to a 14X performance boost with hardware-assisted nested virtualization, and even higher performance by running HVX directly on bare metal. With such performance gains, enterprises are now easily able to run their production VMware apps on Oracle Cloud Infrastructure with Ravello. Also, as enterprise needs grow, apps need to scale to accommodate the demand. With Ravello on Oracle Cloud Infrastructure, enterprises are able to scale their applications vertically up to 32 vCPUs per VM, and horizontally across thousands of VMs.

Try It for Yourself

Interested in trying Ravello on Oracle Cloud Infrastructure? We can help. Just sign up for a free trial and drop us a line.
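Which of the modes described above is even possible depends on whether the hardware virtualization extensions (Intel VT-x, exposed as the `vmx` CPU flag, or AMD-V, exposed as `svm`) are visible to the host. A minimal sketch of such a check, assuming a Linux host with a standard /proc/cpuinfo layout; it only detects the CPU flags and says nothing about HVX itself:

```python
from pathlib import Path


def virtualization_mode() -> str:
    """Report whether hardware-assisted nested virtualization is possible
    on this host, based on the vmx (Intel VT-x) / svm (AMD-V) CPU flags."""
    try:
        cpuinfo = Path("/proc/cpuinfo").read_text()
    except OSError:
        return "unknown (not a Linux host)"

    # Collect the CPU flag tokens from every "flags" line.
    flags = set()
    for line in cpuinfo.splitlines():
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())

    if {"vmx", "svm"} & flags:
        return "hardware-assisted nested virtualization possible"
    return "software-assisted (binary translation) fallback required"


print(virtualization_mode())
```

On a typical cloud VM the extensions are hidden from the guest, so this prints the software-assisted fallback message; on bare metal or with nested-virtualization passthrough enabled, the hardware-assisted message appears.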


Ravello Community

Oracle RAC DB on Ravello

Oracle RAC Overview

Oracle Real Application Clusters (Oracle RAC) is a shared-cache clustered database architecture that utilizes Oracle Grid Infrastructure to enable the sharing of server and storage resources. It provides automatic, instantaneous failover to other nodes, and therefore enables an extremely high degree of scalability, availability, and performance. Originally focused on providing improved database services, Oracle RAC has evolved over the years and is now based on a comprehensive high availability (HA) stack that can be used as the foundation of a database cloud system, as well as a shared infrastructure that can ensure high availability, scalability, flexibility and agility for any application in your data center.

Fig. 1: Oracle Database with Oracle RAC architecture

Need for Ravello

Enterprises run Oracle RAC as part of a bigger application in a variety of scenarios. They also need such environments for development, testing, staging and running User Acceptance Tests. It is expensive to maintain on-premise environments for such transient needs. Ravello provides a great platform for these use-cases by offering data-center-like environments on public cloud, with VMware VMs and Layer 2 networking.

An Oracle RAC database is a shared-everything database. All data files, control files, SPFILEs and redo log files in Oracle RAC environments must reside on cluster-aware shared disks so that all of the cluster database instances can access these storage components. All database instances must use the same interconnect, which can also be used by Oracle Clusterware. Public cloud environments do not natively provide the shared storage and Layer 2 capabilities required by Oracle Clusterware for RAC. However, such functionality can be achieved using Ravello.

Oracle RAC DB on Ravello

The following implementation broadly follows the reference article on setting up an Oracle RAC DB installation in a VMware ESXi environment [1]. The deployment diagram for the implementation:

Fig. 2: Deployment diagram

For the deployment, we have configured the following subnets:

- 192.168.56.0/24 – public network
- 192.168.1.0/24 – cluster inter-connect/private
- 192.168.20.0/24 – shared storage access

The 'racnas' node is running Openfiler 2.99.1 and is set up as an iSCSI target for the RAC nodes to connect to. Two logical volumes are set up as iSCSI targets:

- ocr – for OCR and voting disk
- data – for database (datafiles, control files, redo log files, spfile)

Database storage access is configured through iSCSI with Automatic Storage Management (ASM). The binaries for Grid Infrastructure and Database are stored locally on each RAC node.

Setting up imported VMs in Ravello

As a first step, we import the 4 VMs that were set up in the on-prem VMware environment into Ravello's VM Library, and then create a new application by dragging the VMs onto the canvas – two RAC nodes and one storage node, namely 'rnode1', 'rnode2' and 'rnas' respectively. We also add a test node, 'rtest', to test the overall functionality of the deployment.

Fig. 3: Building the application with the imported VMs

All VMs use the Oracle Linux 7.3 distribution, with the RAC DB nodes configured with 4 vCPUs and 16GB of memory. The storage node runs Openfiler and is configured with 4 vCPUs/16GB of memory, while the test node running OL7.3 is configured with 2 vCPUs and 8GB of memory. On the network tab, Ravello automatically re-creates the underlying network by looking at the ESXi configuration files and VM disk images.

Fig. 4: Network view of the application

We now make sure the settings in each VM match our expectations. Let us take a look at 'rnode1' in the Ravello UI, starting with the 'General' tab. Make sure that the hostname field is populated and matches the hostname in the VM.

Fig. 5: General tab for node1

Under the 'Disks' tab, we select a para-virtualized controller for better performance.

Fig. 6: Disks tab for node1

Under the 'NICs' section, we select para-virtualized devices for each of the NICs for better performance. RAC requires a 'public' interface and a 'cluster inter-connect' interface per node. As pointed out earlier, we have used a separate subnet to handle shared storage traffic. We verify that all the NICs are present and configured correctly for each of the nodes, with the right IP configuration.

Fig. 7: Public interface for rnode1

Fig. 8: Private interface for rnode1

Fig. 9: Storage interface for rnode1

We have exposed 'ssh' and 'Enterprise Manager Express' by enabling port 22 and port 5501 on the 'Services' tab.

Fig. 10: External services

Next, we 'Edit and Verify' all the VMs in the application in a similar fashion. Once this is done, the application is ready to be published. Publish the application to bring up the VMs in the public cloud using either the 'Cost-optimized' or 'Performance-optimized' selection.

Verifying the RAC setup on Ravello

Log in to any node and check whether the shared storage is mounted on the RAC nodes over iSCSI.

Fig. 11: Shared storage details

Confirm that Grid Infrastructure is up and running.

Fig. 12: Grid Infrastructure status

Check the RAC DB configuration by running the 'srvctl' command.

Fig. 13: RAC configuration for DB check

Check the status of the database running on RAC.

Fig. 14: RAC DB status check

We now have a fully functional Oracle RAC environment running Oracle Database 12cR1 on the cloud using Ravello. One can test out the deployment by connecting to the database from the test node 'rtest'.

Free Trial

To try out your custom RAC environment on public cloud, please open a free Ravello trial account.

References:

1. Build Your Own Oracle RAC 11g Cluster on Oracle Linux and iSCSI
2. Oracle Real Application Clusters (RAC)
3. Real Application Clusters Administration and Deployment Guide
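As an aside, a RAC deployment like the one above requires the public, interconnect and storage traffic classes to live on distinct, non-overlapping networks. A minimal sanity check of the subnet plan using Python's standard ipaddress module (the addresses are the ones from the deployment diagram in this article):

```python
import ipaddress

# The three subnets from the deployment diagram
subnets = {
    "public": ipaddress.ip_network("192.168.56.0/24"),
    "interconnect": ipaddress.ip_network("192.168.1.0/24"),
    "storage": ipaddress.ip_network("192.168.20.0/24"),
}

# Each RAC traffic class must be on its own non-overlapping network.
names = list(subnets)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        assert not subnets[a].overlaps(subnets[b]), f"{a} overlaps {b}"

print("subnet plan OK:", ", ".join(str(n) for n in subnets.values()))
```

The same check is handy before changing IP plans in the Ravello network tab: an accidental overlap between the interconnect and storage subnets would fail the assertion immediately.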


Ravello Community

Five challenges with cyber range training on AWS

This article highlights key challenges associated with offering cyber range training using AWS. It also presents the Ravello cybersecurity lab as a way to run cyber ranges for training on public cloud (AWS & Google Cloud) that overcomes these challenges.

What are cyber ranges

A cyber range is a realistic representation of infrastructure, networks, tools & threats used to carry out live-fire attacks and disruptive effects in support of testing, training and mission rehearsal exercises. These are large setups, typically running into hundreds of nodes.

Why is public cloud great for cyber ranges

Cyber ranges are ephemeral environments: they are needed for events, training and testing, most of which are short-lived in nature (lasting at most a couple of days). Further, cyber range training environments need scale to realistically mirror enterprise infrastructure. Given the bursty nature of these workloads and the need for scale, it is more cost-effective to run cyber ranges on public cloud than to create a data center for housing these workloads.

What are the key challenges with cyber range training on AWS

While public cloud is a great fit for building cyber ranges, its inherent infrastructure limitations pose challenges in deploying 'life-like' cyber ranges for training on AWS. Here are the key challenges:

1. Different network & security appliances – Cyber ranges need the same network and security appliances as those present in the data center for an effective representation of the enterprise environment. However, cloud versions of virtual appliances differ from the ones deployed in data centers. Take, for example, the Palo Alto Networks VM-Series Firewall. While the VM-Series has an AMI (Amazon Machine Image), the functionality supported by the VM-Series AMI pales in comparison to the VM-Series Firewall intended for data centers (the VMware or KVM version).
2. No Layer 2 networking on public cloud – Data-center networking is different from cloud networking. Public cloud inherently blocks broadcast and multicast packets and provides access only to Layer 3 and above. Most (if not all) enterprise deployments rely on some Layer 2 protocol for advanced functionality that their setup depends on (e.g. VRRP is typically needed for High Availability).
3. VMware workloads – Almost half the workloads in enterprises run on VMware (both VMs and appliances). However, ESXi (the hypervisor that runs VMware VMs) is not supported on public cloud. This makes it difficult to run the same VMs on AWS without making modifications, and these modifications defeat the entire purpose of cyber ranges being 'life-like' as in production.
4. Port mirroring – To be effective in red-team/blue-team exercises, one needs certain advanced capabilities (such as the ability to tap into certain ports promiscuously to monitor traffic). Capabilities such as port mirroring that would accomplish this are not supported on public cloud.
5. Difficulty in creation, deployment & control – Cyber ranges are large environments spanning a couple of hundred machines and network nodes. Creating, deploying and controlling these environments on AWS is a lot of work. Writing a new set of AWS CloudFormation scripts every time one needs to create a new cyber range requires effort, time and learning a different way of doing things compared to a data center. These overheads add up when deploying multiple different cyber range scenarios to train a workforce.

Ravello's cybersecurity lab platform overcomes these challenges. Using Ravello's nested virtualization and networking overlay, one can build and deploy large cyber ranges that are high-fidelity replicas of enterprise environments, including the same VMware VMs, network appliances and Layer 2 networking. Further, the platform supports advanced capabilities such as port mirroring, REST APIs and a rich 'drag and drop' UI to easily create, deploy, control and automate the management of cyber ranges.

Interested in learning more? Check out this video on how SimSpace is using Ravello for building cyber ranges. To try out Ravello for yourself, sign up for a free trial and reach out to us if you need help.


Ravello Community

Build & test network security architecture using enterprise replicas on AWS & Google Cloud

Author: Matt Conran

Matt Conran is a Network Architect based out of Ireland and a prolific blogger at Network Insight. In his spare time he writes on topics ranging from SDN, OpenFlow, NFV, OpenStack, Cloud, Automation and Programming.

Colocation of third-party network elements/servers in the demilitarized zone (DMZ) is an issue for security architects and puts pressure on network security architecture. How do we connect third-party equipment to in-house security appliances in a flexible way? This is an issue for many large financial & healthcare institutions, and other enterprises that have to securely connect third-party equipment. The placement of servers may be limited to certain parts of the network, and traditional ways of connecting lead to inefficient use of physical resources. BGP + VXLAN + VRF helps solve this issue by dynamically creating tunnels between endpoints, offering a lot more flexibility in device placement. This allows traffic from newly installed equipment to be dynamically sent to a security device for scrubbing and analysis before being forwarded on to its final destination. Ravello Network Smart Labs helps enterprises build and test this new network security architecture on cloud, which helps keep the enterprise secure.

Pressures on network security architecture

TCP/IP was developed in an age when there were few or no security threats. The focus of the connectivity model was simply to connect endpoints; it has no way of knowing if packets are tampered with. It was later extended with other frameworks, such as Internet Protocol Security (IPsec), to enhance security functionality, but initially it started out insecure. The Internet has changed since then, and so too have the endpoints it serves and connects together. This leaves us with no option but to pay more attention to how we secure connections, not solely to endpoint connectivity. Attackers now have the ability to easily bypass traditional security mechanisms. There are so many threats out there that we need to protect against: DNS spoofing, man-in-the-middle, ARP poisoning, smurf attacks, buffer overflows, heap overflows and many other cyber threats. This is even more of a concern as many consumable services are available on the public Internet. Due to its user base and wide reach, the Internet is becoming the sole foundation for numerous company services. Businesses rely on the public Internet to host services.

Changing applications & access drive new needs

It is no secret that we are on the back foot when it comes to security. In today's world, protecting data is increasingly difficult due to the wide range of security threats and attacks. The cyber threat landscape has grown due to the change in applications and connectivity models. Nowadays, we have more than just a roaming PC connecting over the Internet to headquarters. We have a variety of endpoint types and connection mediums to public/private clouds, all of which need to connect securely. The application is no longer deployed as a single service housed in one location with single ingress/egress traffic points. It is broken up into multiple services, dispersed across a variety of physical nodes and locations. All components require cross-communication and dynamically scale based on traffic load. There has been such a rush to market with new technologies that some may feel that as long as you can connect, the job is done. Cloud data centers need to take on a new, flexible, cost-effective design in order to meet the needs of the changing landscape. The design must be scalable without jeopardising protection from both outside and inside threats. Routing with VXLAN is a flexible way to tunnel to security appliances.

Traditional DMZ design

Traditional demilitarized zone (DMZ) designs have a very modular approach. Modularity in network designs enables isolation, allowing boundaries to be designed into the network.
This is one of the reasons (port count was the main one) why we started with core, access and distribution layers and separate Internet and DMZ edges. The DMZ and Internet edge were at the top of the network; all ingress and egress traffic flowed through these devices. The majority of IPS/IDS would also be at this layer, analysing and looking for unusual traffic patterns. Depending on the design, it may have been the case that some internal traffic had to pass through them too. There weren't any overlays sticking together applications or remote paths; everything was very static. The DC designs we recommended 5 to 10 years ago are very different from what we have today.

Business needs changing the DMZ designs

We are seeing these designs come into play with colocation requirements, especially in financial network designs. Financial institutions operate their own networks. It's too sensitive to outsource, and most have large MPLS-based backbones, usually dual MPLS networks with separate vendor appliances for each network. They may provide a lot of their own network infrastructure, but they usually cannot provide everything needed to support the wide variety of applications and services. This is where they have to colocate third-party equipment in their private data centre. Now that we have third-party infrastructure, how do we physically and logically connect it securely?

Traditionally, third-party equipment could be placed in a separate third-party rack and physically connected to an intermediate switch or router. This type of design does not save on physical resources, as you could end up with just a single appliance per rack. Also, once the third-party appliances are installed, they must be logically integrated. A more flexible method is to combine VXLAN/VRF and BGP to create Layer 2 tunnels between the third-party node and a firewall for scrubbing. In the supporting blueprint, the ToR switch encapsulates the traffic and sends it to a VTEP that forwards it to the Palo Alto firewall. The main benefit is that the third-party equipment can be physically located anywhere in the network. It doesn't need to go into a dedicated rack. This saves on physical resources and money. VXLAN, VRF and BGP can now be combined to provide a very flexible DMZ and security services POD infrastructure.

Macrosegmentation is a new feature that can be used in DMZ and security PODs. It allows the integration of security services into cloud data centers. Its prime focus is on security integration by instantiating a logical topology to enforce security services. This gives you complete flexibility in underlay network design, and no changes to the underlay are needed. There is no need to change operational models or ownership rules; the firewalls still carry out the firewall rules. Recently, VM-NIC and some other distributed firewall services have been placed closer to the workloads. However, these designs completely change the security paradigm, which may be challenging for some big financial security audits, potentially unnerving security architects.

Flexible Network Security Architecture

Leaf and spine designs are becoming the de facto design for cloud data centres, enabling new security architectures. They fall into either Layer 2 or Layer 3 designs. Layer 3 designs are commonly deployed with Border Gateway Protocol (BGP). BGP is more stable and less chatty than an interior gateway protocol (IGP) like Open Shortest Path First (OSPF). It's also easier to troubleshoot when things go wrong. Leaf and spine provides the ECMP needed to support active redundant paths and agile data centres. Once the underlay has a solid design, it's easier to build a flexible overlay with all the security features and functionality you need. The benefits of an IP underlay far outweigh those of a Layer 2 underlay. IP underlays enable full separation of broadcast domains between racks. They are very scalable, and the spine is no longer a limiting factor.
One of its main benefits is that we are relying on a testing protocols of IP that has been around for an age. A solid network design would be a Layer 3 leaf and spine with some kind of overlay on top of it. The combined underlay and overlay design offers layer 2 mobility without sacrificing the scalability of the spine layer. VXLAN was originally viewed in the network virtualization world. Applications could no longer be supported with single segment designs, commonly broken up into a number of tiers. We needed multiple segments for individual load balancing, front and back end firewalling tiers. For large data centers VLANs could never meet this. For multi tenant cloud deployments, VXLAN enables 16 million independent domains.   VXLAN routing in VRF VXLAN, VRF and BGP are combined together to provide a very flexible security domain when connecting up devices over an IP core. The VRFs only need to reside on the leaf switches. INGRESS would include VRF>VLAN>VNI and EGRESS would be VNI>VLAN>VRF. Essentially we have a VRF>VLAN>VNI>network>VNI>VLAN>VRF setup. VXLAN routing in VRF over an overlay enables very flexible designs and device placement. It is a useful feature for financials when deploying colocated space for third party equipment.   Conclusion A more efficient network security architecture is to group all security and network services into separate PODs. Security and network services have different workloads that normal VM’s. They require more I/O, server hardware must be selected accordingly. An ideal place for security services is to have them bundled within POD in a Layer 3 leaf and spine design. It makes more sense to have all security appliances in one place. However, then comes to question of how do we route or switch traffic to these services appliances? The ability to connect appliances with BGP + VXLAN tunnels offers flexible network security architectures. 
If you already have BGP in place extensions can be used to create an overlay linking up appliances. Ravello Smart Labs to create replicas of enterprise DCs to model/build and test this sort of environment. Follow this link to the VXLAN + BGP + VRF blueprint created on Ravello Labs. Add it to your Ravello account and customize it for additional security configurations.
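As a footnote to the VXLAN discussion above, here is a minimal sketch (Python standard library only, illustrative rather than any vendor's implementation) of the 8-byte VXLAN header defined in RFC 7348. The 24-bit VNI field is exactly what yields the 16 million independent segments mentioned above, versus 4094 usable VLAN IDs:

```python
import struct

VNI_BITS = 24  # VXLAN Network Identifier width (RFC 7348)

def vxlan_encap(vni: int, inner_frame: bytes) -> bytes:
    """Prepend the 8-byte VXLAN header to an inner Ethernet frame.

    Header layout: one flags byte (0x08 = 'VNI present'), 3 reserved bytes,
    then the 24-bit VNI followed by 1 reserved byte.
    """
    if not 0 <= vni < 2 ** VNI_BITS:
        raise ValueError("VNI must fit in 24 bits")
    header = struct.pack("!I", 0x08000000) + struct.pack("!I", vni << 8)
    return header + inner_frame

def vxlan_vni(packet: bytes) -> int:
    """Recover the VNI from a VXLAN-encapsulated packet."""
    return struct.unpack("!I", packet[4:8])[0] >> 8

segment_count = 2 ** VNI_BITS             # 16,777,216 possible overlay segments
encapped = vxlan_encap(20, b"\x00" * 64)  # VNI 20, as used later in the blueprint
```

In a real deployment the VTEPs add this header (plus an outer IP/UDP encapsulation towards port 4789); the sketch only shows why the segment space dwarfs that of VLANs.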

Author: Matt Conran   Matt Conran is a Network Architect based out of Ireland and a prolific blogger at Network Insight. In his spare time he writes on topics ranging from SDN, OpenFlow, NFV, OpenStack, Cloud, Automation and Programming.

Network security architecture using VXLAN with Palo Alto Networks NG Firewall

Author: Matt Conran

Matt Conran is a Network Architect based out of Ireland and a prolific blogger at Network Insight. In his spare time he writes on topics ranging from SDN, OpenFlow, NFV, OpenStack, Cloud, Automation and Programming.

Financial institutions and enterprises require a flexible network security architecture to accommodate external network devices/servers in their DC/colo facilities. This article provides a way to design and implement such a network security architecture using Border Gateway Protocol (BGP) + VXLAN tunnels along with the VM-Series firewall from Palo Alto Networks. Ravello Network Smart Labs provides an easy way to test and deploy an architecture before moving it to the enterprise infrastructure. The following article demonstrates the VXLAN routing feature used to transport packets from one tunnel endpoint to another. Its functionality is based on BGP extensions and Virtual Routing and Forwarding (VRF) technologies. The feature is useful in scenarios where you have 3rd party colocated equipment requiring firewall scrubbing before transmission to the final destination. Traditionally, 3rd party vendor equipment may need to be separated into a dedicated rack and physically wired. VXLAN and BGP overlay designs offer better flexibility when designing connections to security appliances: equipment can be physically placed in any rack and tunnelled to the appropriate device. The core architecture is based on a leaf and spine design consisting of spines, leafs and DCi switches running Arista vEOS. All end stations are Ubuntu hosts. The security services, represented in the top half of the diagram by the Red and Blue networks, are separated by a Palo Alto firewall. The firewalls connect to the core through two datacenter interconnect devices. The Blue network operates normally and all east-west traffic goes directly to its destination without any scrubbing.
However, all Red traffic gets forwarded via the VXLAN overlay to the Palo Alto firewall for scrubbing. Traffic to the Palo Alto is carried in a VXLAN-encapsulated tunnel across the spine nodes. The diagram displays the canvas from the Palo Alto Networks / Arista vEOS blueprint. The PAs are set up for web-based management and CLI access: just point your browser to the Ravello IP or DNS name. All of the vEOS nodes and Ubuntu hosts are reachable via mgmt1. Ravello's DNS service is automatically configured, enabling SSH by device host name. The bottom half of the blueprint contains a number of leaf switches, labelled L1, L2, and L3. Each of these leaf switches connects into a virtual rack for testing. All virtual racks are the same except for the one connected to L2. L2 contains both a blue and a red host; the red network requires scrubbing.

IP & Interface Configuration

Leaf 1
- Mgmt - 192.168.0.12
- Loop - 192.168.0.12
- Eth1 - IP 172.16.2.3/31, eBGP session to the connecting spine
- Eth2 - IP 172.16.2.13/31, eBGP session to the connecting spine

Leaf 2
- VTEP endpoints: 172.16.0.13, 172.16.0.15, 172.16.0.16
- Mgmt - 192.168.0.13
- Loop - 172.16.0.13
- Eth1 - IP 172.16.2.5/31, eBGP session to the connecting spine
- Eth2 - IP 172.16.2.15/31, eBGP session to the connecting spine

Leaf 3
- Mgmt - 192.168.0.14
- Loop - 192.168.0.14
- Eth1 - IP 172.16.2.7/31, eBGP session to the connecting spine
- Eth2 - IP 172.16.2.17/31, eBGP session to the connecting spine

Spine 1
- Mgmt - 192.168.0.10
- Loop - 172.16.0.10
- Eth1 - IP 172.16.2.2/31, eBGP session to the connecting L1
- Eth2 - IP 172.16.2.4/31, eBGP session to the connecting L2
- Eth3 - IP 172.16.2.6/31, eBGP session to the connecting L3
- Eth4 - IP 172.16.2.8/31, eBGP session to the connecting DC1
- Eth5 - IP 172.16.2.10/31, eBGP session to the connecting DC2

Spine 2
- Mgmt - 192.168.0.11
- Loop - 172.16.0.11
- Eth1 - IP 172.16.2.12/31, eBGP session to the connecting L1
- Eth2 - IP 172.16.2.14/31, eBGP session to the connecting L2
- Eth3 - IP 172.16.2.16/31, eBGP session to the connecting L3
- Eth4 - IP 172.16.2.18/31, eBGP session to the connecting DC1
- Eth5 - IP 172.16.2.20/31, eBGP session to the connecting DC2

DCi1
- VTEP endpoint to 172.16.0.13, 172.16.0.15, 172.16.0.16
- Mgmt - 192.168.0.15
- Loopback - 172.16.0.15
- Eth1 - IP 172.16.2.9/31, eBGP session to the connecting S1
- Eth2 - IP 172.16.2.19/31, eBGP session to the connecting S2
- Eth3 - IP 172.16.4.0/31, BLUE network
- Eth4 - IP 10.255.1.0/31, RED network

DCi2
- VTEP endpoint to 172.16.0.13, 172.16.0.15, 172.16.0.16
- Mgmt - 192.168.0.16
- Loopback - 172.16.0.16
- Eth1 - IP 172.16.2.11/31, eBGP session to the connecting S1
- Eth2 - IP 172.16.2.21/31, eBGP session to the connecting S2
- Eth3 - IP 172.16.4.2/31, BLUE network
- Eth4 - IP 10.255.1.2/31, RED network

All Spine and Leaf configurations can be pulled from the following GitHub account.

BGP Configuration

All leaf nodes run normal eBGP to the two Spine nodes. The leafs do not peer BGP with each other. The two DCi nodes also run eBGP to the Spine nodes. This forms the base of the leaf/spine underlay network, providing core reachability. The normal BGP sessions are depicted by the blue lines in the diagram below.

Node    BGP ASN   BGP Type
Spine1  64512     eBGP to all Leafs and DCx
Spine2  64512     eBGP to all Leafs and DCx
Leaf1   64514     eBGP to both Spines
Leaf2   64515     eBGP to both Spines
Leaf3   64516     eBGP to both Spines
DC1     64517     eBGP to both Spines
DC2     64518     eBGP to both Spines

There is another overlay network running on top of the current BGP underlay. It is based on VXLAN, represented by the 10.55.2.0/21 networks. Additional BGP sessions are created between L2, DC1 and DC2 within a newly created VRF. They are represented by the red arrows on the diagram. The overlay offers flexibility as to where nodes can be placed, enhancing the security services design. The diagram below illustrates the high-level logical map of the blueprint. The following screenshot represents the BGP view from Spine1.
It has 5 eBGP peerings; all BGP states are “established” and routes are being learnt from neighboring BGP peers. The maximum-routes limit is set to 12000, and redistribution of connected and static routes is configured. Spine 2 has a similar configuration except for the endpoint IP addresses. The configuration of the Spines is really simple: standard BGP and IP addresses on the interfaces. The VXLAN overlay is where the magic happens. We have a VRF named vxlan20 configured under SVI VLAN 20, which is mapped to VNI 20. The VXLAN flood list is set to 172.16.0.13, 172.16.0.15 and 172.16.0.16, corresponding to the VTEP endpoints DCi1, DCi2 and L2. Within the BGP VRF configuration, the other BGP peers forming the overlay tunnel are explicitly set; from the perspective of Leaf 2, these are DCi1 and DCi2. The diagram below displays the BGP and VXLAN configuration for Leaf 2. The following screenshot displays the VXLAN address table and the status of the vxlan interface. The VXLAN address table shows the remote MAC addresses learnt and the corresponding VTEPs. Head-end replication is used to forward BUM (Broadcast, Unknown unicast, and Multicast) traffic; previously, IP multicast was used as the control plane in VXLAN. The Palo Alto firewalls are set with default configurations and static routing towards DC1 or DC2 respectively. They don't share any state; traffic engineering towards each is done with static routes and metrics assigned to DC1 and DC2, redistributed into BGP to enable global reachability. For an initial test, run a PING from V1 on the RED network to a test loopback on Spine1. Enter Bash on the vEOS and run a tcpdump, capturing packets on the interfaces connecting to the Spines. To prove that this traffic runs through the firewall, shut down Eth4 on DC1 (the interface connecting to FW1); the pings will then fail. For advanced configuration and deep packet inspection, log in and tailor the security configuration as you see fit.
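Head-end replication, mentioned above, can be modeled in a few lines. This is an illustrative sketch (not Arista's implementation): the ingress VTEP makes one unicast VXLAN copy of each BUM frame per remote VTEP in its flood list, so no IP multicast is needed in the underlay.

```python
def head_end_replicate(bum_frame: bytes, local_vtep: str, flood_list: list) -> list:
    """Model head-end replication of a BUM frame.

    Returns one (src_vtep, dst_vtep, frame) unicast copy per remote VTEP,
    skipping the local VTEP itself.
    """
    return [(local_vtep, remote, bum_frame) for remote in flood_list if remote != local_vtep]

# Leaf 2's flood list from the blueprint: itself plus DCi1 and DCi2.
copies = head_end_replicate(
    b"\xff" * 6 + b"\x00" * 58,           # a broadcast frame
    local_vtep="172.16.0.13",             # Leaf 2's loopback / VTEP
    flood_list=["172.16.0.13", "172.16.0.15", "172.16.0.16"],
)
```

A broadcast entering Leaf 2 is thus unicast twice, once to each DCi VTEP, which is exactly what the flood-list configuration above expresses.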
Administration is carried out with the CLI or via the Palo Alto Networks GUI, which is accessible from your web browser.

Conclusion

The above highlights a network security architecture showing how enterprises and financial institutions can integrate 3rd party servers and network equipment into their DC while keeping their network secure. It also demonstrates the flexibility of introducing these devices using VXLAN as the connection medium. Overlays offer a flexible approach to connecting endpoints, regardless of physical location; the underlay simply needs endpoint reachability. If you are interested in trying this setup out, or building your own deployment, please open a Ravello account, download the Palo Alto Networks VM-Series and copy over the configurations. Ravello's nested virtualization and networking overlay helps you create a high-fidelity replica of a deployment in the cloud to model and test with a couple of clicks, making the on-premises deployment of this architecture easier. The blueprint can be found at this link - BGP + VXLAN Overlay.


4 Instructor led training best practices

A productive training process is key to the success of new products (from internal training to channel partners and customers). Since ILT also involves a substantial investment, it isn't surprising that more and more training organizations are seeking to understand instructor led training best practices - essentially, what they can do to maximize the effectiveness of their training sessions. At Ravello we believe that the essentials are: easily provisioned and deployed, isolated, full-featured product environments for each student, available anywhere and anytime, where instructor and student can collaborate.

What is Instructor Led Training?

Instructor led training (or ILT) is the practice where students learn given material from an instructor, or facilitator (as opposed to self-paced training, for instance, where students go through materials in their own time, without a scheduled staff member). While in the past this delivery method usually involved lots of physical hardware, it is now becoming the norm for training organizations to take their classroom instructor led training sessions to the cloud, where they can benefit from on-demand capacity.

Train on the exact full-featured product, not some resized version of it

Instructor led training classes often face hardware constraints - there are only so many resources in the data center that students can access. To deal with this constraint, instructors are often forced to downsize - to create a “lighter” version of the product so there are enough resources to go around. This completely defeats the purpose. Training sessions are meant to showcase and explain the product. In fact, more often than not they are tied to the roll-out of a new version, more features, a better product. The desired outcome is that students are familiar with and understand all the features of the product.
Thus, for successful instructor led training, always train students on a full-featured product without compromising the quality of your virtual training lab.

Provide each student with their own environment

Another undesirable possible outcome of scarce hardware or data center resources is students sharing environments in an ILT session. The best practice here is to provide each student an isolated environment instance to control and learn in.

No excessive time or work investment in class preparation

These last two points leave us with an isolated copy of the complete environment for each student. In some scenarios this can lead to a high investment in the preparation for each class, and this operational cost tends to run high and fast (see an analysis of virtual training cost structure). One of the more important instructor led training best practices from a cost perspective is to use tools or platforms that minimize these operational costs by enabling easy provisioning and deployment of virtual training environments.

Allow collaboration between instructor and student

To ensure a seamless and effective training session, facilitate access and collaboration between instructor and student in the student's virtual training environment. This practice is key to ensuring students don't get stuck during a training session, and to empowering the instructor to keep the training at the proper pace, ensuring all students advance and see the important features of the application environment. Here are some further details about Ravello's approach to instructor led training. In a previous post we also elaborated more on virtual training infrastructure requirements - you might find it interesting at this point if you're building out your training lab.
If you're interested in checking it out, and seeing some product features that will help you follow these best practices, let us know - we'll be happy to run you through a demo and discuss your use case.


Ravello Community

Man-in-the-middle Network Security Testing on enterprise environment replicas in AWS & Google Cloud

Author: Clarence Chio

Clarence is a Security Research Engineer at Shape Security, working on the system that tackles malicious bot intrusion from the angle of big data analysis. Clarence has presented independent research on Machine Learning and Security at Information Security conferences in several countries, and is also the organizer of the “Data Mining for Cyber Security” meetup group in the SF Bay Area.

Have you ever used an unsecured public wifi connection and wondered if someone could be hacking you? Who could possibly be interested in monitoring your browsing activity on the web? In this post, we focus on a particularly active and common type of network hacking - man-in-the-middle (MITM) attacks. Network security testing is essential to discover these attacks, and Ravello cybersecurity labs provide an easy way to replicate enterprise environments on AWS and Google Cloud and carry out MITM security testing. Man-in-the-middle attacks refer to a class of situations where a machine's communications with another machine are intercepted, and potentially mutated, by a malicious actor. Because of the complexity of network protocols, attackers can often make use of commonly known loopholes that allow them to step between you and the service you're communicating with, and learn private information about you. You might think: if these attacks are so ubiquitous, surely there must be lots of network security precautions in place by default to protect you against them! The truth is, even though security-conscious web services do indeed have such measures in place, the vast majority of the web is still theoretically vulnerable to MITM attacks. Figuring out the vulnerabilities in your network infrastructure is crucial to ensuring that you do not become a victim. For enterprises that manage large networks, it is almost always impossible to ensure all doors to the internal network remain securely locked.
System administrators need to design systems and protections under the assumption that there are unknown and malicious entities within the internal network (if you aren't yet convinced of this point, check out my last blog post). Internal corporate network policy is often designed with efficiency in mind, and makes the erroneous assumption that any entity within the internal network can be trusted. This is why MITM attacks still flourish today, and is also why it is so important to understand which parts of your system are vulnerable to such attacks. With Ravello's nested virtualization environment, you can create a cyber range to understand exactly how MITM attacks work and see how an attacker within your internal network can extract information to the outside world.

Section I: The Set Up

In this post, we will use the Man-in-the-middle Security Playground published in the Ravello Repo to see exactly how easy it is to execute an MITM attack. Add the blueprint to your Library, then, after selecting the ‘Library’ → ‘Blueprints’ tab on the dashboard sidebar, select the blueprint you just added and click the orange ‘Create Application’ button. This will take you to the ‘Applications’ section of the dashboard, where you can launch the application by publishing it to the cloud. This application is a very simple 3-node setup, consisting of a web server, a database, and our starting point for mischief - Kali Linux. Throughout this exercise, assume that you are the attacker, and you have gained access to the Kali Linux node that lies within a corporation's internal network. We are trying to perform a MITM attack on a user who is trying to access the web server from the MySQL host. After starting the application, enter the Kali Linux console.
(login username: “root”, password: “ravellosystems”) The main tool we will be using today is Ettercap, a suite of tools for carrying out MITM attacks that includes a wide arsenal of software used to defeat most common network communication protocols and perform analysis on the target environment. The version of Kali Linux included in the blueprint ships with a graphical version of Ettercap, “ettercap-graphical”. Let's launch it from the Kali application menu, under the “09 - Sniffing & Spoofing” folder. When you first start the Ettercap application, you have to configure and start live connection sniffing before you can carry out any more sophisticated attacks. Click on the “Sniff” dropdown menu and select “Unified sniffing…”. You will be asked to select the network interface to sniff on, which should be “eth0”. Ettercap is now in sniffing mode, and the game has begun. First, Ettercap has to gain knowledge of the hosts within the internal network. The typical way this is done is through the built-in “Hosts” → “Scan for hosts” option, which does an IP range scan and may take some time. To speed things up, let's say that we know that all hosts of interest are in the “10.0.0.*” IP range. With Ettercap still active, open a terminal window in Kali and perform an “nmap” scan, just as we did in the last blog post. Enter the command:

$ nmap 10.0.0.*

This triggers a scan of a narrower IP range, and allows us to get the results we want more quickly. Within seconds, you should see the results of the scan. The two hosts of interest, “webserver.localdomain (10.0.0.4)” and “mysql.localdomain (10.0.0.6)”, have been found. In sniffing mode, Ettercap listens in on any network traffic the local machine is involved in, so there is no need to enter IP addresses manually. After the nmap scan, just go back to the Ettercap window and select the “Hosts” → “Hosts list” option. You will see that 3 hosts have been found.
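Under the hood, a targeted TCP scan like this boils down to attempting connections across an address range. The following stdlib-only sketch is illustrative (nmap is vastly more capable and stealthy); it is demonstrated safely against an ephemeral listener on localhost rather than a real target:

```python
import socket
import ipaddress

def scan_host(ip, ports, timeout=0.5):
    """Return the subset of `ports` on `ip` that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((ip, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports

# The "10.0.0.*" target expands to the 254 usable hosts of 10.0.0.0/24.
targets = [str(h) for h in ipaddress.ip_network("10.0.0.0/24").hosts()]

# Safe demo: open an ephemeral listener on localhost and have the scanner find it.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
demo_port = listener.getsockname()[1]
found = scan_host("127.0.0.1", [demo_port])
listener.close()
```

Repeating `scan_host` over every address in `targets` and a list of well-known ports is, in essence, what a basic connect scan does.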
If you dig through the nmap results in greater detail, you will see that the “10.0.0.1” host is in fact the internal DNS (Domain Name System) server of this network. At this point, let's consider what we can do to mislead the mysql host using Ettercap's arsenal. DNS resolution is a common target for MITM attacks. We can guess that the mysql host communicates with the webserver host through DNS names, which is a fair guess, since there is an internal DNS server which presumably has the DNS entries for “webserver” and “mysql”. Whenever the mysql host wants to communicate with the webserver host, it issues a DNS query to the DNS server (10.0.0.1), which replies with the IP address of the webserver host to be used as the network address.

Section II: DNS Interception

If we can somehow intercept the mysql host's DNS query, and make all traffic from mysql come to the Kali box instead, then we will have succeeded in the most crucial step of a MITM attack. Ettercap has all the tools required to do this. Let's launch a console into the mysql box just to see what effect our attacks will have (login username: “ravello”, password: “ravellosystems”). Open a browser and navigate to “webserver”. You should see a fairly plain-looking example website served by the authentic “webserver” host with IP address 10.0.0.4. How can we make sure of that? Launch a terminal on the mysql host, and use the standard Unix tool “dig” to examine a DNS query for “webserver”:

$ dig webserver

The “ANSWER SECTION” of the response is the important part: the “webserver.” question was given the answer “10.0.0.4”. Note also that the “SERVER” (the DNS server used to serve this response) is 10.0.0.1, which lines up with our earlier assumption that the internal DNS server does indeed contain the entry for “webserver”. All is working well from the point of view of a user on the mysql host!
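To see what a spoofing tool will later forge, it helps to know what a DNS A-record answer looks like on the wire. The following is a hedged, stdlib-only sketch (simplified; real resolvers handle EDNS, name compression in queries, and much more) of a minimal response mapping a name to an address:

```python
import struct
import socket

def build_a_response(query_id, qname, answer_ip, ttl=60):
    """Build a minimal DNS response mapping `qname` to `answer_ip`.

    Layout (RFC 1035): 12-byte header, echoed question, one A-record answer
    whose name is a compression pointer (0xC00C) back to the question name.
    """
    # Header: id, flags 0x8180 (response + recursion available), 1 question,
    # 1 answer, 0 authority, 0 additional records.
    header = struct.pack("!HHHHHH", query_id, 0x8180, 1, 1, 0, 0)
    # Question: length-prefixed labels, root byte, QTYPE=A (1), QCLASS=IN (1).
    labels = b"".join(bytes([len(p)]) + p.encode() for p in qname.split("."))
    question = labels + b"\x00" + struct.pack("!HH", 1, 1)
    # Answer: name pointer, TYPE=A, CLASS=IN, TTL, RDLENGTH=4, then the IPv4.
    answer = struct.pack("!HHHIH", 0xC00C, 1, 1, ttl, 4) + socket.inet_aton(answer_ip)
    return header + question + answer

# The legitimate answer the internal DNS server at 10.0.0.1 returns:
legit = build_a_response(0x1234, "webserver", "10.0.0.4")
```

A DNS spoofer answers the same question, faster than the real server, with the last four bytes swapped for an attacker-chosen address.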
Let's switch back to the dark side and return to the Kali console. Before doing anything further with Ettercap, start a web server to serve some content on the Kali host. This is the content we want to serve the clueless user on the mysql host, assuming our MITM attack is successful. There should already be an “index.html” file in the Kali home directory. The simplest way to serve this content is to start a SimpleHTTPServer with Python:

$ python -m SimpleHTTPServer 80

Next up, we have to configure the “dns_spoof” plugin within Ettercap. Open another terminal window and edit the “etter.dns” file on the system. First locate the file:

$ locate etter.dns

Open it, and see that the line

webserver A 10.0.0.3

is already there. This file is used by Ettercap's “dns_spoof” tool, and the line above simply means that if the target host makes a DNS query for “webserver”, we will reply with (DNS A-record) 10.0.0.3, which is the IP address of the Kali host. Go to the “Host List” tab, select our victim, mysql (10.0.0.6), and select “Add to Target 1”. Then, by selecting “Targets” → “Current targets”, double-check that 10.0.0.6 is indeed in the “Target 1” list. To start the “dns_spoof” tool, first select “Plugins” → “Manage the plugins”, then in the “Plugins” tab that appears, double-click on “dns_spoof” and ensure that an asterisk appears in the first column; this indicates that the dns_spoof tool is running. However, this is not all. The “dns_spoof” tool sends spoofed DNS replies for any DNS queries that come our way, but DNS queries originating from the mysql host do not come through the Kali host - they go straight to 10.0.0.1. How can we make DNS requests come through us? The answer is ARP poisoning.
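Before handing the job to Ettercap, it is worth seeing how small the forged frame at the heart of ARP poisoning actually is. The sketch below is stdlib-only and illustrative; the MAC addresses are made-up placeholders, not the blueprint's real ones:

```python
import struct
import socket

def forge_arp_reply(attacker_mac, claimed_ip, victim_mac, victim_ip):
    """Forge an ARP reply asserting '`claimed_ip` is at `attacker_mac`'.

    Delivered to the victim, this rewrites its ARP cache entry for claimed_ip,
    redirecting that traffic to the attacker. The frame is 14 bytes of
    Ethernet header plus a 28-byte ARP payload.
    """
    eth = victim_mac + attacker_mac + b"\x08\x06"    # EtherType 0x0806 = ARP
    arp = struct.pack("!HHBBH", 1, 0x0800, 6, 4, 2)  # Ethernet, IPv4, op=2 (reply)
    arp += attacker_mac + socket.inet_aton(claimed_ip)
    arp += victim_mac + socket.inet_aton(victim_ip)
    return eth + arp

# Placeholder MACs: the Kali host claims to be the DNS server 10.0.0.1.
kali_mac = bytes.fromhex("2cc2607a0f5a")
mysql_mac = bytes.fromhex("2cc2607a0f5b")  # hypothetical victim MAC
frame = forge_arp_reply(kali_mac, "10.0.0.1", mysql_mac, "10.0.0.6")
```

Forty-two bytes, no authentication anywhere in the protocol - which is exactly why the technique described next works.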
Section III: ARP Spoofing

ARP (Address Resolution Protocol) poisoning is performed by sending spoofed ARP messages into the network with the aim of associating the MAC (hardware) address of the attacker's host with the IP address of another host. Hosts consult their ARP caches to decide where on the local network to send traffic. Therefore, if we associate the Kali host's MAC with the 10.0.0.1 IP address in the mysql host's ARP table, any DNS queries the mysql host makes will be routed through the Kali host. Then “dns_spoof” will be able to work its magic. Select “Mitm” → “ARP poisoning…” in Ettercap, and then check “Only poison one-way” in the selection box that pops up. In just a few simple steps (and without writing a single line of code), we have launched a DNS/ARP poisoning attack within an internal network. Checking back on the mysql host console, see that all browser requests to “webserver” now return the content served by our Python webserver on Kali, and the “dig webserver” command now shows that the IP address returned is 10.0.0.3 - the IP address of the Kali box.

Section IV: Fin

The attack illustrated above shows just how easy it is for an attacker within your internal network to mislead network users. A common technique for eavesdropping would be for the attacker to serve an exact copy of the original site (instead of the “You've been pwned” page) and act as a literal middle-man in all transactions between the user and the server. If the site accepts login credentials or sensitive information, these would easily end up in the hands of the attacker. Because there are so many different ways to carry out a MITM attack, it is difficult to ensure that there are no unintended eavesdroppers on the network. A good mitigation is to use cryptographically secure communications, so that even when traffic is intercepted, the attacker cannot make sense of the content.
This is a major motivation for the move to SSL and HTTPS. Furthermore, many modern DNS servers implement protections against MITM attacks. To be sure that you are protected, you need to perform penetration testing on your environment. A good way of doing that is to use Ravello's nested virtualization technology to spin up a copy of your network infrastructure that you can use as a cyber range. In-network detection tools such as “arpwatch” can also help network administrators keep a close watch on the wire, so that any attack attempts are detected as early as possible. The next time you connect to a public wifi network in your neighborhood coffee shop, do consider that the person sitting next to you might be using Ettercap to steal your login credentials. Always make sure that you are using the latest browsers with the newest security protections and updates, and use a VPN connection if you have one. The best way to know your vulnerabilities is to try to exploit them as an attacker would. I strongly encourage you to use the lab to build an environment that allows you to perform vulnerability assessments on your own systems. Ravello's flexibility allows you to create a close replica of system and network infrastructures within a sandbox that can be repeatedly spun up and destroyed with a few clicks. Once again, keep in mind that breaking into computer systems is illegal. Most system administrators, government agencies, and companies don't have any sense of humor when it comes to security, and you don't have to do any real damage to get into a considerable amount of trouble. Stealing credentials in your neighborhood coffeeshop with a WiFi Pineapple is not cool.


Penetration testing on AWS: Think like your attacker

Author: Clarence Chio

Clarence is a Security Research Engineer at Shape Security, working on the system that tackles malicious bot intrusion from the angle of big data analysis. Clarence has presented independent research on Machine Learning and Security at Information Security conferences in several countries, and is also the organizer of the “Data Mining for Cyber Security” meetup group in the SF Bay Area.

In the previous post in the pentest on AWS and Google series, we set up a complete security testing environment to play with. As you have seen, it really isn't that difficult for an attacker to pwn your network. A lot of what attackers do is observation, trial-and-error, and guesswork. I left most of those parts out of the article, but bad network cleanliness and practices make things a lot simpler for adversaries. All of the techniques we have discussed are real techniques that take advantage of real (and sometimes even common) security loopholes that are frequently overlooked. Here are some things the network administrator could have done to disrupt the attacker's kill chain:

- Fine-grained access control for database users, i.e. the wordpress@mysql user can only access the WORDPRESS table
- Don't let arbitrary users have read permissions on the wp-config.php file
- Remove the ability to perform passwordless SSH between nodes, and don't store any server access keys in the clear (this is by far the top method attackers use to pivot between nodes)
- Use firewall rules and thresholds to detect nmap attempts
- Understand the norm of traffic flow in your network, and immediately alert administrators when anything abnormal is detected

Once again, I strongly encourage you to use the lab to build an environment that allows you to perform vulnerability assessments on your own systems. Ravello's flexibility allows you to create a close replica of system and network infrastructures within a sandbox that can be repeatedly spun up and destroyed with a few clicks.
Most importantly, keep in mind that breaking into computer systems is illegal. Most system administrators, government agencies, and companies don’t have any sense of humor when it comes to security, and you don’t have to do any real damage to get into a considerable amount of trouble. Just trying to break into a system is a serious offence in many jurisdictions.


Pentesting on AWS: Network Penetration Testing Playground

Author: Clarence Chio

Clarence is a Security Research Engineer at Shape Security, working on the system that tackles malicious bot intrusion from the angle of big data analysis. Clarence has presented independent research on Machine Learning and Security at Information Security conferences in several countries, and is also the organizer of the “Data Mining for Cyber Security” meetup group in the SF Bay Area.

This next post in the network penetration testing lab series will get you acquainted with the technical details of the pentest blueprint and the settings required to test security capabilities and run pentesting on AWS or Google Cloud.

Section I: Setting Up Your Environment

To know what an attacker is going to do, you have to learn to think like an attacker - only then can you understand the pain of attackers and make their lives that much harder. This exercise requires you to deploy the “Network Penetration Testing Playground” blueprint that I have published on the Ravello Repo. If you don't yet have a Ravello account, sign up for one - it's free to try, you don't even have to provide your credit card information, and your VMs are complimentary during the trial period. Add the blueprint to your Library and proceed to the Ravello dashboard. After selecting the ‘Library’ → ‘Blueprints’ tab on the dashboard sidebar, select the blueprint you just added to your library and click the orange ‘Create Application’ button. This will take you to the ‘Applications’ section of the dashboard, where you can launch the application by publishing it to the cloud. In roughly 10 minutes, you will have deployed a 5-node cyber range in the cloud. Once you see that all 5 VMs are running, we will begin our journey of infiltration. Even though it is probably obvious from the hostnames of the servers, I will not be revealing details about this environment at this point.
We will treat this as a black box, and attempt to find out precisely how these nodes are connected to one another, what they are running, and whether we can find anything of value - just like an attacker would. (Get it on Repo - REPO, by Ravello Systems, is a library of public blueprints shared by experts in the infrastructure community.)

Section II: Discovering Network Topology

Consider that you have somehow gained access to a host in this unfamiliar, foreign network. Conveniently, this host happens to run Kali Linux - all the tools for network exploitation are literally at your fingertips. You may think this is cheating - but remember that many of the tools Kali provides out-of-the-box can be installed on arbitrary servers on demand. Using the ‘Console’ feature of the Ravello platform is the easiest way to get command line or graphical access to the boxes within your web browser. You can also SSH into the boxes from your own terminal by following the instructions provided in the ‘More’ tab at the bottom of the dashboard right sidebar under ‘Summary’. Select the “Kali Linux” VM under the “VMs” tab of the Ravello dashboard, and click the “Console” button to launch a new tab where you can log in to the box with the username root and password ravellosystems. Given that we have no idea what other servers are on this network, determining the landscape will be our first task. Most security professionals swear by nmap (network mapper), a probing tool that discovers hosts and services by sending specially crafted packets to a range of IP addresses and ports, then parsing the responses to gather intel. In this step, we will use a higher-level tool, Zenmap, which makes use of nmap under the hood but also has some convenient features that make our task slightly easier. Zenmap can be found under the ‘Applications’ drop-down menu in the Kali desktop menu bar, under the ‘Information Gathering’ group.
The only thing you have to do is provide a target IP range for scanning. What’s the IP of the box you’re on? Opening a terminal window and looking at the network interface configuration seems like a good way to find out.

$ ifconfig
eth0  Link encap:Ethernet  HWaddr 2c:c2:60:7a:0f:5a
      inet addr:10.0.0.3  Bcast:10.0.0.255  Mask:255.255.0.0
      ...

The internal IP address of the eth0 interface is ‘10.0.0.3’ (yours may be different - just substitute the value you see under the ‘inet addr:’ listing in ifconfig for 10.0.0.3 below). What range of IP addresses should you scan? You could scan the entire IP range, but it would take too long, and the chances of being detected are too high. You want to keep these port scans brief and targeted to avoid any detection mechanisms the victim has in place. It seems reasonable to start off by scanning the “10.0.0.*” range. Enter that into the ‘Target’ text field and you’ll see that it translates into this nmap command:

nmap -T4 -A -v 10.0.0.*

Hitting the ‘Scan’ button executes an ‘Intense scan’ on the IP range, targeting only the most common TCP ports. The ‘-T4’ option specifies a timing template, the slowest being 0 and the fastest being 5. The ‘-A’ option tells nmap to enable OS detection, version detection, script scanning, and traceroute. The ‘-v’ option is a verbosity setting that gives us some feedback as the scan is under way. The scan should take a few minutes to complete. When it finishes, you should see the message “Nmap done: 256 IP addresses (7 hosts up) scanned…” Let’s explore the results. Immediately, you should see that there are 7 servers reachable from the host we are on. Wait - isn’t it only a 5-node environment? Let’s go over to the “Ports/Hosts” tab and look at all the servers and open ports we found. Focus your attention on the “hvx_dhcp_11128.localdomain (10.0.0.1)” host. We see that port 53 is open, and nmap has identified it as running a domain resolution service.
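The target-range reasoning above can be expressed as a small shell sketch - a minimal, illustrative helper (10.0.0.3 is the example address from ifconfig; substitute your own):

```shell
# Derive a wildcard scan target from the local address, then print
# the same command Zenmap generates for an 'Intense scan'.
ip="10.0.0.3"            # value from 'inet addr:' in ifconfig
target="${ip%.*}.*"      # drop the last octet -> 10.0.0.*
echo "nmap -T4 -A -v $target"
```

Striping the last octet this way keeps the scan confined to the local /24 slice, which matches the brief-and-targeted approach described above.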
Indeed, port 53 is the DNS port, and from the server hostname it’s reasonable to guess that this server is the DHCP server for the environment. In fact, this host is present by default in every Ravello application, and is used for internal DNS resolution. The second item on the list has hostname “default-tcuilj99qx4ss….”, but doesn’t seem to have any common open ports. This seems to be some Ravello application management node that’s also not part of the Ravello user space. Because it doesn’t have any open ports, it’s less interesting to us. Let’s move on. The next 5 servers are more interesting. We see the host we’re currently on, with hostname “kali.localdomain” at IP 10.0.0.3, but also “wordpress-a.localdomain”, “wordpress-b.localdomain”, “mysql.localdomain”, and “loadbalancer.localdomain”. The pot of gold at the end of the rainbow is most probably the server named “mysql.localdomain”. Does it actually run the MySQL database? We can check by selecting it on the sidebar. Indeed, we see that port 3306 is open, running service “mysql” - more specifically, “MySQL 5.1.73”. Why are databases so exciting to attackers? Data dumps are perhaps the most valuable and destructive form of attack, and access to a database raises exactly that possibility. Data dumps may contain anything from usernames and password hashes to email addresses, telephone numbers, and credit card numbers - you can see why attackers get excited by these. At a glance, it seems this network environment could be a typical web server setup. As you may have noticed by now, it’s often difficult and tedious for attackers to be 100% certain of anything about the environment. Making informed guesses about systems often works just as well, keeping in mind that people tend to stick to defaults. A web developer would guess that the load balancer server probably balances across the two wordpress instances on the hosts “wordpress-a” and “wordpress-b”.
Some research will inform you that WordPress installations, by default, require a MySQL database instance. This is probably what the mysql host is for. Exploring further, we see that the “Topology” tab gives us a nice visualization of the detected network topology that seems to confirm our guesses about the web server environment. This visualization is one of the extras that Zenmap has over pure nmap. For the purposes of this exercise, we won’t be using information from the “Topology” tab. Now, how do we get access to the MySQL database?

Section III: Pivoting Between Servers

One could of course use a password-cracking tool like John the Ripper to brute-force username/password combinations, but we’ll save that technique for another day. We’ll try to find other security weaknesses in the network. Anyone familiar with WordPress will know that the WordPress database username and password are located in the WordPress config files. If we can get access to either of the WordPress servers, we may be able to locate those config files. Servers set up by careless administrators sometimes have passwordless SSH access to other servers in the network. Let’s try to enumerate the possibilities… (some guesswork required) Why don’t we try this? We’re in. The sad but true fact is that, in reality, you don’t actually have to be that lucky to come across something like this. Well, now that you’ve pivoted to the ‘wordpress-a’ host, let’s get cracking on finding that WordPress config file. Some research will tell you that the config filename is wp-config.php. We could use a Unix find command, or just navigate to the typical path for served web pages - /var/www. Once there, we see the wordpress folder, with the hallowed wp-config.php contained within it. [Screenshot: ls output with wp-config.php highlighted] With any luck, we have read permissions. Explore the file, and you will soon realise that we have struck gold.
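The find-based hunt mentioned above can also be scripted - a minimal sketch, assuming the web-root layout described in the walkthrough (the helper name is ours, not part of the exercise):

```shell
# find_wp_config DIR - print the path of any wp-config.php under DIR.
# Errors (e.g. unreadable directories) are discarded to keep output clean.
find_wp_config() {
  find "$1" -name 'wp-config.php' 2>/dev/null
}

# On the wordpress-a host, the served web root is /var/www.
find_wp_config /var/www
```

On a target where the web root is not in the default location, pointing the helper at / will still work, just more slowly and noisily.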
[Screenshot: WordPress DB username and password in wp-config.php] Since we are certain that the WordPress hosts must have access to the MySQL database, let’s try using the mysql command-line interface to log in:

$ mysql -h 10.0.0.6 -u ravello -p

Enter the password: password. [Screenshot: MySQL login] Well, the only thing left to do is to see what databases there are (and whether we have access):

mysql> SHOW DATABASES;

I wonder what’s in the “confidential_user_info” database? From here onwards, you can use mysqldump or similar tools to exfiltrate data. We have successfully pwned this web server environment.

Section IV: Fin

If you followed the above exercise through, you have successfully performed a database dump and stolen user information from a web site, possibly containing credit card numbers, email addresses, and all kinds of personally identifiable information.
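As a closing illustration, the mysqldump exfiltration step mentioned above might look like the following - a hypothetical sketch reusing the host, user and database names from the walkthrough, assembled for review before being run (only ever do this against labs you own):

```shell
# Assemble (but do not yet execute) the dump command, using the
# credentials recovered from wp-config.php earlier in the exercise.
host="10.0.0.6"; user="ravello"; db="confidential_user_info"
cmd="mysqldump -h $host -u $user -p $db > dump.sql"
echo "$cmd"
```

Echoing the command first is a small habit that prevents fat-fingered dumps against the wrong host; paste the printed line into the shell to actually run it.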


Ravello Community

Network penetration testing labs on AWS and Google Cloud with Ravello

Author: Clarence Chio   Clarence is a Security Research Engineer at Shape Security, working on the system that tackles malicious bot intrusion from the angle of big data analysis. Clarence has presented independent research on Machine Learning and Security at Information Security conferences in several countries, and is also the organizer of the “Data Mining for Cyber Security” meetup group in the SF Bay Area. This blog series discusses how to set up a network penetration testing (aka pentest) environment where one can perform security audits and test the security capabilities of a network (such environments are typically called cyber ranges). Instead of using an enterprise data center, we'll be using a life-like environment on AWS or Google Cloud as the pentest environment. Rob Joyce, chief of the NSA’s Tailored Access Operations (TAO) unit, gave a great talk at the USENIX Enigma conference earlier this year. If you’re unfamiliar with TAO - it’s the NSA’s elite network infiltration and exploitation team. Their motto is “Your data is our data, your equipment is our equipment - any time, any place, by any legal means.” If that doesn’t scare you just a little bit, I’m not sure what will. If you haven’t already read my previous blog post about running a simple penetration testing lab on Ravello, check it out. We’ll be using some of the same tools this time, but will be focusing primarily on network infiltration and exploitation. In this blog post, we play the attacker, and walk you through exploiting a multi-node playground environment for fun and profit, with minimal prior knowledge about the environment. Ravello’s nested virtualization technology allows you to make realistic copies of your network infrastructure and run them in encapsulated environments on Amazon Web Services or Google Cloud Platform. Within these environments, you can perform security audits and test the security capabilities of your network without side effects.
This is popularly known as a Cyber Range, a concept popularized by the Department of Defense and associated government agencies for the purpose of finding security holes in a mock environment before attackers find and exploit them in operational environments. "If you really want to protect your network, you really have to know your network. You have to know the devices, the security technologies, and the things inside it. Why are we successful? We put the time in to know that network, we put the time in to know it better than the people who designed it and the people who are securing it, and that's the bottom line." — Rob Joyce Many of the high-profile security and data breaches in recent years were only possible because attackers exploited a weak link in the victim’s network. Whether that weak link is a Point-of-Sale (PoS) system plagued with vulnerabilities or an external vendor with access to your network whose credentials were stolen, you need to assume that you cannot realistically keep watch on all locked doors. Attackers will be able to enter your network if they have the patience and know-how. The real question is what they are able to achieve after they get in. Typically, attackers that gain access to a single node will try to gather as much information as they can about the network topology. After getting a good understanding of the open ports, operating systems, and exposed services of nodes in the network, they will pivot and try to secure access to other nodes, constantly on the lookout for valuable information on the exploited nodes. By doing this repeatedly, they can gain access to your entire network. In the next post we’ll get your pentesting environment set up using the “Network Penetration Testing Playground” blueprint published on the Ravello Repo.


Ravello Community

How to run VMware NSX and Cisco Nexus 1000v on AWS & Google Cloud

Author: Matt Conran   Matt Conran is a Network Architect based out of Ireland and a prolific blogger at Network Insight. In his spare time he writes on topics ranging from SDN, OpenFlow, NFV, OpenStack, Cloud, Automation and Programming. Network and data-center architects are evaluating network virtualization solutions to bring workload agility to their data-centers. This article (part 3 of a 3-part series) details how to set up fully-functional VMware NSX and Cisco Nexus 1000v deployments on Ravello to evaluate each of the solutions. Part 1 compares the architectural components of Cisco Nexus 1000v and VMware NSX, and Part 2 looks into the capabilities supported by each.

Setting up Nexus 1000v and vSphere 6.0 on public cloud

In this section we will walk through setting up a VMware vSphere 6.0 environment with the addition of Cisco’s Nexus 1000v on AWS & Google Cloud using Ravello, and save a ‘blueprint template’ of the setup for one-click deployment. VMware vSphere is a virtualization platform that by default comes with a standard virtual switch and a distributed virtual switch (DVS). The Nexus 1000v is a Cisco product integrated into vCenter for additional functionality. Similar to the VMware VDS, it follows a distributed architecture and is Cisco's implementation of the distributed virtual switch (a generic term among vendors). The Nexus 1000v is a distributed platform that uses a VEM (Virtual Ethernet Module) for the data plane and a VSM (Virtual Supervisor Module) for the control plane. The VEM operates inside the VMware ESXi hypervisors. The setup consists of a number of elements. The vCenter server runs on a Windows Server 2012 machine (trial edition), not as an appliance, and acts as the administration point for the virtualized domain. Two ESXi hosts are installed with test Linux VMs. Later, we will install the Nexus 1000v modules, both VEM and VSM, on one of the ESXi hosts. The vSphere client version 6 is also installed on the Windows 2012 server.
The ESXi hosts have default configurations, including the standard vswitch and port groups. The architecture below is built into a working blueprint, enabling you to go on to build a variety of topologies and services. One requirement for any Nexus 1000v deployment is a vSphere Enterprise Plus licence, so the Nexus 1000v is installed in an Enterprise Plus licensed vSphere environment. We currently have two ESXi hosts and one vCenter. A flat network of 10.0.0.0/16 is used, so we have IP connectivity between all hosts. We install the Nexus version (Nexus 1000v.5.2.1.SV3.1.5a-pkg.zip), which is compatible with vSphere 6.0. It can be downloaded from the Cisco website for free with your Cisco CCO account. There are two editions of the Nexus 1000v available - Essential and Advanced. Advanced has additional features and requires a license; the Essential edition has a slightly reduced feature set but is free to download. This blueprint uses the Essential edition.

Nexus 1000v Installation

Once downloaded, you can deploy the OVA within vCenter. There are a number of steps you have to go through, such as setting the VSM domain ID and management address. Once finished, you should be able to see the N1KV deployed as a VM in your inventory. Power it on and SSH to the management IP address. The N1KV has the concept of control and packet VLANs. It is possible to use VLAN 1 for both; for production environments, it is recommended to separate them. This blueprint uses Layer 3 mode, so we don't need to do this. Next, we must register the Nexus 1000v with vCenter by downloading the Nexus 1000v extension and importing it into vCenter. Go to the web GUI of the VSM and right-click the extension link to save it. Once complete, you can import the extension as a plugin into vCenter. Now we are ready to log back into the VSM and configure it to connect to vCenter. Once this is done, in vCenter you will see the Distributed Switch created under Home > Inventory > Networking.
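For reference, the VSM-to-vCenter connection configured in this step is only a few lines of NX-OS. The following is a hedged sketch - the connection name, datacenter name and IP address are illustrative, not values taken from this blueprint:

```
! Hypothetical svs connection on the VSM (names and IP are examples)
svs connection vcenter
  protocol vmware-vim
  remote ip address 10.0.0.10 port 80
  vmware dvs datacenter-name RavelloDC
  connect
```

After `connect` succeeds, the distributed switch should appear in vCenter as described above.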
Next, we install the VEM (Virtual Ethernet Module) on the ESXi host and connect the host to the N1KV VSM. Once the VEM is installed, you can check its status and make sure it's connected to vCenter. The following screen shows the VSM connected to vCenter. The following screen shows the VEM correctly installed. This step needs to be carried out on every ESXi host that requires the VEM module. Once installed, the VEM gets its configuration from the VSM. Now you are ready to build a topology by adding hosts to your Nexus 1000v. For example, install the VEM on the other ESXi host and add an additional VSM for high availability. With Ravello this is easy to do: simply save the ESXi host to the library and add it into the setup. Remember to change the DNS and IP settings on the new ESXi host. Once this deployment is created, you can click “Save as Blueprint” to have the entire topology - complete with VMs, configuration, networking and storage interconnect - saved into your Blueprint library, where it can be used to run multiple clones of this deployment with one click.

Setting up VMware NSX on public cloud

VMware NSX is a network and security virtualization platform. The concept of network virtualization involves decoupling the control and data planes, offering an API to configure network services from a central point. NSX abstracts the underlying physical network and introduces a software overlay model that rides on top of it. The decoupling permits complex network services to be deployed in seconds. The diagram below displays the NSX blueprint created on Ravello. Its design is based around separation into clusters, for management and data plane reasons. The following is a summary of the prerequisites for NSX deployments: The standard vSphere client cannot be used to manage NSX; for this reason, the vSphere Web Client is used. A vCenter Server (version 5.5 or later) with at least 2 clusters.
For multi-vCenter deployments you will require vCenter version 6.0, plus NTP and DNS. Deploy distributed virtual switches instead of standard virtual switches; the VDS forms the foundation of the overlay VXLAN segments. The following ports are required: TCP ports 80 and 443 for vSphere communication and the NSX REST API; TCP ports 1234, 5671 and 22 for host-to-controller-cluster communication, the RabbitMQ message bus and SSH access. NSX Manager and its components require a considerable amount of resources; pre-install checks should verify the CPU, memory and disk space required for the NSX Manager, NSX Controller and NSX Edge components. The NSX deployment consists of a number of elements. The two core components are the NSX Manager and the NSX Controller. The NSX Manager is an appliance that can be downloaded in OVA format from VMware's website. The recommended approach is to deploy the NSX Manager on a separate management cluster, apart from the compute cluster. The separation allows the decoupling of the management, data, and control planes. All configuration is carried out in the “Networking & Security” tab. The diagram below displays the logical switches and the Transport Zone they represent. ESXi hosts that can communicate with each other are said to be in the same transport zone. Transport zones control the domains of logical switches, which enables a logical switch to extend across distributed switches. Therefore, any ESXi host that is a member of that transport zone may have multiple VMs on that network. The management cluster below runs the vCenter server, and the NSX controllers are deployed in the compute clusters. Each NSX Manager should be connected to only one vCenter. The NSX Manager has a summary tab in its own GUI as well as in the Web Client; you may also SSH to its IP address. The diagram shows the version of NSX Manager with ARP and ping tests. The next component is the NSX control plane, which consists of controller nodes.
There should be a minimum of three controller virtual machines; three controllers are used for high availability, and all of them are active at any given time. The deployment of controllers is done via the Networking & Security | Installation and Management tab. From here, click on the + symbol to add a Controller. Data plane components must be installed on a per-cluster basis. This involves preparing the ESXi hosts for data plane activity, and enables the distributed firewall service on all hosts in the cluster. Any new hypervisors installed and added to the cluster will be provisioned automatically. Just as with the Cisco Nexus 1000v, once this NSX deployment is created you can click “Save as Blueprint” to have the entire topology - complete with VMs, configuration, networking and storage interconnect - saved into your Blueprint library, where it can be used to run multiple clones of this deployment with one click. The current NSX blueprint is already pretty big, with multiple clusters, but it can also be easily expanded, similarly to the vSphere 6.0 and Cisco Nexus 1K blueprint. Nodes can be added by saving the item to the library and inserting it into the blueprint. There are many features to test with this blueprint, including logical switching, firewalling, routing on edge service gateways, SSL and IPsec VPN, data security and flow monitoring. With additional licences you can expand this blueprint to use third-party appliances, such as Palo Alto. The vSphere 6.0 and Cisco Nexus 1000v deployment can likewise be expanded to a much larger scale. Additional ESXi hosts can be added by saving the item to the library and inserting it into the blueprint. With this type of flexibility we can easily scale the blueprint and deploy multiple VEMs and VSMs. A fully distributed design will have multiple VEMs installed. With additional licences you can insert other Cisco appliances that work with the Nexus 1000v.
This may include the VSG or the vASA, allowing you to test service chaining and other advanced security features. If you are interested in trying out this blueprint, or creating your own VMware NSX or Cisco Nexus 1000v deployment from scratch, just open a Ravello trial account and send us a note. You will be on your way to playing with this fully functional VMware NSX or Cisco Nexus 1000v deployment within minutes, or building your very own deployment using Ravello’s Networking Smart Labs.
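The NSX port prerequisites listed earlier (TCP 80, 443, 1234, 5671 and 22) lend themselves to a quick reachability sketch. This is an assumption-laden helper, not part of VMware's official pre-install checks - the hostname is illustrative, and the /dev/tcp redirection requires bash:

```shell
#!/bin/bash
# check_port HOST PORT - succeed if a TCP connection can be opened.
check_port() {
  (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null
}

# nsx-manager.invalid is a placeholder; substitute your NSX Manager address.
for p in 80 443 1234 5671 22; do
  if check_port nsx-manager.invalid "$p"; then
    echo "port $p reachable"
  else
    echo "port $p unreachable"
  fi
done
```

Running this from each ESXi management host before installation gives an early warning of firewall rules that would otherwise surface as obscure controller-deployment failures.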


Choosing between VMware NSX and Cisco Nexus 1000v

Author: Matt Conran Matt Conran is a Network Architect based out of Ireland and a prolific blogger at Network Insight. In his spare time he writes on topics ranging from SDN, OpenFlow, NFV, OpenStack, Cloud, Automation and Programming. With the SDDC (Software Defined Data Center) gaining prominence, network architects, administrators and data-center experts in enterprises around the globe find themselves staring at an inevitable question – should I go for a vSphere environment with Cisco Nexus 1000v, or VMware’s NSX, as the network virtualization solution that facilitates my SDDC? This article (part 2 of a 3-part series) compares Cisco Nexus 1000v with VMware NSX from the deployment model, components, multi-data-center support and network services perspectives. Part 1 walks through the architectural elements of Cisco Nexus 1000v and VMware NSX, and Part 3 walks through how to set up a fully functional environment of each on Ravello Networking Smart Labs (powered by nested virtualization and networking overlay). VMware NSX for vSphere is built on top of the vSphere Distributed Switch and cannot run on top of the Cisco Nexus 1000v. If you have a vSphere environment already operating with the Cisco Nexus 1000v and you are considering a jump to the API-driven NSX world, this article will also help you understand the benefits and disadvantages of making that jump.

Deployment Model

VMware NSX is an entire suite of network and security services - in-kernel distributed firewalling and routing, load balancing, gateway nodes and redundant clusters - and all components can be managed from a GUI platform. Cisco’s Nexus 1000v is an add-on module for existing vSphere environments that may be integrated with other Cisco products such as the VSG and vASA. For a platform-to-platform comparison, NSX and Cisco ACI are more comparable, as each represents a full, tightly integrated service suite.
But if you have a Nexus 1000v deployed in your existing VMware environment, what benefits do you gain from upgrading to the entire NSX suite? The NSX platform is not a gentle upgrade or something you can deploy in pockets / islands of the network. Planning is essential for an NSX upgrade. NSX operates as an overlay model, so it really is a big-bang approach and requires entire-team collaboration. While the Cisco Nexus 1000v and its supporting products are add-on modules that can be gently introduced, NSX requires a greenfield deployment model. The old and new networks can be linked, and applications with their corresponding services migrated over time. Greenfield deployments are less risky, but parallel networks come at a cost.

Component Introduction

NSX operates on the VDS, and the feature set between the Nexus 1K and the VDS is more or less the same. Depending on the release dates, one may outperform the other for a period of time, but there is not too much of a difference. Setting aside the additional integrated nodes NSX offers (controller clusters, edge service gateways, cross-vCenter), it has two great new feature sets - the distributed in-kernel firewall and distributed in-kernel forwarding. NSX has edge router functionality used for various services - VPN, firewall and load balancing - and supports dynamic routing (BGP and OSPF). The distributed edge router sits in the control plane and communicates with the controller, which in turn communicates with the NSX Manager. The NSX edge services router sits in the data plane. The Nexus 1000v has two editions - Essential and Advanced. The Essential edition is free to download with a CCO account, and the Advanced edition requires a purchased licence. Advanced supports additional features such as Cisco Integrated Security Features (ISF) - DHCP snooping, IP source guard, and Dynamic ARP Inspection - along with TrustSec and VSG. Both editions share quite a few features, and both can be integrated with additional Cisco products.
The following table displays the feature parity (from cisco.com) between the Cisco Nexus 1000V Essential Edition (no cost) and the Advanced Edition (with Cisco VSG), listed as Essential / Advanced:

- Layer 2 switching features (VLANs, private VLANs, loop prevention, multicast, virtual PortChannel (vPC), Link Aggregation Control Protocol (LACP), access control lists (ACLs), etc.): Included / Included
- Network management features and interfaces (Cisco Switched Port Analyzer (SPAN), Encapsulated Remote SPAN (ERSPAN), and NetFlow Version 9; VMware vTracker and vCenter Server plug-in; SNMP; RADIUS; etc.): Included / Included
- Advanced features (ACLs, quality of service (QoS), and VXLAN): Included / Included
- Cisco vPath (for virtual service insertion): Included / Included
- Cisco Integrated Security Features (ISF): DHCP snooping, IP source guard, and Dynamic ARP Inspection: Not supported / Included
- Cisco TrustSec SGA support: Not supported / Included
- Cisco VSG: Supported / Included
- Other virtual services (Cisco ASA 1000V, vWAAS, etc.): Available separately / Available separately

The main reasons for upgrading from a vSphere–Cisco Nexus 1000v environment are architectural and operational benefits. From an operational perspective it may be simpler to have everything under one roof with NSX. VMware NSX clearly has many more components and network services than the Nexus 1000v. But if your business and application requirements are met by existing infrastructure based on the Cisco 1K (potentially with other virtual services), you may choose to avoid the big-bang upgrade to NSX.

Multi-Data Center

NSX is a complete network and security solution that operates on the VDS. With the release of software version 6.2, NSX supports vSphere 6.0 Cross-vCenter NSX. Previously, logical switches, routers, and distributed firewalls had a single-vCenter deployment model. Now, with 6.2, these services can be deployed across multiple vCenters.
This enables logical network and security services for workloads to span multiple vCenters, and even physical locations. A potential use case is combining multiple physical data centres that have different vCenters. This new design choice by VMware promotes the NSX Everywhere product offering. NSX enables an application and its corresponding network / security services to span multiple data centers. All your resources are pooled together, and the location of each is hidden behind a software abstraction layer. This offers a new disaster avoidance and disaster recovery model. For traffic steering, previous active-active data center designs might need additional kludges such as LISP, /32 host routing or HSRP localisation. Without proper configuration of these kludges, all east-west traffic could trombone across the DCI link. They all add to network complexity and only really deal with egress traffic; ingress traffic still needs proper application architecture and DNS load balancing. NSX is a proper virtualization platform, and you don't need to configure extra kludges for a multi-data-center design. It has a local egress optimization feature, so traffic exits at the correct data centre point and does not need to flow over the delicate DCI link. Unlike Cisco ACI (which is comparable to VMware NSX), the Cisco Nexus 1000v is not a complete solution for multi-data-centre support, but it has capabilities to link data centres together. Similar to VMware NSX, the Nexus 1000v supports VXLAN - a MAC-over-IP technology. VXLAN is used to connect Layer 2 islands over a Layer 3 core, so if you have applications that require Layer 2 adjacency in different data centers, could you use the Nexus 1000v as the DCI mechanism? Technically, it's possible. The problem with VXLAN in the past has been its control plane, and the initial releases of VXLAN required a multicast-enabled core.
The latest releases of the Nexus 1000v do offer enhancements, including Multicast-less mode, Unicast Flood-less mode, VXLAN Trunk Mapping and Multiple MAC Mode. The new modes increase the robustness of VXLAN. However, VXLAN was developed to support multi-tenancy in the cloud, and this is how it will probably continue to be developed in further releases. By itself, the Nexus 1000v doesn't offer great DCI features and capabilities. It may, however, be used in conjunction with other DCI technologies to become part of a more reliable DCI design.

Network Services

A major selling point for NSX is its ability to support VM-NIC firewalls. VMware has a built-in distributed firewall feature allowing stateful filtering services to be applied at the VM NIC level. This gives you an optimal way to protect east-west traffic along with a central configuration point. Individual policies do not need to be configured on a per-NIC basis: all the configuration can be done in a GUI and propagated down to the individual VM NICs. The entire solution scales horizontally - as you add more compute hosts, you get more VM-NIC firewalls. Micro firewalls do not result in traffic tromboning or hairpinning, offering optimal any-to-any traffic. By default, the Nexus 1000v does not offer a distributed firewall model, but it can be integrated with the VSG and the vASA. These additional modules are supported in the Standard and Enterprise editions, and both can be managed by the Cisco Virtual Network Management Center. The VSG is a multi-tenant security firewall that implements policies that move with mobile virtualized workloads. It decouples control and data plane operations and connects to the Nexus 1000v VEM using vPath technology, which steers the traffic. It employs a scalable model: only the initial packet is sent to the VSG, and subsequent packets are offloaded to vPath on the VEM.
VMware NSX allows you to decouple networking from the physical assets by leveraging the hypervisor edge - the new access switch. Decoupling network functions from hardware also makes it possible to virtualise those functions. The main driver for NSX is that its network virtualization approach is API driven: network virtualization provides the abstraction from the physical assets, and all of this is driven through an API. Yes, you can automate using some sort of CLI wrapper, but that approach just doesn't scale. Most CLI-wrapping approaches fail as soon as it comes to looking at the entire lifecycle of a component, not only its creation. It is possible to automate the creation of an asset by writing different CLI scripts for certain actions, but what about advanced operations - such as querying status, capacity and free resources, or removing assets? That would bring a lot of operational complexity into the scripts. An API solution is far superior to, and easier to manage than, a CLI hidden behind an orchestrator. The capabilities compare as follows (VMware NSX / Nexus 1000v & vSphere):

- Multi Data Center: Built in, with local egress support, and promoted with NSX Everywhere / Not a true DCI product but technically capable with additional technologies
- Service Chaining: Built-in service chaining / Service chaining with vPath
- Distributed Firewall: Built-in distributed VM-NIC firewall / Add-on modules
- Edge Service Gateway: Built in / Potential edge services with add-on modules
- Virtual Private Networks: SSL, IPsec, L2 VPN / Potentially with add-on modules
- End-to-end activity monitoring: Traceflow / N/A, but has NetFlow, SPAN, and Encapsulated Remote SPAN
- Services - DHCP & DNS: Yes / Yes
- Abstracted Security: Yes / No
- API driven: Yes, full API solution / Orchestrated

Conclusion

So should you switch to VMware NSX from Cisco Nexus 1000v for your SDDC? The answer is - it depends.
If your existing business and technical requirements are already being met and you don’t want to take a big-bang approach to change everything – Cisco Nexus 1000v is the way to go. If you are looking for a greenfield approach to build your SDDC with strong out-of-box integration with existing VMware resources – NSX will help you get there quicker. Interested in trying out both Cisco and VMware solutions to get a feel for which one is right for you? Just open a Ravello trial account, and reach out to Ravello. They can help you run the Cisco Nexus 1000v and VMware NSX solutions showcased here with one click.
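As a footnote to the API-versus-CLI point above: with an API-driven platform, the whole lifecycle of an asset maps onto uniform REST verbs instead of one bespoke CLI script per action. The sketch below only constructs the requests it would send; the manager hostname and endpoint path are illustrative placeholders, not documented NSX URLs.

```shell
#!/bin/sh
# Illustrative only: nsxmgr.example.com and /api/edges are placeholders,
# not documented NSX endpoints.
BASE="https://nsxmgr.example.com/api/edges"

# Print the request each lifecycle action would issue
# (a real client would run: curl -k -u admin -X "$1" "$2").
request() { echo "$1 $2"; }

request POST   "$BASE"          # create an asset
request GET    "$BASE/edge-1"   # query status and capacity
request DELETE "$BASE/edge-1"   # remove the asset
```

Querying, auditing and deleting all reuse the same pattern, which is what makes the full lifecycle scriptable; a CLI wrapper typically needs bespoke screen-scraping for each of those actions.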

Author: Matt Conran Matt Conran is a Network Architect based out of Ireland and a prolific blogger at Network Insight. In his spare time he writes on topics ranging from SDN, OpenFlow, NFV, OpenStack, Cloud, Automation and Programming.

Ravello Community

VMware NSX and Cisco Nexus 1000v Architecture Demystified

Author: Matt Conran. Matt Conran is a Network Architect based out of Ireland and a prolific blogger at Network Insight. In his spare time he writes on topics ranging from SDN, OpenFlow, NFV, OpenStack, Cloud, Automation and Programming.

Network virtualization brings many benefits to the table: reduced provisioning time, easier and cheaper network management, and agility in bringing up sophisticated deployments, to name a few. A large number of network and data-center architects around the globe are evaluating VMware NSX and Cisco Nexus 1000v to enable network virtualization in their data-centers. This article (part 1 of a 3-part series) walks through the architectural elements of VMware NSX and Cisco Nexus 1000v, and explains how Ravello (powered by nested virtualization and networking overlay) can be used as a platform to run and deploy each of the solutions with a couple of clicks for evaluation during the decision-making process. Part 2 compares capabilities supported by Cisco Nexus 1000v and VMware NSX, and Part 3 walks through the steps to create a Cisco Nexus 1000v and VMware NSX deployment on Ravello.

The Death of the Monolithic Application

The meaning of the application has changed considerably over the last 10 years. Initially, we started with a monolithic application stack where one application was installed per server. The design proved to be very inefficient and a waste of server resources: when would a single application ever consume all of a server's resources? Almost never, unless there was a compromise or some kind of bug. Single-server application deployment also carries considerable vendor lock-in, making it difficult to move the application from one server vendor to another. The application has since changed to a multi-tiered stack and is no longer installed on a single server. The application stack may have many dispersed tiers requiring network services such as firewalling, load balancing and routing between each tier.
Physical firewall devices can be used to provide these firewalling services. Physical devices have evolved to provide multi-tenancy through features such as VRFs and multiple contexts, but it is very hard to move a firewall in the event of an application-stack move. In a disaster-avoidance or recovery situation, App A might need to move to an entirely new location. If the security policies are tied to a physical device, how can its state and policies move? Some designs overcome this with VLANs stretched across a DCI link and stretched firewall clusters, both of which should be designed with care. A technology was needed to tie the network and security services to the actual VM workload and have them move alongside the VM.

The Birth of Microservices

The era of application microservices has arrived. We now have different application components spread across the network, and more importantly, all of these components need to communicate with each other. Even though we moved from a single application per physical server to an application per VM on a hypervisor, it was still not agile enough. Microservice applications are now being installed in Linux containers, Docker being the most popular. Containers are more lightweight than a VM, spinning up in less than 300 milliseconds. Kubernetes is also gaining popularity, resulting in massively agile compute environments. So how can traditional networking keep up with this type of agility? Everything that can be virtualized is being virtualized with an abstracted software layer. We started with compute and storage, and now the missing network piece is picking up pace.

Distributed Systems & Network Virtualization

Network virtualization was the missing piece of the puzzle. Now that the network can be virtualized and put into software, it meets the agility requirements of containers and complex application tiers. The entire world of distributed systems is upon us. Everything is getting pushed into software at the edge of the network.
The complexity of the network is no longer in the physical core nodes; it's at the edges, in software. Today's network consists of two layers: an overlay layer and an underlay physical layer. The overlay is the complicated part and carries VM communications; it's entirely in software. The physical underlay is typically a leaf-and-spine design, solely focused on forwarding packets from one endpoint to another. Many vendors offer open source and proprietary solutions; VMware NSX and Cisco Nexus 1000v are among the popular choices.

VMware NSX

VMware NSX is a network and security virtualization solution that allows you to build overlay networks. The decoupling of networking services from physical assets shows the real advantage of NSX. Network virtualization with NSX offers the same API-driven, automated and flexible approach that virtualization brought to compute. It enables changing hardware without having to worry about your workload networking, which is preserved because it is decoupled from the hardware. There are also great benefits in decoupling security policy from its assets, abstracting security policy. All of these abstractions are possible because we sit on the hypervisor and can see into the VM. NSX provides the overlay, not the underlay. The physical underlay should be a leaf-and-spine design, limited to one or two ToR switches per rack. Many implement just two ToR switches; depending on port-count density you might only need one. Each ToR has a connection (or two) to each spine, offering a highly available design. Layer 2 domains should be limited as much as possible so as to minimise the size of broadcast domains. Broadcast domains should be kept to small isolated islands to minimise the blast radius should a fault occur. As a general design rule, Layer 2 should be used for what it was designed for: communication between two hosts.
Layer 3 routing protocols should be used on the underlay as much as possible. Layer 3 carries a TTL that is not present in Layer 2; the TTL field is used to prevent loops. The hypervisor, also referred to as the virtual machine manager, is a program that enables multiple operating systems to share a single host. Hypervisors are a leap forward in fully utilizing server hardware, as a single operating system per host would never fully utilise all physical hardware resources. Soft switches run in the hypervisor hosts and implement Layer 2 networking over Layer 3, using the IP transport in the middle to exchange data. VMware NSX allows you to implement virtual segments in the soft switches and, as discussed, carries MAC over IP. To support remote Layer 2 islands there is no need to stretch VLANs and connect broadcast and failure domains together. VMware NSX supports complicated application stacks in cloud environments. It has many features, including Layer 2 and Layer 3 segments, distributed VM NIC firewalls, distributed routing, load balancing, NAT, and Layer 2 and Layer 3 gateways to connect to the physical world. NSX uses a proper control plane to distribute forwarding information to the soft switches. The NSX controller cluster configures the soft switches located in the hypervisor hosts; the cluster runs a minimum of 3 nodes and a maximum of 5 for redundancy. To form the overlay between tunnel endpoints (on top of the underlay), NSX uses VXLAN, which has become the de facto standard for overlay creation. There are three modes available: multicast, the newer unicast modes, and hybrid mode. Hybrid mode uses multicast locally and does not rely on the transport network for multicast support. This offers huge benefits, as many operational teams would rather not implement multicast on core nodes. Multicast is complex. The core should be as simple as possible, concerned only with forwarding packets from A to B.
MPLS networks operate this way, and they scale to support millions of routes. VMware NSX operates with distributed routers: it looks as if all switches are part of the same router, meaning all switches share the same gateway IP and all answer for the MAC address associated with that IP. The distributed approach creates one large logical device. Every switch receives packets sent to the gateway and performs Layer 3 forwarding. One of the most powerful features of NSX is the VM NIC firewalls. These are in-kernel firewalls; no traffic crosses into user space. A drawback of the physical world is that physical firewalls are a network choke point and cannot be moved easily. Networks today need to be agile and flexible, and distributed firewalls fit that requirement. They are fully stateful and support IPv4 and IPv6.

Nexus 1000v Series

The Nexus 1000v Series is a software-based NX-OS switch that adds capabilities to vSphere 6 (and below) environments. The Nexus 1000v may be combined with other Cisco products, such as the VSG and vASA, to offer a complete network and security solution. As many organisations move to the cloud, they need intelligent and advanced network functions with a CLI that they know. The Nexus 1000v architecture is divided into two main components: a) the Virtual Ethernet Module (VEM) and b) the Virtual Supervisor Module (VSM). These components are logically positioned differently in the network. The VEM lives inside the hypervisor and executes as part of the ESXi kernel. Each VEM learns individually and builds and maintains its own MAC address table. The VSM is used to manage the VEMs. The VSM can be deployed as a highly available pair (2 for redundancy), and control communication between the VEM and the VSM can now be Layer 3; when the communication was Layer 2, it required a packet and control VLAN configuration. The Nexus 1000v can be viewed as a distributed device: the VSM controls multiple VEMs as one logical device.
The VEMs do not need to be configured independently. All configuration is performed on the VSM and automatically pushed down to the VEMs that sit in the ESXi kernel. The entire solution is integrated into VMware vCenter, which offers a single point of configuration for the Nexus switches and all the VMware elements. The entire virtualization configuration is performed with the vSphere client software, including the network configuration of the Nexus 1000v switches. One major configuration feature of the Nexus 1000v is the use of port profiles. Port profiles are configured from the VSM and define the different network policies for the VMs; they are used to configure interface settings on the VEM. When a port-profile setting changes, the change is automatically propagated to all interfaces that belong to that port profile. The interfaces may be connected to a number of VEMs dispersed around the network. There is no need to configure on an individual NIC basis. In vCenter a port profile is represented as a port group, and port groups are applied to individual VM NICs through the vCenter GUI. Port profiles are dynamic in nature and move when the VM moves. All policies defined with port profiles follow the VM throughout the network, and in addition to its policies the VM also retains its network state.

Conclusion

The terms network virtualization and decoupling go hand in hand. The ability to decouple all services from physical assets is key to a flexible and automated approach to networking. VMware NSX offers an API-driven platform for all network and security services, while existing vSphere and Cisco Nexus 1000v deployments are CLI and orchestration driven. The advantages and disadvantages of both should be weighed up not just in terms of feature parity but also deployment model, the NSX platform being a big-bang approach.
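For a concrete flavour of the port-profile model described above, a profile defined on the VSM looks roughly like the sketch below (the profile name and VLAN number are illustrative, not taken from any particular deployment):

```
port-profile type vethernet Web-VMs
  vmware port-group
  switchport mode access
  switchport access vlan 100
  no shutdown
  state enabled
```

Once enabled, the profile appears in vCenter as a port group; attaching a VM NIC to that port group applies the policy, and any later edit to the profile propagates to every attached interface.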
If you are in the process of deciding between these two solutions and want to actually try them out, Ravello Networking Labs provides an excellent platform to test VMware NSX and Cisco Nexus 1000v deployments with a couple of clicks. You can use an existing NSX or Cisco 1000v blueprint as a starting point and tailor it to your requirements, or create one from scratch. Just open a Ravello trial account, and contact Ravello to get your 'feet wet' with an existing deployment topology that you can run on your own.


How to set up and run a penetration testing (pentest) lab on AWS or Google Cloud with Kali Linux, Metasploitable and WebGoat

Author: Clarence Chio. Clarence works at Shape Security on a system that tackles malicious bot intrusion from the angle of big data analysis. Clarence has presented independent research on Machine Learning and Security at Information Security conferences in several countries, and is also the organizer of the "Data Mining for Cyber Security" meetup group in the SF Bay Area.

In this blog, I describe how you can deploy Kali Linux and run penetration testing (also called pen testing) on AWS or Google Cloud using Ravello Systems' nested virtualization technology. This 'Linux/Web Security Lab' lets you hit the ground running in a matter of minutes and start exploiting security vulnerabilities. By the way, if you haven't already seen it, this blog by SimSpace about on-demand Cyber Ranges on Ravello is very interesting as well. You've been living under a rock if you haven't noticed the high-profile security breaches that have shaken the technology industry in recent years. From huge government spying scandals to countless company database infiltrations, we have never been more aware of the need to secure the complex systems on which we so heavily rely. Security awareness is at an all-time high, but the information security profession still remains largely out of reach for most in the tech industry. What exactly do penetration testers do? How does fuzzing or reverse engineering help to make networks and systems more secure? This blog post aims to give beginners and security amateurs some hands-on experience with popular systems and tools used by security professionals to help keep the black hats out. It's difficult to embark on your ethical-hacking endeavors by trying to find vulnerabilities in an ATM. That's kind of like learning to swim by swimming across the English Channel. You want to build up some water-confidence and learn the strokes before you enter the big leagues.
This is precisely why 'deliberately vulnerable' systems such as Metasploitable (by Rapid7) and WebGoat (by OWASP) were born. By exploiting the built-in security vulnerabilities in these systems, you can get familiar with the tools used in real-world vulnerability assessments and learn how systems have been compromised in the past. You will be surprised at how many of these old vulnerabilities still exist in modern systems we use every day. If you're not sure of what you're doing, it's generally not a good idea to deliberately execute vulnerable code on your machine. Sandboxing these applications in a virtual machine (VM) is a good way to ensure that attackers don't get into your system while you're learning the ropes. However, setting up these VMs correctly and securely can be quite a bit of work: it requires procuring the necessary hardware, getting the appropriate permissions to execute these mock tests, securing the VMs so nothing leaks out into your corporate network, and much more. What if you could procure the necessary hardware on demand in a public cloud and build completely sandboxed environments that represent your corporate network topologies and system setup to learn and execute penetration-testing exercises? Public clouds, for good reasons, don't easily allow penetration testing because of the impact it can have on other customers on a shared infrastructure. You can still do some of this testing on AWS, but you have to go through an approval and setup process. Ravello's HVX nested virtualization technology implements a fully fenced L2 overlay network on top of AWS and Google Cloud, so you can set up security smart labs with multiple systems/VMs and complex networking representative of corporate environments - promiscuous mode, multiple NICs, static IPs and more.
You can build environments with multiple systems, test and run them on AWS or Google Cloud, and save them as Ravello blueprints. Ravello blueprints provide the capability to save entire environments and spin up multiple isolated copies across the globe on AWS and Google Cloud within minutes. This can be used to provision on-demand security labs for pen-testing training, sales demos and POCs.

Section I: Setting Up Your Environment

In this brief walkthrough, we will get a simple and extensible environment set up in Ravello with 3 VMs: Kali Linux, Metasploitable 2, and WebGoat 7.0 running on Ubuntu. Kali is a Linux distribution based on Debian, designed for penetration testing and vulnerability assessments. More than 600 penetration-testing tools come pre-installed with the system, and it is today's system of choice for most serious ethical hackers. Metasploitable is an intentionally vulnerable Linux VM, and WebGoat is a deliberately insecure web application server with dozens of structured lessons and exploit exercises that you can go through. After getting the lab environment set up, we will run through a couple of simple examples where we use Kali as a base for launching attacks on Metasploitable and WebGoat. By the end of this exercise, you will have successfully exploited your first Linux system and web server. To get started, first ensure that you have a Ravello account and search for the 'Linux/Web Security Lab Blueprint' published by me on the Ravello Repo. Select 'Add to Library', and proceed to the Ravello dashboard. After selecting the 'Library' → 'Blueprints' tab on the dashboard sidebar, select the blueprint you just added to your library and click the orange 'Create Application' button. This will take you to the 'Applications' section of the dashboard, where you can launch the application by publishing it to the cloud.
Publishing the application will launch these VMs in a cloud environment, made possible by Ravello's nested virtualization technology. It will take roughly 10 minutes for the VMs to launch. Once you see that all 3 VMs are running, we will be ready to enter the boxes. Using the 'Console' feature of the Ravello platform is the easiest way to get command-line or graphical access to the boxes within your web browser. You can also SSH into the boxes from your own terminal by following the instructions provided in the 'More' tab at the bottom of the dashboard's right sidebar under 'Summary'. Enter each of the boxes through the console and find out each VM's IP address (usually 10.0.0.*) either through the command line (ifconfig) or by looking at the top right-hand corner of the console page. Get it on Repo: REPO, by Ravello Systems, is a library of public blueprints shared by experts in the infrastructure community.

Section II: Exploiting Metasploitable with Armitage on Kali Linux

Let's enter the Kali Linux console, which will bring you through the boot and login sequence of the OS. You can either boot from the image or install the OS - I prefer the former because there is no need (in this case) for any state to be saved between sessions. The main tool that we will be exploring today is Armitage - 'a graphical cyber attack management tool for the Metasploit Project that visualizes targets and recommends exploits'. We will make use of exploits that Armitage recommends and see just how easy it is to exploit a vulnerable Unix system like Metasploitable. From the Kali desktop, launch a terminal window. Armitage requires PostgreSQL to be running in the background, and also requires some Metasploit state to have been initialized.
Execute the following commands to meet these requirements and launch Armitage:

$ service postgresql start
$ service metasploit start
$ service metasploit stop
$ armitage

This will bring up a window where you have to configure Armitage's connection to Metasploit. The default settings are shown in the above screenshot, and the username:password 'msf:test' will work. Allow Armitage to start Metasploit's RPC server. Once in Armitage, do a 'Quick Scan (OS detect)' of the Metasploitable VM by entering its IP address into the dialog box. As you might guess, the Quick Scan function of Armitage allows you to scan a range of IP addresses and discover all machines in that range by performing an 'nmap' scan. Once the scan is complete, a Linux machine icon will appear in the canvas area of the Armitage window. The scan has detected that the machine is running Linux, and Armitage has further determined a whole range of attacks that the machine may be vulnerable to. Let's try to launch a Samba "username map script" command execution attack on the machine. According to Metasploit's exploit database, 'This module exploits a command execution vulnerability in Samba versions 3.0.20 through 3.0.25rc3 when using the non-default "username map script" configuration option. By specifying a username containing shell meta characters, attackers can execute arbitrary commands. No authentication is needed to exploit this vulnerability since this option is used to map usernames prior to authentication!' The default options will work just fine. After the attack has been launched, you will know that it was successful when you see that the original icon has changed. Congratulations, you have exploited your very first Linux box. Right-clicking the icon will reveal a whole range of new interactions that you can now have with the Metasploitable VM - without ever having to enter a username or password at all!
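To see why that "username map script" option is dangerous, here is a harmless local sketch of the same injection pattern. The map_user function below is purely illustrative and has nothing to do with Samba's actual code; it just shows what happens when a username is interpolated, unescaped, into a shell command.

```shell
#!/bin/sh
# A service that naively builds a shell command from a username.
map_user() {
  # DANGEROUS pattern: unescaped interpolation into sh -c
  sh -c "echo mapping user: $1"
}

map_user "alice"         # benign: prints "mapping user: alice"
map_user '`id -un`'      # shell metacharacters: the injected command runs
```

Samba's vulnerable option did essentially this with the username supplied at session setup, which is why no authentication was required to exploit it.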
Select the 'Interact' option as shown in the below screenshot. This brings up a console which allows you to execute arbitrary code. You can do all sorts of things, like echoing a friendly statement to /tmp/pwn on the box. You can verify your action by switching to the Metasploitable VM console and checking that the changes you made are indeed reflected there. Of course, this just scratches the surface of what you can do with Armitage and the 600+ other penetration-testing tools on Kali. Spend time exploring the tools and understanding what they do under the surface - it will be worth it.

Section III: Exploiting WebGoat

Next, we will explore WebGoat's extensive range of web-application vulnerability tutorials. Enter the WebGoat console and execute the WebGoat jar file in the background to start the server:

$ nohup java -jar /opt/app/webgoat-container-7.0-SNAPSHOT-war-exec.jar &

This command executes the WebGoat Java server in the background, ignoring the HUP (hangup) signal, so the server will continue to run even if the shell is disconnected. The server will take a couple of minutes to initialize and start up. Next, switch to the Kali desktop and navigate to the WebGoat URL. In my case, it is http://10.0.0.11:8080/WebGoat, since my WebGoat VM has 10.0.0.11 as its IP address. Log in with any of the credentials presented to you on the login screen, then navigate to the 'Shopping Cart Concurrency Flaw' exercise. This is one of the simplest and most elegant exploits of an e-commerce web application, and I assure you that variants of this exploit exist in some websites out there. The exercise exploits the web application's flawed shopping-cart logic, which allows a user to purchase an expensive item for the price of a less expensive one. As you may have guessed from the title of the exercise, you will need two browser tabs open on this page for this to work. Then, follow this sequence of steps carefully.
In one tab, purchase a low-priced item by updating its 'Quantity' to 1, updating the cart, then selecting 'Purchase'. In the other tab, update the 'Quantity' of the highest-priced item to 1, then update the cart - do not select 'Purchase'. Return to the first tab where you were buying the low-priced item and complete the purchase. You have purchased the high-priced item but paid the low price for it. Many of the exercises in WebGoat demonstrate real web-application vulnerabilities that OWASP has identified as the most common in modern web applications. If you want a complete and hands-on education in web application security, there is no better place to begin.

Section IV: Fin

If you went through the above sections, you have successfully exploited a Linux machine and tricked a web application with just a few clicks. However, don't be misled by the simplicity of the above exercises! Penetration testing and vulnerability assessments are often extremely complex, tedious, and sometimes discouraging. Playing with toy systems that are intentionally insecure will help you get familiar with the tools and understand the reasons why insecure systems are insecure. It will help you build applications with security in mind, and become more conscious of the dangers of careless software development. When you have spent some time playing in the lab, I strongly encourage you to use it to build an environment that allows you to perform vulnerability assessments on your own systems. Ravello's flexibility allows you to create a close replica of your system and network infrastructure within a sandbox that can be repeatedly spun up and destroyed with a few clicks. Lastly, keep in mind that breaking into computer systems is illegal. Most system administrators, government agencies, and companies don't have a great sense of humor, and you don't have to do any real damage to get into a considerable amount of trouble.
Just trying to break into a system is a serious offence in many jurisdictions.
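The shopping-cart concurrency flaw from Section III is a classic check-then-act race. The sketch below reproduces the same pattern locally with a plain file standing in for the cart - no web application involved, and the file contents and item names are made up.

```shell
#!/bin/sh
# The cart is a file holding "<item> <price>". The app reads the price,
# then charges it later - and the cart can change in between, just like
# the two browser tabs in the WebGoat exercise.
CART=$(mktemp)
echo "cheap 10" > "$CART"

price=$(awk '{print $2}' "$CART")   # tab 1: price read at checkout (10)
echo "expensive 1000" > "$CART"     # tab 2: cart swapped before purchase
item=$(awk '{print $1}' "$CART")    # tab 1: purchase completes

echo "charged $price for $item"     # charged 10 for expensive
rm -f "$CART"
```

The fix, as with any time-of-check/time-of-use bug, is to make the price check and the charge a single atomic step on the server side.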


Virtual training infrastructure: The backbone of hands on labs for ILT & self-paced learning

Virtual training infrastructure is essential for ISVs, for training providers and for enterprises. It is key that this infrastructure supports the nature of the training use case: hands-on labs for each student should be easily configured, identical and isolated instances; you should have the ability to spin them up in any geography, and then quickly and easily tear them down. This post demonstrates how three different companies are using Ravello as their virtual training infrastructure to run ILT classes, self-paced trainings and more.

What characterizes a "good" virtual training infrastructure?

There are several key components that should be "deal breakers" when you're searching for virtual training infrastructure:

Technological fidelity: your training infrastructure should accommodate any and all of the features you have in the environment. If it requires certain networking configurations, you should not need to compromise the quality and fidelity of the environment just to "make the training work". Training is supposed to expose all the product capabilities and features, so there is no wiggle room here.

Repeatable deployments: the trainer shouldn't spend valuable time setting up the same environment again and again every time a class is scheduled. Your training infrastructure should provide a quick and easy way to save your environment configuration and run it whenever it is needed.

On-demand usage: the nature of the training use case is that it's tough to anticipate the number of environments required in a given class (or timeframe, for self-paced training). Your virtual training infrastructure should let you avoid capacity planning and enable you to pay only for capacity that is actually in use (so you don't buy excess capacity "just in case").

Accessibility: the virtual training infrastructure should be accessible anywhere in the world.
Location should not play a factor in access to or performance of the training lab.

Flexibility: your training infrastructure should "work with you". If all you need is an easy-to-use portal to share isolated environments with students, your infrastructure should accommodate that. If you require advanced integration and customization, the infrastructure should allow as much complexity as your use case requires.

Virtual training infrastructure in action in the real world

Now that we have covered the must-haves of your training infrastructure, it might be useful to see some real-life examples of companies using Ravello for their virtual training.

Global partner training with hands-on labs

When Red Hat's Global Partner Enablement (GPE) team learned about Ravello, they were looking for training infrastructure that would allow them to: expose partners to all the features of their technology and products; scale up and down without the need for capacity planning; deliver best-quality training classes around the globe, regardless of the location of the partner training class; and deliver high-volume training very quickly to keep up with the development of new products and features - once a training class was designed, to be able to repeat it quickly rather than spend time setting it up again. Using Ravello, the Red Hat GPE team has full blueprints of multi-node OpenStack, RHEV, RHEL and OpenShift which they can spin up as required. With Ravello's nested hypervisor, Red Hat can utilize AWS's capacity and run virtual training labs in any geography, providing on-demand isolated environments for all students without the need to allocate capacity in advance.

Instructor-led end-user training with hands-on labs

ROI Training, a leading provider of technical, financial and management training for enterprises around the globe, customizes training classes to suit the needs of their customer organizations.
Looking for a solution that would free them of hardware investments, move them to a usage-based cost structure and give students self-service access to virtual labs, ROI Training chose Ravello. With Ravello, ROI Training creates on-demand lab environments in the public cloud region that provides the best learning experience for students, without investing in hardware or shipping computers. ROI Training also uses Ravello's blueprints to create a portfolio of lab-environment templates for each course, which can easily be used to spin up multiple copies of an environment on demand. Finally, the usage-based model allows ROI Training to enjoy cloud economics and pay only for the environments that are running, for the resources consumed.

Fully integrated self-paced online security training

Blackfin Security is a leading provider of a full suite of online security training, bringing a hands-on approach to security training with a self-paced, subscription-based security training portal as well as onsite or on-demand threat simulation events. With two core requirements - zero changes to the application environments when deploying labs for students, and a high standard for a consistent and coherent user experience for self-paced trainings - Blackfin found they could fulfill their virtual training infrastructure needs by using Ravello to run training labs on AWS or Google Cloud. Blackfin uses Ravello's REST API to create a new and enhanced self-paced online security training experience for its students. Ravello enables Blackfin to run their VMware-based multi-VM applications on Google Cloud and AWS without any modifications to the VMs or the networking configuration.

Video: https://www.youtube.com/watch?v=O8r4W8zwqjg

I hope these three examples illustrated how Ravello can meet any virtual training infrastructure requirements that you may encounter in your use case.
You can start your trial here and build out your training lab for ILT, self-paced training, hands-on labs and more. Let us know if you have any questions.


NFV Orchestration: Setup NFV Orchestration on AWS and Google Cloud (part 4 of 4 post series)

Authors: Jakub Pavlik and Ondrej Smola are engineers at tcpcloud – a leading private cloud builder. Matt Conran is an independent network architect and consultant, and blogs at network-insight.net. This article is part 4 of a 4-post series detailing NFV orchestration using public cloud NFVI. This post details setting up a fully functioning NFV orchestration with firewall and load-balancer service chaining, and comes with a fully functional Ravello blueprint of the Juniper Contrail service-chaining topology that you can access and try out. The NFV topology in this Ravello blueprint presents firewalling and load balancing Virtual Network Functions (VNFs). Three use case scenarios, launched by OpenStack Heat templates, demonstrate FWaaS and LBaaS:
- PFSense – a free, open source, FreeBSD-based firewall, router and unified threat management platform with load balancing and multi-WAN support.
- FortiGate-VM – a full-featured FortiGate packaged as a virtual appliance, ideal for monitoring and enforcing policy on virtual traffic on leading virtualization, cloud and SDN platforms, including VMware vSphere, Hyper-V, Xen, KVM, and Amazon Web Services (AWS).
- Neutron Agent-HAProxy – a free, very fast and reliable solution offering high availability, load balancing, and proxying for TCP and HTTP-based applications.
Architecture Components The following diagram shows the logical architecture of this blueprint. OpenStack together with OpenContrail provides the NFV infrastructure. The virtual resources are orchestrated through Heat, and different tools are then used for VNF management. Components:
- NFV – service chaining in OpenContrail through VMs orchestrated by OpenStack
- VNF – orchestration of VMs containing FWaaS (FortiGate, PFSense) and LBaaS (Neutron plugin HAProxy)
The NFV topology consists of 5 nodes. The management node is used for public IP access and is accessible via SSH.
It is also used as a jump host to connect to all other nodes in the blueprint. The controller node is the brains of the operation and is where OpenStack and OpenContrail are installed. Finally, there are three compute nodes named Compute 1, Compute 2 and Compute 3, with Nova Compute and the OpenContrail vRouter agent installed; this is where the data plane forwarding is carried out. The diagram below displays the 5 components used in the topology. All nodes apart from the management node have 8 CPUs, 16GB of RAM and 64GB of total storage. The management node has 4 CPUs, 4GB of RAM and 32GB of total storage. The intelligence runs in the Controller, which has a central view of the network. It provides route reflectors for the OpenContrail vRouter agents and configures them to initiate tunnels for endpoint connectivity. OpenContrail transport is based on the well-known protocols MPLS over UDP, MPLS over GRE, and VXLAN. The SDN controller can program the correct next-hop information to direct traffic to a variety of devices by manipulating labels and next-hop information. Previous methods for service chaining include VLANs and PBR, which are cumbersome to manage and troubleshoot. Traditional methods may also require some kind of tunneling if you are service chaining over multiple Layer 3 hops. The only way to provide good service chaining capabilities is with a central SDN controller; the idea of having a central viewpoint of the network is proving to be a valuable use case for SDN. Internal communication between nodes is done over the 10.0.0.0/24 network. Every node has one NIC on the 10.0.0.0/24 network, and the Management and Controller nodes have an additional NIC for external connectivity. Installation of OpenStack with OpenContrail For the installation of Juniper Contrail we used the official Juniper Contrail Getting Started Guide. The package name and version is contrail-install-packages_2.21-102~ubuntu-14-04juno_all.deb. This installs both OpenStack and OpenContrail.
From the diagram below you can see that the virtual network has 5 instances, 9 interfaces and 4 VNs for testing. The OpenContrail dashboard is the first place to view a summary of the virtual network. Login information for every node: user root, password openstack; also user ubuntu, password ravelloCloud. Login for the OpenStack and OpenContrail dashboards: user admin, password secret123. The OpenStack dashboard URL depends on the Ravello public IP of the controller node, but is always x.x.x.x/horizon. For example: http://controller-nfvblueprint-eaxd3p7s.srv.ravcloud.com/horizon/ The OpenContrail dashboard is at the same URL but on port 8143. For example: https://controller-nfvblueprint-eaxd3p7s.srv.ravcloud.com:8143/login NOTE: For a properly working VNC console in OpenStack, you should change the “novncproxy_base_url” line in /etc/nova/nova.conf on every compute node to the URL of your controller. Example: novncproxy_base_url = http://controller-nfvblueprint-eaxd3p7s.srv.ravcloud.com:5999/vnc_auto.html The two services we will be testing are load balancing and firewalling service chaining. Load balancing is provided by the LBaaS agent, and firewalling is based on FortiGate and PFSense. Within OpenStack we create one external network called “INET2”, which can be accessed from the outside (the Management and Compute nodes in Ravello). The “INET2” network has a floating IP pool of 172.0.0.0/24; the pool is used to simulate public networks. The simple gateway for this network is on Compute2. All virtual instances in OpenStack can be accessed from the OpenStack dashboard, through the console in the instance detail view. OpenStack Heat Templates Heat is the main project of the OpenStack orchestration program. It allows users to describe deployments of complex cloud applications in text files called templates. These templates are then parsed and executed by the Heat engine. OpenStack Heat templates are used to demonstrate load balancing and firewalling inside of OpenStack.
These templates are located on the Controller node in the /root/heat/ directory. Every template has two parts – an environment with specific variables, and the template itself. They are located in: /root/heat/env/ /root/heat/templates/ We have 3 heat templates to demonstrate the NFV functions:
- LBaaS
- PFSense firewall – open source firewall
- FortiGate-VM firewall – 15-day trial version
You can choose from two main use case scenarios: LBaaS Use Case Scenario To create the heat stack with the LBaaS function, use the command below: heat stack-create -f heat/templates/lbaas_template.hot -e heat/env/lbaas_env.env lbaas This command creates 2 web servers and the LBaaS service instances. The load balancer is configured with a VIP and a floating IP which can be accessed from “public” (the Management and Compute nodes in Ravello). Firewall (FWaaS) Use Case Scenarios To create the heat stack for the PFSense function, use the command below: heat stack-create -f heat/templates/fwaas_mnmg_template.hot -e heat/env/fwaas_pfsense_env.env pfsense To create the heat stack for the FortiGate function, use the command below: heat stack-create -f heat/templates/fwaas_mnmg_template.hot -e heat/env/fwaas_fortios_contrail.env fortios This creates a service instance and one Ubuntu instance for testing. Description of the Load Balancing Use Case The Heat templates used for the load balancer profile create a number of elements including the pool, members and health monitoring. They instruct OpenContrail to create service instances for load balancing. This is done through the OpenStack Neutron LBaaS API. More information can be found here. The diagram below displays the load balancer pools, members and the monitoring type. The load balancing pool is created on a private subnet, 10.10.10.0/24. A VIP is assigned, allocated from the virtual network named public network; the subnet for this network is 10.10.20.0/24. The load balancer has 2 ports to the private network and 1 port to the public network.
There is also a floating IP assigned to the VIP, used for reachability from outside of OpenStack/OpenContrail. The diagram below summarises the network topology for the virtual network. For testing purposes, the load balancing heat templates create 2 web instances in the private network. There is also a router connected to this private network; this is because after boot the web instances will attempt to download, install and configure the apache2 web service. The diagram below displays the various parameters of the created instances. Accessing the web servers' VIP address initiates classic round-robin load balancing. NOTE: Sometimes the web instances do not install or configure apache2. This is because the virtual simple gateway was not automatically created on Compute2. In this case, just create the gateway manually with the python command located in /usr/local/sbin/startgw.sh on Compute2. After that you can delete the LBaaS heat stack and create it again, or just set up apache2 manually. CURL is used to transfer data and test the load balancing feature. The diagram below displays the command-line CURL run against the VIP address and the round-robin results from instances 1 and 2. Description of FWaaS/NAT We have prepared one heat template for a firewall service instance with NAT, and two heat environments for this template: one for the PFSense firewall and a second for the FortiGate firewall. PFSense Information about this firewall can be found here. Login information: user admin, password pfsense. FortiGate Information about this firewall can be found here. Login information: user admin, password fortigate. NOTE: Compute2 has to have a default gateway for testing; see the LBaaS note above. FortiGate provisioning This action must be taken after the FortiGate VM has been successfully deployed by the Heat template. OpenStack runs an instance called MNMG01, which is used for configuration of the FortiGate service instance. The configuration can be done with two python scripts.
fortios_intf.py fortios_nat.py fortios_intf.py – this script configures the interfaces of the firewall. fortios_nat.py – this script configures the firewall NAT rules. Running the scripts: python fortios_intf.py python fortios_nat.py NOTE: The configuration information is stored in .txt files: fortios_intf.txt fortios_nat.txt Network Topology The firewall service instance is connected to 3 networks: INET2 as the external network, private_net for the testing instances, and svc-vn-mgmt for the management instance. The topology is the same for both examples (PFSense and FortiGate). In private_net there is one virtual instance for testing connectivity to the external network. For successful service chaining, Heat also creates a policy in Contrail and assigns it to the networks. Contrail is used to orchestrate the service chaining. Configuration and testing of PFSense By default, the PFSense firewall is configured to NAT after the heat stack is started; as a result, no configuration is needed for this function. The PFSense image was preconfigured with DHCP services on every interface, and there is an outbound policy for NAT. After we start the heat stack with PFSense, service chaining is already functional: the testing instance has a default gateway pointing to Contrail, and Contrail redirects it to PFSense. There is also a NAT session visible in PFSense; in the shell, run the command: pfctl -s state Configuration and testing of FortiGate FortiGate can be configured from the management instance. This instance has the floating IP 172.0.0.5, the login is root with password openstack, and it can also be accessed through the VNC console from the OpenStack dashboard. In this instance there are 2 python scripts: one for the configuration of interfaces (fortios_intf.py) and a second for the configuration of the firewall NAT policy (fortios_nat.py). NOTE: If the FortiGate firewall has an IP other than 10.250.1.252, the information in /root/.ssh/config has to be changed.
python fortios_intf.py python fortios_nat.py After running these two scripts, the testing instance has connectivity to the external network. Interested in trying this setup with one click? Just open a Ravello trial account, add this NFV blueprint to your account, and you are ready to play with this NFV topology, with Contrail orchestrating and service chaining the load-balancer and firewall VNFs.
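The round-robin behaviour exercised by the CURL test above can be sketched in a few lines of Python. This is an illustrative model of what the LBaaS VIP does with the two web instances, not Ravello or Contrail code, and the backend names are made up:

```python
from itertools import cycle

def make_vip(backends):
    """Return a callable that picks the next backend, round-robin style.

    Models how the LBaaS VIP hands each incoming request to pool
    members in turn -- the behaviour seen when curling the VIP address.
    """
    pool = cycle(backends)
    return lambda: next(pool)

vip = make_vip(["web-instance-1", "web-instance-2"])
# Four consecutive "requests" alternate between the two web instances.
print([vip() for _ in range(4)])
```

Running it prints the backends in strict alternation, which is the same pattern the CURL output shows for instances 1 and 2.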


Installing and configuring Trend Micro Deep Security, vSphere and NSX environment on AWS and Google Cloud

Trend Micro Deep Security is a security suite providing antivirus, intrusion prevention, firewalling, URL filtering and file integrity monitoring for both virtual and physical systems. For virtualized systems, Deep Security provides both agent-based and agentless protection, offering a single management solution for virtual desktops, servers and physical systems. In addition, Deep Security can integrate with VMware's NSX, providing automated network firewalling and security options whenever Deep Security detects malicious activity on your systems. In this blog post, we'll show how to set up a lab environment for Trend Micro Deep Security using AWS and Google Cloud capacity, covering both agentless and agent-based protection and the integration with VMware vSphere. If you are a reseller and/or system integrator, you can build Deep Security labs like these on the public cloud and use them for your sales demos, proofs of concept (POCs) and training environments. You pay hourly based on the size of your lab, and only when you are using it. You can set up an environment with the Trend Micro Deep Security appliance, other servers and client systems within the Ravello Systems interface, test and run it on AWS or GCE, and then save it as your demo/POC/training blueprint. Then, when you need to spin up multiple Trend Micro Deep Security environments across the globe for your team, you can spin them up on AWS or Google Cloud from the saved blueprint within minutes. Preparing your environment For this blog, we've prepared the following environment in Ravello Systems:
- VMware Horizon View connection server (optional)
- Trend Micro Deep Security Manager running on Windows 2012R2
- Domain controller
- 2 ESXi host servers
- Openfiler storage server (optional)
- vCenter server running on Windows 2012R2
Since we'll mainly focus on the setup of Deep Security, we won't go into much detail on the vSphere setup.
Click on the link for a brief overview of how to configure and deploy VMware vSphere in Ravello. In addition, here's a detailed guide for vCenter. Installation of Deep Security Manager The Windows host is added to the testlab.local domain as dsm.testlab.local. After this, the latest Windows version of Deep Security Manager is downloaded from downloadcenter.trendmicro.com. Choose your installation language and click OK. The pre-installation check notices that the VM is not configured with enough resources to run a production environment, but as this is for demonstration purposes this shouldn't be a problem. Click Next. Read the license agreement and click the accept radio button if you agree. Click Next. The upgrade verification runs to check if there is a previous version installed; in this demo environment we are starting with a new installation. Change the location accordingly. Click Next. Fill in the required external database hostnames, database instance and so on. For this demo I'm using the embedded installation. Note: Do not choose the embedded database for a production environment, as the installer will also tell you... Enter the activation code. For this lab we'll be using a trial license, which can be acquired through this link. Hostnames, IP addresses and ports: change these only if your environment already uses the required ports. Click Next. Configure your administrator account and click Next. In this step, we'll configure our security updates. This creates a scheduled task for security updates (update your procedures to reflect that these are scheduled tasks). For this demo environment we do not use a proxy server to connect to the Trend Micro site for the security updates. Next, we'll configure the same kind of scheduled task for our software updates. Enable a relay agent for distribution of definitions and updates to the protected agents and virtual appliances in your lab environment.
In this case we'll install the relay on the management server, but in a production environment it's recommended to install it on one or more separate servers. Since this is a demo environment, we'll disable Smart Feedback. Before starting the installation, you are shown a summary of all the installation settings. Confirm that everything is configured correctly and select “Install”. Once the installation is finished, allow the DSM console to open and click Finish. After logging in to the Deep Security Manager, we should see the following dashboard: Deep Security Manager Configuration First we'll add the vCenter we installed earlier for this lab. Open the “Computers” tab, then right-click “Computers” (in the leftmost menu) and select “Add VMware vCenter”. Enter the configuration details of your vCenter server, then click Next. Accept the vCenter server SSL certificate and select Finish. Now that you've configured the vCenter connection in Deep Security, it's time to deploy the virtual appliances used for agentless protection. Since we are using vSphere 6 with Trend Micro Deep Security 9.6, we will not deploy the filter driver. This is something to watch out for if you are reading other blog posts or if you are familiar with older versions of Deep Security and vSphere. First, we'll need to import the vSphere security appliance. Download the 9.5 virtual appliance from this link. Once the download has completed, open “Administration”, then drill down to Updates -> Software -> Local. Import the file you just downloaded. After importing the package, open your vCenter in the Computers view, then drill down to “Hosts and Clusters”. Right-click the host you want to protect and select “Actions -> Deploy Agentless Security”. Enter any name for the appliance and select the deployment details. Next, enter your network configuration.
If you are using DHCP you can leave that enabled; for this lab we're using static address assignment, so we'll configure the appliance with the correct network settings. Provision the appliance as either thick or thin (your preference), and wait for the deployment to finish. Once the deployment finishes, you can continue with the activation of the virtual appliance. Afterwards, the appliance should show up in the list of computers, and you should be able to activate virtual machines without installing the agent. Agent-based protection First, we'll have to add our Active Directory to the Deep Security Manager. While you can also protect systems without Active Directory, it makes the deployment significantly easier. Go back to “Computers”, then right-click “Computers” in the left menu. Select “Add Directory” and enter your AD details. Next, we'll create a scheduled task to synchronize the directory. Then we'll have to import the agent. Open “Administration”, then drill down to Updates -> Software -> Download Center. Search for “Windows”, then select the latest agent version, right-click and select “Import”. Once the import is done, select “Support” in the top right part of the management console, then select “Deployment Scripts”. Select your platform and copy the script. After adding our Active Directory, we should be able to see the machines joined to the domain. Verify that you can see your machines by opening the Computers tab and browsing through your list of computers. Log in to the machine you wish to protect and run the script, which will install the agent. Normally in a production environment you'd either deploy the agent through a management tool or preinstall it in the image, but for now manual installation will suffice. After the agent has been installed, go back to the Deep Security Manager and open the Computers view. Right-click one of the machines you wish to protect, and select Actions -> Activate/Reactivate.
After a minute or so, the status of your machine should change to “managed (Online)” and your virtual machine will be protected by Trend Micro Deep Security. By opening the details of a protected computer (or creating a policy) you can enable features such as anti-malware, intrusion prevention, firewalling or one of the other security products that are integrated in Deep Security. With this setup, you should be ready to start testing the product and its extensive set of options to protect your environment.


OPNFV Testing on Cloud

Author: Brian Castelli, a software developer with Spirent creating test methodologies for today's networks. His current focus is on SDN and NFV. The OPNFV project is dedicated to delivering a standard reference architecture for the deployment of carrier-grade Network Function Virtualization (NFV) environments. Testing is critical to the success of the project and to the success of real-world deployments, as evidenced by the many test-related sub-projects of OPNFV. One of those sub-projects, VSPERF, is dedicated to benchmarking one of the key NFV components: the virtual switch. The VSPERF community has developed a test harness that is integrated with several test tools. When deployed, virtual switches can be tested in a stand-alone, bare-metal environment. Standard benchmarking tests, such as RFC2544, are supported today, with more tests in the pipeline. Running VSPERF for optimum performance and consistency requires dedicated hardware. This is acceptable for running the tests themselves, but it increases the cost of test development. Developers need a lower-cost environment where they can rapidly create and prove the functionality of tests that can then be moved to hardware for final testing. The hardware requirement also poses a problem for product marketing and sales: those teams need a way to demo VSPERF test capabilities to potential customers without lugging around hardware. The Ravello Systems environment presents a solution to the problems faced by these two groups. By virtualizing the test environment and taking advantage of Ravello's Blueprint support, we were able to:
- Virtualize the entire VSPERF test environment
- Create a blueprint to enable rapid deployment
- Create multiple, low-cost, access-anywhere development environments for engineers in various geographic locations
- Create on-demand demo environments for customer visits and trade shows
- Only pay for what we use
We started with virtual versions of our standard products.
A License Server is required to handle licensing of test ports. A Lab Server is required to handle REST API communication from the VSPERF test harness. STCv provides our virtualized test ports; in this case, we used a single STCv instance with two test ports. This configuration gives our virtual test ports the best performance and consistency. The 10.0.0.0 subnet was created for management; 11.0.0.0 and 12.0.0.0 were created for dataplane traffic. Next we instantiated the VSPERF test harness host, connecting it to the appropriate networks. All along the way, Ravello's interface gave us the options we needed to configure:
- DNS names. Each node was given a fully-qualified domain name that remained constant even though the underlying IP addresses might change from one power-up to the next. This gave us the ability to script and configure without worrying about changes.
- CPU, memory, storage. Each node could have as many or as few resources as necessary. How many times have we been frustrated in the lab to find out that we need more cores or a larger hard drive? Here, that is a configuration change instead of a purchase order.
- Publish optimized for cost or performance. We can minimize cost, or publish for better performance.
- Timeout. By default, applications time out after a configurable period. This saves us from unnecessarily racking up charges by accidentally running over the weekend. Of course, Ravello also supports “forever” operation for nodes that we really do want to run all weekend.
Overall, Ravello Systems offers good value and flexibility for the needs described here. Brian's current role is to support the development of NFV test methodologies and to support Spirent's participation in the OPNFV project.
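As a side note on the RFC2544 benchmarking mentioned above: the theoretical ceiling such a throughput test searches for follows from simple Ethernet framing arithmetic, since every frame carries 8 bytes of preamble and a 12-byte inter-frame gap on the wire. A quick sketch of that math (generic Ethernet arithmetic, not VSPERF code):

```python
def max_frame_rate(link_bps, frame_bytes):
    """Theoretical maximum frames per second for an Ethernet link:
    every frame occupies frame_bytes plus 20 bytes of per-frame
    overhead (preamble + inter-frame gap) on the wire."""
    return int(link_bps // ((frame_bytes + 20) * 8))

# 10GbE with 64-byte frames: the classic ~14.88 Mpps line-rate figure.
print(max_frame_rate(10_000_000_000, 64))  # 14880952
```

An RFC2544 throughput test binary-searches for the highest offered load at which no frames are lost; this ceiling is the upper bound of that search for a given frame size.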


How To Install And Run Windows 8 From ISO on AWS EC2

Although AWS does not natively allow you to install your own Win7, Win8 or WinXP as an AMI by attaching an ISO, it's fairly easy to do this using Ravello's nested virtualization. This is particularly useful for client testing on AWS. This step-by-step guide focuses on how to run Windows 8 on AWS. You can also refer to our other Windows guides here. Steps:
1. Create an account and log in to Ravello, then click “Create Application” to create a new application in Ravello
2. Drag an empty VM from the library onto the Ravello canvas
3. Click on “Import VM” and follow the prompts to upload your Win8 ISO
4. Click on “Publish”, choose Performance Optimized, and select an AWS region
5. After you publish, click on the console button to complete your installation in the console
6. Save your image to the library so that you can skip these steps when you want to re-use it later
Now you can also spin up a farm with hundreds of Windows clients using a single API call. Performance tuning tips:
- Give your empty VM at least 2 vCPUs
- Install VMware Tools to eliminate any mouse sync issues in the console
- Use paravirtualized devices such as vmxnet3 for networking and PVSCSI for disks
Screenshots: This is a technology blog. If you want to use Ravello to run Windows, you must comply with Microsoft's licensing policies and requirements. Please consult with your Microsoft representative.
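On the "single API call" point above: farms like this are driven through Ravello's REST API. The sketch below only illustrates the idea of cloning many client VMs from one saved blueprint; the base URL, endpoint path and field names are placeholders I made up for illustration, not Ravello's documented API:

```python
# Illustrative sketch only: BASE_URL and the endpoint path below are
# hypothetical placeholders, not Ravello's documented API.
import json

BASE_URL = "https://cloud.example.com/api"

def farm_requests(blueprint_id, count):
    """Build one create-application request per Windows client,
    all cloned from the same saved blueprint."""
    return [
        {
            "method": "POST",
            "url": f"{BASE_URL}/applications",
            "body": json.dumps(
                {"baseBlueprintId": blueprint_id,
                 "name": f"win8-client-{i:03d}"}
            ),
        }
        for i in range(1, count + 1)
    ]

reqs = farm_requests(42, 3)
print([json.loads(r["body"])["name"] for r in reqs])
```

The point of saving your installed image as a blueprint first is exactly this: every clone starts from the finished installation instead of repeating the ISO install.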


NFV Orchestration: Networking Automation using Juniper Contrail (part 3 of 4 post series)

Author: Matt Conran, a Network Architect based out of Ireland and a prolific blogger at Network Insight. In his spare time he writes on topics ranging from SDN, OpenFlow, NFV, OpenStack, Cloud, Automation and Programming. This article is part 3 of a 4-post series detailing NFV orchestration using public cloud NFVI. This post looks into how service orchestration using Juniper Contrail can help with multi-tenancy and workload mobility, and increase service velocity through NFV orchestration and service chaining. Juniper Contrail for network automation Contrail uses both an SDN controller and vRouter instances to implement VNFs. Juniper views the entire platform as a cloud network automation solution, providing you with a virtual network. Virtual networks reside on top of physically interconnected devices, with much of the intelligence pushed to the edge of the network through tunnel creation. The role of the network is now closer to user groups. This is the essence of cloud-enabling multi-tenancy, which couldn't be done properly with VLANs and other traditional networking mechanisms. Contrail exposes an API to orchestration platforms to receive service creation commands and provision new network services. From these commands it spins up virtual machines for the required network functions, for example NAT or IPS. Juniper employs standardised protocols, BGP and MPLS/VPN: extremely robust and mature implementations. Why reinvent the wheel when there are proven technologies that work? Supporting both virtual and physical resources, Contrail also leverages the open source cloud orchestration platform OpenStack and acts as a plugin for OpenStack's Neutron project. OpenStack and Contrail are fully integrated: when you create a network in Contrail it shows up in OpenStack. Contrail has also been extended to use the OpenStack Heat infrastructure to deploy the networking constructs.
Benefits of Juniper's Contrail Network Abstraction Juniper's Contrail allows you to consume the network in an abstract way. What this means is that you can specify the networking requirement to the orchestrator in a simple manner. The abstract definitions are then handed to the controller. The controller acts as a compiler, taking these abstract definitions and converting them into low-level constructs. Low-level constructs could include the routing instances or ACLs required to implement the topology you specify in the orchestration system. The vRouter implements the distributed forwarding plane and is installed in the hypervisor as a kernel module on every x86 compute node. It extends the IP network into a software layer. The reason the vRouter is installed on all x86 compute nodes is that VMs can be spun up on any x86 compute node; if VM-A gets spun up on compute node A, we have the forwarding capability on that node. It does not augment the Linux bridge or OVS; it is a complete replacement. The intelligence of the network now sits with the controller, which programs the local vRouter kernel modules. The fabric of the network, be it a leaf-and-spine or some other physical architecture, only needs to provide end-to-end IP connectivity between the endpoints. It doesn't need to carry out any intelligence, policy or service decision making; all that is taken care of by the Contrail controller, which pushes rules down to the vRouters sitting at the edge of the network. Now that the service is virtualized, it can be easily scaled with additional VMs as traffic volume grows. Elastic and dynamic networks have the capability to offer on-demand network services. For example, say you have a requirement to restrict access to certain sites during working hours. NFV enables you to order a firewall service via a service catalogue. The firewall gets spun up and properly service chained between the networks.
Once you no longer require the firewall service, it is deactivated immediately and the firewall instance is spun down. Any resources utilized by the firewall instance are released. The entire process enables elastic, on-demand service insertion. Service insertion is no longer tied to physical appliance deployment, which in the past severely restricted product and service innovation. Service providers can try out new services on demand, reducing time to market. Service chaining increases service rollout velocity For example, we have an application stack with multiple tiers: the front end implements web functionality, the middle tier implements caching functionality, and the back end serves as the database tier. You require 3 networks for the tiers, and the VMs in each of them implement this functionality. You attach a simple security policy, so that only HTTP is permitted between the front end and the caching tier, and traffic gets scrubbed by the virtual firewall before being sent to the database tier. This requirement is entered into the orchestration system, and the compute orchestration system (OpenStack Nova) launches the VMs (matched per tier) on the corresponding x86 compute nodes. For VMs that are in the same network but on different hosts, the network is extended by the vRouters by establishing a VPN tunnel; a mesh of tunnels can be created to whatever hosts are needed. The vRouter creates a routing instance on each host for each network and enforces the security policies. All the security policies are implemented by the local vRouters sitting in the kernel. Security policies assigned to tenants are no longer configured in your physical network; there is no need for the box-by-box mentality, as policy is contained in the vRouter, which is distributed throughout the network. Nothing in the physical environment needs to be changed. The controller programs the flows.
For example: if traffic goes from VM-A to VM-B, send the packet to the virtual load balancer and then to the virtual firewall. All vRouters are then programmed to look for this match and, when it is met, forward to the correct next hop for additional servicing. This is the essence of Contrail service chaining: the ability to specify an ordered list of segments you would like the traffic to traverse. The controller and the vRouter take care of making sure the stream of traffic follows the appropriate chain. For example, send HTTP traffic through a firewall and a load balancer, but send telnet traffic through just the firewall.
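The match-and-forward behaviour described above can be sketched in a few lines. This is a hedged illustration of the concept, not the Contrail API; the chain contents and traffic types are taken from the example in the text, and all names are illustrative:

```python
# Ordered service chains per traffic type, mirroring the example above:
# HTTP goes through a firewall and then a load balancer; telnet goes
# through just the firewall. Names are illustrative, not a real API.
CHAINS = {
    "http": ["firewall", "load-balancer"],
    "telnet": ["firewall"],
}

def next_hop(traffic_type, current=None):
    """Return the next service in the chain, or 'destination' at the end."""
    chain = CHAINS.get(traffic_type, [])
    if current is None:                      # packet entering the chain
        return chain[0] if chain else "destination"
    idx = chain.index(current)
    return chain[idx + 1] if idx + 1 < len(chain) else "destination"

print(next_hop("http"))                  # firewall
print(next_hop("http", "firewall"))      # load-balancer
print(next_hop("telnet", "firewall"))    # destination
```

In a real deployment, the controller computes this resolution centrally and programs every vRouter with the resulting next-hop rules, so the lookup happens in the kernel forwarding path rather than per packet in user space.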

Author: Matt Conran Matt Conran is a Network Architect based out of Ireland and a prolific blogger at Network Insight. In his spare time he writes on topics ranging from SDN, OpenFlow, NFV, OpenStack, Cloud, Automation and Programming.

NFV Orchestration: Increase service velocity with NFV (part 2 of 4 post series)

Author: Matt Conran Matt Conran is a Network Architect based out of Ireland and a prolific blogger at Network Insight. In his spare time he writes on topics ranging from SDN, OpenFlow, NFV, OpenStack, Cloud, Automation and Programming.

In this second part of a 4-post series around NFV orchestration we detail how NFV (Network Function Virtualization) can help alleviate multi-tenancy and network mobility challenges and increase service velocity (the pace at which services can be rolled out) across enterprises and service providers.

Increasing the service velocity with NFV

In traditional networks, the time to deploy new services is very long. Physical network functions are contained in physical boxes and appliances, and traffic requiring service treatment must be physically wired to the box. In a multi-tenant environment, deploying new services for new tenants was difficult and time consuming. This slowed product innovation and hampered new service and product testing. There is a need to evolve the network and make it more cloud compatible. Networks need to move away from manual provisioning and respond to service insertion in an automated fashion: service insertion should take minutes, not weeks.

How do we get the network to adapt to these new trends? One way to evolve your network is to employ network function virtualization. With NFV, ordering a new service can be done in seconds. For example, a consumer can request, via a catalogue, a number of tiers and the traffic flows permitted between tiers. The network is provisioned automatically, without human intervention; NFV eliminates manual steps and drives policy automation.

NFV and SDN complement each other but satisfy separate functions: SDN is used to program network flows, while NFV is used to program network functions. Network Function Virtualization decouples network functions from proprietary hardware and runs them in software.
It employs virtualization techniques to manage network services in software as opposed to running these functions on static hardware devices. NFV gives a tenant the perception of having a logically isolated network to themselves; it is essentially a software module running on x86 hardware. The underlying hardware is commodity and cheap; what you are really paying for in a proprietary appliance is the software and maintenance. So why can't you run network services on Intel x86?

The building blocks for NFV are Virtual Network Functions (VNFs). A VNF handles a specific network function such as firewalling, intrusion prevention, load balancing or caching. VNFs run in one or more virtual machines and can be service-chained by an SDN controller or some other mechanism. Once the network services are virtualized they can be dynamically chained into a required sequence by an SDN controller, for example the Contrail SDN Controller. The chain is usually realized by creating dynamic tunnels between endpoints and routing traffic through the network function by changing the next hop. The chaining technology is not limited to the control of an SDN controller: Locator/ID Separation Protocol (LISP) can also be used to implement service chaining with its encapsulation/decapsulation functions. Once the network functions are in place, LISP can be used to set up the encapsulation paths.

Please read Part 3 of this article, where I go into how network automation and service chaining can help increase service velocity (the rollout of services) in enterprises and service providers. A special thanks to Jakub Pavlik and his team at tcpcloud - a leading private cloud builder - for collaborating on this post.
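Since VNFs are just software, a service chain can be thought of as function composition over packets. The following sketch illustrates that idea under assumed names and behaviours; it is not any real controller's or VNF's API:

```python
# Hedged sketch: modeling VNFs as composable packet-processing functions.
# All names, fields and policies here are illustrative assumptions.

def firewall(packet):
    # Drop disallowed ports (e.g. telnet); pass everything else through.
    if packet.get("dst_port") in {23}:
        return None
    return packet

def load_balancer(packet):
    # Pick a backend deterministically from the source address.
    backends = ["10.0.0.11", "10.0.0.12"]
    packet["dst_ip"] = backends[hash(packet["src_ip"]) % len(backends)]
    return packet

def apply_chain(packet, chain):
    """Run a packet through an ordered list of VNFs; None means dropped."""
    for vnf in chain:
        packet = vnf(packet)
        if packet is None:
            return None
    return packet

out = apply_chain({"src_ip": "192.0.2.1", "dst_port": 80}, [firewall, load_balancer])
dropped = apply_chain({"src_ip": "192.0.2.1", "dst_port": 23}, [firewall])
```

Re-chaining a service then amounts to changing the list, which is exactly what makes controller-driven chaining (or LISP-based encapsulation paths) so much faster than re-cabling appliances.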


Run an NFV Architecture (OPNFV) on AWS and Google - Brahmaputra Edition

Author: Iben Rodriguez Iben is a cloud consulting and virtualization architect, trained in agile, ITIL, SOX, PCI-DSS and ISO27000. He is working to shift SDN testing functions out of the test lab and closer to the developers and operators.

Set up and operate your own OPNFV architecture for dev, test and training using Ravello Systems on AWS and Google Cloud.

What is OPNFV Architecture

The Genesis project of OPNFV defines a number of core technologies that are part of the open source NFV platform. These include: OpenStack with various core projects such as Horizon, Nova, Neutron, Glance, Cinder, etc.; Open vSwitch; and integration with an approved network controller.

The Brahmaputra release of OPNFV from February 2016 includes support for 4 different "bare-metal" installers for OpenStack, integrated with 4 different network controller options: APEX, Compass4NFV, FUEL and JOID. These installers deploy OpenStack per the Genesis guidelines on hardware you provide in a network test lab or "in the cloud", with various approved network controller options such as: OpenContrail from Juniper; OpenDaylight; ONOS from ON.Lab; or no SDN controller, using default OpenStack Neutron options.

There are a number of feature and test projects that use these environments as a platform once built. If you're new to OPNFV and DevOps there can be a pretty steep learning curve to get up to speed with all the components needed to get a working platform going and maintain it. Organizations wishing to participate in the development and testing of OPNFV architecture should follow the guidelines established by the Pharos project, which specify 6 physical servers connected to a Jenkins build server that uses scripts to issue commands to a jump box machine. This jump box then installs an operating system onto the target nodes and configures the OpenStack, network controller, storage and compute functions.
Preparing the environment and running the scripts can take a few hours even for an automated script, and that doesn't include all the time spent planning and debugging a new installation.

Get it on Repo

REPO by Ravello Systems is a library of public blueprints shared by experts in the infrastructure community. The blueprints we provide are based on the OPNFV Copper Academy work done by Bryan Sullivan from AT&T, which provides a lighter-weight 4-node design that can run virtually, either on premise or hosted "in the cloud". Here's a screenshot of the network layout for the blueprints covered by this blog:

Setup and run an OPNFV Architecture lab on AWS and Google Cloud

This series of posts and blueprints is intended to make it easier (and cheaper) to set up an OPNFV test environment. All you need to get started is a web browser and an account with Ravello Systems. The following 5 blueprint configurations are being made available and shared on Ravello Repo, and will be kept up to date as new OPNFV releases become available:

Builder blueprint with a MaaS controller pre-installed and three empty nodes for bootstrap, controller, and compute, all ready for configuration and OPNFV build-out with your SDN choice. It will take a few hours to deploy this blueprint, configure MaaS, deploy, and produce a working OPNFV customized exactly to your parameters. This is NOT a self-contained blueprint: you must provide SSH keys and GitHub access, and do some complicated file editing. Intended for the more advanced developer.

Contrail blueprint including an already deployed OPNFV with OpenStack Kilo and the Juniper Contrail SDN controller. Spin up an app from this blueprint and in 20 minutes you will have a working OpenStack environment. Beginner level.

ODL blueprint including an already deployed OPNFV with OpenStack Liberty and the OpenDaylight SDN controller.
ONOS blueprint including an already deployed OPNFV with OpenStack Liberty and the ON.Lab ONOS SDN controller.

NOSDN blueprint including an already deployed OPNFV with OpenStack Tip/Master and no SDN controller.

What is Ravello Systems

Ravello's HVX nested hypervisor with overlay networking is delivered as software as a service. You can run any virtual environment with any networking topology on AWS and Google Cloud without modifications. Using Ravello blueprints, you can automatically provision fully isolated multi-tier environments.

Getting Started with the OPNFV Academy Blueprints

Here are the general steps needed to get started with these blueprints and get up and running quickly with one of the pre-built configurations we have provided. See the readme in the GitHub repo for more detailed steps. Open a new web browser window and add the blueprint to your library. Create an application from the blueprint and check the network topology. Start the application, then wait 10 to 15 minutes for the machines to spin up and be ready. Once the VMs are started you can find the IP address for the MAAS server in the Ravello dashboard.

Perform a basic functional test to ensure the admin console for each function is working: open a new web browser window to the IP address of the MAAS server, then open further browser windows with the MAAS server IP address followed by the port for the function you wish to use, e.g. Juniper OpenContrail or OpenStack Horizon. Log in to the Juju GUI admin console to see the deployed model corresponding to the blueprint; this screenshot shows the Juju bundle (a collection of 32 charms) for OpenStack with OpenContrail SDN. Other consoles include the OpenDaylight DLUX console and the MaaS admin console.

Next steps

After this the possibilities are endless. Be sure to join the GitHub repo to post any issues or suggestions you have. You can become familiar with tools such as Juju and MaaS from Canonical, and try out the other OPNFV blueprints from Ravello Repo.
These blueprints are small non-HA versions; you can make your own blueprint with an HA (High Availability) deployment. Sign up for an account with the Linux Foundation to get access to update the wiki, post patches to Gerrit, update JIRA issues, and use Jenkins. Modify and create your own blueprints on Ravello and share them with others. A REST API and Python SDK are also available, allowing automation of Ravello workloads as part of the product lifecycle for your company.


Nested virtualization: How to run nested KVM on AWS or Google Cloud

Running nested KVM on public clouds such as AWS and Google has traditionally been a challenge, because hypervisors like KVM are designed to run on physical x86 hardware and rely on the virtualization extensions offered by modern CPUs (Intel VT and AMD-V) to virtualize the Intel architecture. It is now possible with Ravello's nested virtualization technology.

Ravello's nested virtualization technology is called HVX. It runs on the public cloud and implements the virtualization hardware extensions (Intel VT and AMD-V) in software. HVX exposes a true x86 platform to the VM running on top of the public cloud, which allows enterprises to run hypervisors like KVM on AWS. From an implementation perspective, we have adapted our binary translation so that it recognizes the double nesting, effectively removes one layer of nesting, and runs the guest directly on top of HVX. As a result, the performance overhead is relatively low. In addition, we have implemented nested page support inside HVX, which makes running a hypervisor on top of HVX even more efficient.

Currently, Red Hat uses Ravello to run its global OpenStack training with nested KVM on AWS, in various regions all over the world. Here is a step by step guide on how to install and run RHEV with nested KVM on Ravello. If all you need is a host with KVM, you can use one of the vanilla VMs provided in the Ravello library, enable the nested flag and go ahead. You can try this on Ravello for free using our 14 day trial, and be sure to check out the ready made blueprints for OpenStack and other Linux deployments available on Ravello Repo.
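On a stock Linux/KVM host, "enabling the nested flag" typically means loading the kvm_intel (or kvm_amd) module with nested=1 (e.g. `modprobe kvm_intel nested=1` as root) and then verifying it via sysfs. A small sketch of the verification step, assuming the standard kernel parameter paths:

```python
# Hedged sketch: report whether a loaded KVM module has nested
# virtualization enabled, by reading the standard sysfs parameter files.
from pathlib import Path

PARAM_FILES = [
    Path("/sys/module/kvm_intel/parameters/nested"),
    Path("/sys/module/kvm_amd/parameters/nested"),
]

def nested_enabled(paths=PARAM_FILES):
    """Return True if a loaded KVM module reports nesting on ('Y' or '1')."""
    for p in paths:
        if p.exists():
            return p.read_text().strip() in {"Y", "1"}
    return False  # no KVM module loaded, or not a Linux KVM host

print("nested KVM enabled:", nested_enabled())
```

To persist the setting across reboots, the usual approach is an `options kvm_intel nested=1` line in a file under /etc/modprobe.d/.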


NFV Orchestration: Overcome multi-tenancy challenges (part 1 of 4 post series)

Author: Matt Conran Matt Conran is a Network Architect based out of Ireland and a prolific blogger at Network Insight. In his spare time he writes on topics ranging from SDN, OpenFlow, NFV, OpenStack, Cloud, Automation and Programming.

This article details NFV orchestration using public cloud NFVI as a 4-part series. This post looks into the challenges traditional networks have with multi-tenancy and workload mobility. In the next, we'll show how Network Function Virtualization (NFV) fits in and can increase service velocity.

Leap-frogging the network to the advances in the data-center

Over the last 10 years there has been an increasing proliferation of virtualization, primarily in the areas of compute and storage. However, a data centre contains an additional functional block: the network. Networks have lagged behind, with slow innovation, and have not virtualized to the same extent as storage and compute. We need to view and manage the network as one large fabric, in a centralized fashion. To increase agility, the network must become programmatic, not viewed as individual nodes managed box by box. A central view of, and management point for, the network increases network efficiency.

Networks should be consumed by the consumers of the infrastructure in a self-service manner. For example, an application developer should be able to deploy a stack without having to wait for the network team to provision rules, or interacting with multiple technical teams for deployment. The network must become seamless and automated; the ability to roll out network services and applications in VMs or containers without network intervention is key to reducing time to market.

There has been a transition from hardware-centric data centres to agile virtual cloud data centres. One important aspect of the cloud data centre is that infrastructure is consumed as a service.
When infrastructure is consumed as a service, the consumers of the infrastructure become tenants of the cloud infrastructure. Multiple tenants accessing the cloud's resources make the cloud data centre multi-tenant in nature. In a public cloud, these are resources made available to multiple customers; in a private cloud, resources made available to different departments or organisational units. Multi-tenancy, and the ability for many customers to share resources, puts pressure on traditional networking technologies.

Issues resulting from network multi-tenancy

Securing multi-tenant cloud environments drives the need for tenant isolation. Tenant A should not be able to communicate with tenant B without explicit permit statements. A tenant should consist of an independent island of resources, protected and isolated from other islands. Every tenant should have an independent view of an isolated portion of the network, and peak loads should not affect neighboring tenants in separate virtual networks. Noisy neighbors are prevented by policing and shaping at the VM or tenant level; both shaping and policing limit the output rate, but with functional differences.

Security is a major concern in multi-tenant environments, and a breach in one tenant should not affect others. Beachheading, the process of an attacker jumping from one compromised location to another, should not be permitted. If a tenant does become compromised, traffic patterns and analytics should be provided, enabling the administrator to block the irregular traffic patterns caused by the attacker.

An increasing number and variety of applications are moving to the cloud, but unfortunately traditional network infrastructures are nowhere near agile enough to support them. Traditional networks were not designed for the cloud, or to connect virtual endpoints within a cloud environment; originally, they were invented to connect physical endpoints together.
We are beginning to see the introduction of software-based networks in the form of overlays, used in combination with traditional physical networks, the underlays. Legacy VLANs are used to segment the network, which has proved inefficient for segmenting a dynamic and elastic multi-tenant network. VLANs are very tedious: intervention was needed on every switch in the data centre, as tenant state was held on individual nodes in the fabric. VLANs also restrict the number of tenants due to the number of bits available, limiting you to 4096 VLANs; this soon runs out when deploying multi-tenant, multi-tier application stacks. VLAN designs also require MAC visibility in the core, and when a switch runs out of MAC table space it starts to flood. Unnecessary flooding wastes network bandwidth and hampers network performance. Also, layer 2 domains converge into a single broadcast and failure domain, causing havoc in the event of a broadcast storm. Instead of all these kludges we need to run networks over IP, similar to how Skype runs over the Internet. Scalable networks are built over IP, and overlays can be used to provide Layer 2 connectivity.
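The 4096 limit quoted above falls straight out of the 12-bit VLAN ID field; IP overlay encapsulations typically carry a much wider segment identifier. VXLAN's 24-bit VNI is used below as a representative example (the post itself does not name a specific overlay):

```python
# VLAN IDs are 12 bits; VXLAN VNIs are 24 bits.
vlan_ids = 2 ** 12
vxlan_vnis = 2 ** 24

print(vlan_ids)                # 4096
print(vxlan_vnis)              # 16777216
print(vxlan_vnis // vlan_ids)  # 4096x more segments than VLANs
```

Sixteen million segments is comfortably beyond what any single multi-tenant fabric needs, which is one reason overlays over IP scale where VLAN-based segmentation does not.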


Understanding vCloud Air pricing: How virtual private cloud on-demand compares to AWS

One of the biggest and clearest advantages public cloud computing has over traditional data centers is cost: with the cloud pricing model, Capex becomes Opex, and with a quick pass through the provided calculators you know exactly what you're going to pay. No negotiation; plug and play. With its own vCloud Air pricing calculator, does VMware-gone-on-demand also fit the description?

The short answer is yes. Virtual Private Cloud OnDemand is pay per usage. As is standard, CPU and RAM are metered per minute while VMs are powered on, and storage is metered per minute from allocation to the VM. Public IPs are metered by the minute from allocation to a gateway. Support is charged as a percentage of the compute bill.

vCloud Air and AWS pricing example

To narrow the scope, I priced out an application on both the AWS and the vCloud Air calculators. For this scenario, let's consider an email security appliance. The appliance setup involves:

Email security appliance: 2 vCPU, 4GB RAM, 50GB storage
Email server (Exchange): 4 vCPU, 16GB RAM, 200GB storage
Database server: 4 vCPU, 16GB RAM, 500GB storage
Active Directory: 2 vCPU, 4GB RAM, 50GB storage
Windows client: 2 vCPU, 4GB RAM, 50GB storage

Summary of results:

Provider | Resources / configurations | Estimated monthly cost
vCloud Air | 3 x (2 vCPU, 4GB, 50GB), 1 x (4 vCPU, 16GB, 200GB), 1 x (4 vCPU, 16GB, 500GB) | $820
AWS | 3 x t2.medium, 2 x m4.xlarge | $524

For both options I assumed an application that runs 24x7 for a month. On vCloud Air this required three configurations, for the three types of VMs. Using standard storage, adding 1 public IP for the entire application and running in the US Virginia 1 region, the total price is $820/month, including ~$53 for support. On AWS this meant provisioning 3 t2.medium instances (Linux) and 2 m4.xlarge instances (Linux) and adding the necessary storage (850GB); running in the Virginia region, we get an estimate of $524/month.
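As a sanity check on the AWS side of the estimate, here is the arithmetic with illustrative on-demand rates. The hourly and EBS prices below are assumptions based on circa-2016 US East Linux list pricing, not figures taken from the calculators; check current pricing before relying on them:

```python
# Back-of-the-envelope AWS monthly cost for the setup above.
# All rates are assumed/illustrative, not authoritative.
HOURS_PER_MONTH = 730

instances = {
    "t2.medium": (3, 0.052),   # (count, $/hr) - assumed rate
    "m4.xlarge": (2, 0.215),   # (count, $/hr) - assumed rate
}
ebs_gb = 850
ebs_rate = 0.10                # $/GB-month for gp2 - assumed rate

compute = sum(n * rate * HOURS_PER_MONTH for n, rate in instances.values())
storage = ebs_gb * ebs_rate
total = compute + storage
print(f"compute ~${compute:.0f}, storage ~${storage:.0f}, total ~${total:.0f}")
```

With these assumed rates the total lands around $513/month, in the same ballpark as the $524 calculator estimate above; the gap is easily explained by rate differences and items like public IPs.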
So that I don't give anyone the wrong idea: this is a good place to say that there are significant differences between Virtual Private Cloud OnDemand and your public cloud options, with both advantages and disadvantages. AWS, for instance, provides far more extensive additional services and a broader geographic spread. L2 networking, however, isn't available there, while vCloud Air supports it. Here's a more in-depth analysis of AWS vs vCloud Air. It's not just a matter of price, and even there (as you will see in the following paragraph) things can change when optimizing for your own use case.

Buying options

Back to the longer answer to the question. Where it might start to get confusing is when you consider your buying options, because you can actually pay for on-demand in advance with a subscription purchasing program. That might sound a little less on-demand, but VMware SPPs can be seen in this context strictly as a different way to buy: get initial credits, and then, depending on your selected program, use them or roll them over. Furthermore, while it might seem confusing at first, the concept is not at all foreign to the public cloud: AWS provides the reserved instances option, and Google Cloud has sustained use discounts. The concept actually fits VMware well, since it is much like the type of service VMware typically sells - programs for different periods of time provide different discount options - and it utilizes existing and familiar VMware buying channels. Choosing the SPP option does affect other aspects of your purchase process, like the way you can add configurations in different regions, but I'll leave that out of scope.

You'll see that much as AWS and vCloud Air have quite different offerings, so does Ravello - in our case, enabling full support for running VMware VMs, complete with L2 networking, on AWS or Google Cloud using nested virtualization.
As we dig more into vCloud Air pricing, and other cloud pricing options, please join the conversation. Take a look at how we do things here, and how Ravello pricing works, and give your feedback in the comments.


Five most popular penetration testing tools

Ethical hackers are embracing the public cloud for penetration testing. Using Ravello on AWS and Google Cloud, enterprises are creating high-fidelity replicas of their production environments and using them for penetration testing, to find and fix vulnerabilities in their networks, web properties and applications before a hacker does. This article looks at the five most popular tools used by ethical hackers for penetration testing.

1. Kali Linux – Kali is one of the most popular suites of open-source penetration testing tools out there. It is essentially a Debian-based Linux distro with 300+ pre-installed security and forensic tools, all ready to go. The most frequently used tools are:

Burp Suite - for web application pentesting. Burp Suite can be used for initial mapping and analysis of an application's attack surface, and for finding and exploiting security vulnerabilities. It contains proxy, spider, scanner, intruder, repeater, and sequencer tools.
Wireshark - a network protocol analyzer that needs no introduction.
Hydra - a tool for online brute-forcing of passwords.
Maltego - a tool for intelligence gathering.
Aircrack-ng - a wireless cracking tool.
John - an offline password cracking tool.
OWASP ZAP - for finding vulnerabilities in web applications. OWASP ZAP contains a web application security scanner with an intercepting proxy, automated scanner, passive scanner, brute force scanner, fuzzer, port scanner, etc.
Nmap - for network scanning. Nmap is a security scanner with features for probing computer networks, including host discovery and service and operating system detection - generally mapping the network's attack surface. Nmap's features are extensible by scripts that provide more advanced service and vulnerability detection.
Sqlmap - for exploiting SQL injection vulnerabilities.

One can download Kali Linux from the Kali website and install the ISO on an empty VM on Ravello with a couple of clicks.

2.
Metasploit Community – The Metasploit framework enables one to develop and run exploit code against remote target machines. Metasploit has a large developer fan base that contributes custom modules and test tools for finding weaknesses in operating systems and applications. While the open-source Metasploit framework is built into Kali Linux, the more feature-rich versions - Metasploit Community edition and Metasploit Pro - are available from Rapid7 and highly recommended. Metasploit Pro comes with additional functionality such as Smart Exploitation (which automatically selects exploits suitable for the discovered target), VPN pivoting (which allows one to run any network-based tool through a compromised host), dynamic payloads to evade anti-virus/anti-malware detection, and a collaboration framework that helps a red team share information effectively.

3. CORE Impact – CORE Impact is equally appealing to newbies and experts. It provides a penetration testing framework that includes discovery tools, exploit code to exercise remote and local vulnerabilities, and remote agents for exploring and exploiting a network. CORE Impact works by injecting shellcode into a vulnerable process and installing a remote agent in memory that can be controlled by the attacker. A local exploit can then be used to elevate privileges, and the exploited host can then be used to look for other hosts to attack in a similar manner. CORE Impact's easy-to-use interface (just point and attack!), flexible agents, regular exploit updates and built-in automation make it a popular choice for enterprises. But good things don't come cheap - CORE Impact carries a very expensive price tag.

4. Canvas – Canvas expects users to have considerable knowledge of pentesting, exploits and system insecurity, and focuses on the exploitation aspects of penetration testing. It doesn't perform any discovery, but lets one manually add hosts to the interface and initiate a port scan and OS detection.
This discovered information becomes part of the host's 'knowledge', and the ethical hacker selects the appropriate exploits based on it. If an exploit is successful, a new node signaling an agent appears in the node tree on Canvas. Nodes can be chained together through hosts (much like CORE Impact) so that attacks can percolate deeper into the network. Although Canvas is a commercial tool (just like CORE Impact), it is roughly one-tenth the price of CORE Impact.

5. Nessus – Nessus is a vulnerability scanner, very popular among security professionals. It comes with a huge library of vulnerabilities and tests to identify them. Nessus relies on responses from target hosts to identify holes, and the ethical hacker may use an exploitation tool (e.g. Metasploit) in conjunction to verify that reported holes are indeed exploitable.

So which is the best penetration testing tool out there? There is no one correct answer; it depends on the target, the scope, and the ethical hacker's proficiency with pentesting. Interested in checking the effectiveness of your favorite pentesting tool? Just open a Ravello trial account, upload your VMs to recreate a high-fidelity replica of the environment you want to pentest, and point your favourite pentest tool at it. Since Ravello runs on the public cloud with access to data-center-like networking, a growing number of enterprises are using it to create realistic pentesting environments at scale.
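At its simplest, the port-scanning step that several of these tools perform is a TCP connect scan. The sketch below illustrates that primitive in plain Python; real scanners such as Nmap add SYN scans, service/OS fingerprinting and scriptable detection on top of this idea. The target here is localhost as a placeholder - only scan hosts you are authorized to test:

```python
# Hedged sketch: a minimal TCP connect() scan, the simplest form of the
# port discovery these pentest tools perform. Targets are placeholders.
import socket

def connect_scan(host, ports, timeout=0.5):
    """Return the subset of ports that accept a TCP connection."""
    open_ports = []
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            pass  # closed, filtered, or unreachable
    return open_ports

print(connect_scan("127.0.0.1", [22, 80, 443]))
```

A full connect() handshake is noisy and easily logged, which is why real tools prefer half-open SYN scans when they have raw-socket privileges; the principle of probing ports and classifying responses is the same.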


How to overcome known limitations with existing methods to import and run VM/KVM images on AWS and Google Cloud

This blog describes how you can overcome the limitations of the AWS VM ImportInstance feature. There are several reasons users want to import their existing server and client system VMs (OVF, OVA, VMX/VMDK, QCOW) to AWS. If you have existing virtualized multi-tier applications and you want to leverage public clouds for dev/test/training environments for these applications, you need a way to quickly run them on the public cloud while preserving their software and configuration settings. Another scenario: if you develop applications that run on multiple Windows clients, you are probably testing them on clients like Windows 7, Windows 8, Windows 8.1, Windows 10, etc. The public cloud presents a very attractive option for on-demand, at-scale client testing.

In recent years most leading public clouds have added support for importing existing VMs from the data center and building cloud instances from the imported VM image. This process involves using a CLI-based cloud interface and converts the existing VM image to an AMI. The user can then instantiate instances from the AMI, and has to redefine the networking to replicate the application's existing network topology. There are known limitations on the disk formats, networking and other configurations of a VM that cannot be moved over as-is during this import process. This can pose a problem for a variety of purposes. For dev/test, you want the application environment on AWS to mirror your DC-hosted application topology and environment configuration; this is essential for high-fidelity testing and for eliminating false positives during testing. Some known examples of where such limitations can be a challenge: when your VMs use a custom Linux kernel, you can't replicate these on AWS or Google Cloud. The same applies when your application VMs in the DC are configured with multiple NICs.
You can't carry over these networking configurations during VM import. Other cases include: Windows VMs configured with GUID Partition Table (GPT) partitioned volumes; 32-bit application VMs; client test cycles that require older Windows versions like Windows XP or 32-bit Windows; VMs with large disks; and standardized infrastructure configurations (networking, storage) that you use for training labs.

Ravello Systems lets you import VM/KVM images and run them unmodified on AWS and Google Cloud. Ravello's technology is a high-performance nested hypervisor capable of running unmodified guests on top of already-virtualized hardware. HVX exposes VMware or KVM devices to the virtual machine running on top. This means the VM images are not converted or modified during the import process; they run unmodified on top of Ravello on top of AWS. Everything about the VM stays the same: the same operating system, paravirtualized drivers (VMXNet3 network driver, PVSCSI storage driver, etc.), application settings, network settings, VMware tools and so on.

Importing VM/KVM images from your DC to run on top of AWS and Google Cloud

Step 1: Download the Ravello VM import tool from your Ravello account and connect to your ESXi vCenter. You can also upload VM images directly from your laptop if you have already exported them. Select the VM images you want to upload and start the upload. Depending on the size of the VM image and your network bandwidth, this may take some time; there are best practices, tools and tips to optimize the upload time.

Step 2: You can start multiple VM image uploads at the same time and monitor the progress in the upload tool. There is also a CLI version available for those who prefer a command line interface.

Step 3: Once the VM images are uploaded, they are available in your private VM library.

Step 4: Drag and drop the VM image from the library onto the application workspace area.
Then you can verify the VM's software, disk and networking configuration before running it on AWS or Google Cloud. The VM image is not converted; it comes over unmodified with the same system properties that were set in your data center. All disk/volume configurations are supported, and you don't need to change your VM's existing disk configuration to run it on AWS or Google Cloud: Ravello HVX transparently maps it to the underlying cloud storage. You can attach as many disks as you need, and each disk can be 25TB. The uploaded VM image retains all of its networking configuration. If your VM is configured with multiple NICs, you can define those under the networking configuration with the same static IPs as in your data center. You can drag and drop multiple uploaded VM images for your application environment, verify that all VMs have retained their networking and other configurations, and publish/run the application on AWS or Google Cloud. To summarize: you can take your existing application VM images (VMware or KVM; OVA, OVF, QCOW, VMX/VMDK), upload them using an easy-to-use GUI tool, and run them unmodified on AWS and Google Cloud. This includes VM images built with a custom Linux kernel, Windows VMs with custom patches, VMs with multiple NICs, GUID Partition Table (GPT) partitioned volumes, 32-bit Windows or Windows XP images, and VMs with multiple large (multi-TB) disks.


5 common mistakes when choosing a VMware hosting provider from the vCloud Air network

Almost every single one of the larger VMware customers I've worked with has, at some point or other, needed additional capacity. Before Ravello entered the scene to run VMware workloads unmodified on AWS/Google Cloud, the most common approach was to use a VMware hosting provider to access additional capacity. If you're looking to choose a VMware hosting provider, take a look at the vCloud Air service provider network, which lists 4000 service providers who run a vSphere based cloud. You could also look at vCloud Air itself as an option and compare it with more specialized providers like Skytap and CloudShare. In fact, if you are comparing different hosting providers, many of them offer a test drive or free trial as well. But it's not an easy decision. Hosting providers come in all sizes, and each of them offers different flavors of hosting: dedicated servers, public cloud, virtual private cloud. Over the years I've had customers say that in hindsight they made this mistake or that when choosing their hosting provider. Below are the top 5 mistakes to avoid as you choose your VMware hosting provider:

1. Scale/size - and geographic distribution

Often, the smaller hosting providers seem more attractive because of the personalized attention and trusted relationship you can build. However, one of the reasons you are choosing a hosting provider is that you need additional capacity. Know that most of the other customers they service also have bursty, unpredictable demand patterns. Be sure to assess the size of the hosting provider and understand how many customers they service. The smaller providers may be oversubscribed at the same time that your own demand peaks. Look for clauses in the contract that may allow your reserved capacity to be used by others in times of need. Also, look for options to deploy in other regions of the world if you need to. It may not be a priority today, but as you grow you might benefit from having that option handy. In general, the bigger the hosting cloud, the higher the probability that it will be able to absorb variations in demand patterns and not let that variability hurt you, especially when you need the capacity.

2. Control - and built-in automation

It's fairly common for customers to say that they don't have access to their VMs, VMDKs or data when they need it, because they are using a hosting provider with a managed service. Most providers also have some level of built-in automation, and depending on their level of sophistication they may be able to provide you additional benefits like cloning entire environments on demand. Know that it is possible to retain full control over your environment even when using a managed service provider. Always read the fine print. Look for clauses that prevent you from downloading a VM or moving an environment in-house easily.

3. Negotiating the price

Very few service providers (and this includes Skytap and CloudShare) publish their prices online. By contrast, public clouds like AWS/Google tend to be very transparent and publish prices on their websites. This is the philosophy we chose here at Ravello as well. But in the case of hosting providers there is often a long, intense negotiation for every deal, on a yearly basis. Depending on your negotiation skills you might get a really good deal, or you might end up paying a lot more than the other customers. There is just no way to know what the best price is when you are signing that contract, and that's often an unpleasant experience. So watch out for this one, and be sure to negotiate when prices are not publicly listed on the website.

4. Monthly fees/minimums

Given the very nature of their business, hosting providers will require that you pay some monthly fee or commit to a monthly minimum usage. Note that large cloud providers such as AWS and Google Cloud are able to provide truly on-demand access where you only pay for usage, primarily because they have such large scale that they can count on having enough usage in any given month. You may think that you are committing to a low monthly minimum, but it can add up very quickly if things change and you aren't using that capacity for a few months. The monthly fees may also relate to services rendered, access to support or reserved capacity.

5. Overage charges

The other side of the capacity equation is overage charges. Since hosting providers need to plan for capacity across a number of customers, they try to estimate and control the variability by charging overages when a customer's usage exceeds the planned usage, either in the month or during the year. It is very difficult to predict usage patterns, so many customers underestimate usage at the beginning of the year and end up paying huge amounts in overage charges due to this simple mistake.

On a different note, Ravello uses nested virtualization technology to run existing VMware workloads, or even ESXi hosts themselves, in AWS/Google Cloud. It's very different from a VMware hosting solution because it's a SaaS solution where customers deploy and manage their own VMware workloads, and use all the built-in automation to save/share environments. Ravello's prices are fully transparent and closer to AWS pricing models than to hosting providers', which also means you only pay for usage, without any monthly fees or overage charges.
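The monthly-minimum math above is worth working through once. Here is a minimal sketch; the hourly rate, minimum, and usage pattern are all hypothetical figures chosen for illustration, not real provider prices:

```python
# Hypothetical comparison of usage-based billing vs. a monthly-minimum contract.
# All rates and hours below are invented for illustration only.

def yearly_cost(hours_by_month, hourly_rate, monthly_minimum=0.0):
    """Each month you pay for usage, but never less than the committed minimum."""
    return sum(max(hours * hourly_rate, monthly_minimum) for hours in hours_by_month)

# Bursty demand: four busy months, then the project winds down.
usage = [200, 200, 200, 200, 0, 0, 0, 0, 0, 0, 0, 0]

pay_per_use = yearly_cost(usage, hourly_rate=1.50)                         # 1200.0
with_minimum = yearly_cost(usage, hourly_rate=1.50, monthly_minimum=500)   # 6000.0
```

In this made-up scenario a "low" $500/month minimum quintuples the annual bill once demand drops off, which is exactly the "adds up very quickly" effect described above.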


Five benefits of penetration testing on AWS using Ravello

Enterprises are looking to secure their networks, web properties and applications against vulnerabilities, and are hiring ethical hackers to penetrate their infrastructure and discover holes. Public cloud (e.g. AWS or Google) is an excellent candidate for building pentesting environments at scale, but lacks some key functionality. Ravello's Security Smart Lab on AWS and Google overcomes these drawbacks and enables the creation of high-fidelity production environment replicas that can be used for effective penetration testing.

Security: a priority, but execution is challenging

With many enterprise breaches fresh in memory, CISOs are focusing on coordinating incident detection and response across networks, hosts, threat intelligence and user behavior monitoring. They want their enterprise environments to be breach-proof and their workforce fully trained and capable of thwarting any security incident. Penetration testing, or ethically hacking their network, web and application environments to discover 'holes' before a malicious hacker does, is a top priority. While the goal is clear, the execution presents a challenge. Enterprises are wary of penetration testing on their production infrastructure, worried that it may impact their business. To avoid this risk, they try to recreate a mock setup that mimics their production infrastructure in their own datacenters, and use it for penetration testing. However, the amount of resources needed for a realistic, at-scale representation of the production environment typically prevents this mock setup from being effective for network, web or application penetration testing.

Public cloud enables scale, but has shortcomings

The cloud presents an interesting alternative when it comes to building at scale. Using public clouds such as AWS, Google or Azure, one can build replicas of enterprise environments that mimic real-world scale, but they are still far from being realistic representations of the data-center-based enterprise. AWS penetration testing enthusiasts typically run into the following challenges:

- Cloud versions of virtual appliances differ from the ones enterprises run in their data centers. Take for example the Palo Alto Networks VM-Series firewall. While the VM-Series has an AMI (Amazon Machine Image), the functionality supported by the VM-Series AMI pales in comparison to the VM-Series firewall intended for datacenters (the VMware or KVM version).
- Data-center networking is different from cloud networking. Public cloud inherently blocks broadcast and multicast packets and provides access only to Layer 3 and above. Most (if not all) enterprise deployments rely on some Layer 2 protocol for advanced functionality that their setup depends on (e.g. VRRP is typically needed for high availability).
- Different networking and storage configuration. The environment setup, such as the networking and storage configuration (e.g. IP addressing, netmasks, VLANs), differs between the production and cloud environments.

Despite these drawbacks, if one were to proceed with penetration testing on public cloud, they would still not be able to perform an integrated scan of compute instances for vulnerabilities, compliance violations and advanced threats. Public cloud providers typically block such a scan, as it can put compute instances used by other customers at risk. Further, AWS requires one to request permission for vulnerability and penetration testing ahead of time.

Public cloud + nested virtualization + networking overlay = on-demand penetration testing

Ravello's Security Smart Lab on AWS and Google Cloud overcomes these limitations. It enables organizations to create effective environments for their application or web pentesting on AWS and Google Cloud. Here's how Ravello's Security Smart Lab overcomes the challenges:

- Isolated security sandbox capsules: using Ravello's isolated, self-contained security sandbox capsules it is possible to block any traffic from leaving the capsule, opening the door to extensive vulnerability scans even while running on public cloud. Running scans inside the capsule doesn't pose any risk to other compute instances.
- High-fidelity copy of the enterprise environment: Ravello's nested virtualization technology lets one create an exact copy of the enterprise environment with the same virtual appliances and VMs used in the datacenter. This enables ethical hackers to pentest on exactly the same setup as their production enterprise environment, helping uncover real vulnerabilities in advance.
- Datacenter networking on public cloud: Ravello's networking overlay enables clean Layer 2 networking on public cloud, enabling all the features that require access to broadcast and multicast frames, among others. This overlay also lets one keep the same networking configuration, right down to the same IPs, netmasks and VLAN tags for each of the networks and NICs.
- Advanced tools such as port mirroring: for effective pentesting, one needs advanced tools such as port mirroring to tap into packets traversing a switch. Ravello's security labs come with such tools built in.
- Accelerated penetration testing: Ravello's ability to take a 'blueprint' snapshot of a setup and instantiate multiple copies of it enables ethical hackers and pentesters to parallelize their effort at finding more security holes in web or application penetration testing in a short amount of time.

These capabilities make Ravello Security Smart Lab an ideal environment for ethical hacking and penetration testing without risking the business. Using Ravello, AWS pentesting enthusiasts get the best of both datacenter capabilities and public cloud benefits (scale, cost economics, on-demand capacity) in one unique service. Interested in trying out Ravello? Just open a Ravello trial account, and drop us a line. We will help you get started with your penetration testing environment in no time.


How to run Cumulus switch on AWS & Google cloud

Cumulus Networks provides a Linux-based OS for data-center switches, and has seen great adoption in the last year. Enterprises love Cumulus since existing Linux management, automation and monitoring tools work seamlessly with Cumulus switches, dramatically simplifying data-center operations. Network architects want to try out Cumulus, but before rolling new technology out on their networks they want to build a leaf-spine topology at realistic enterprise scale and test things out. Ravello's Network Smart Labs presents a great platform where all this and more is possible. With the underlying technologies that power Ravello, nested virtualization and a networking overlay, it is possible to create full-featured deployments with data-center networking (Layer 2) on AWS. Whether one wants to build a Cumulus leaf-spine deployment from scratch or use an existing deployment as a starting point, Ravello presents an easy-to-use platform to do so. REPO by Ravello Systems is a library of public blueprints shared by experts in the infrastructure community, and in fact there is a pre-built Cumulus switch leaf-spine topology on Ravello Repo that one can run with a single click. Just open a Ravello trial account and add the blueprint to your Ravello library. If you want to build and run Cumulus switch deployments from scratch, Christian Elsen has written a very nice article on running Cumulus switches on AWS and Google Cloud.


Beyond Mininet: Use Ravello to test layer two OpenDaylight services in the cloud

Author: John Sobanski

John Sobanski (Sr. Systems Architect) has been with Solers, Inc. for over ten years. John enjoys architecture, business development and machine learning. He has been an early advocate of the OpenDaylight platform and Ravello to both public and private customers.

OpenDaylight allows network engineers to control switches with high-level intelligence and abstracted services. Before Ravello, your engineers needed to deploy physical switches or use Mininet in order to integrate and test OpenDaylight. Neither AWS, Google Cloud, nor Azure provides native access to layer two (Ethernet, VLAN, LLDP, etc.) in the cloud. Ravello, however, provides a simple method to access Layer 2 (L2) services in the cloud. This lab will show you or your engineers how to integrate and test OpenDaylight in the cloud, using full virtual machines (VMs) instead of Mininet containers. In this blog post you will learn how to:

- Connect virtual machines to a dedicated virtual switch VM in the cloud with Ravello
- Deploy and configure OpenDaylight
- Use a REST API to configure your network switch
- Easily steer flows through a firewall on ingress, but bypass it on egress, using OpenDaylight

Scenario

You have a product distribution system where egress throughput greatly exceeds ingress throughput. For security reasons, you perform Deep Packet Inspection (DPI) on flows between external (EXT) hosts and your Demilitarized Zone (DMZ) proxies. To ensure inter-network communications pass through the DPI, you implement a DPI "router on a stick", where a switch "bent pipes" the traffic at L2. The egress traffic will increase past the capacity of the DPI appliance, and you realize that there are cheaper methods of securing your egress flows than upgrading to a bigger DPI appliance.
With egress flows you want to ensure that return/ACK traffic does not include exploits and that egress flows do not facilitate zombies or "phone home" exploits. Some ideas:

Ensure only approved ports:
- Access Control Lists (ACLs)
- iptables
- Host firewalls

Mitigate against malicious code over approved ports:
- HIDS on servers
- Uni-directional bulk data push with error detection and correction over one-way fiber
- TLS with X.509 certificates

You would like to have DPI inspection on ingress flows, but not egress, since the other security measures will cover the egress flows:
- One approach is to add "don't scan egress flows" logic to your DPI appliance, but that wastes capacity/resources and could saturate the backplane.
- An approach with legacy network protocols is very difficult to implement, and results in asymmetric routes (i.e., it will break things).
- Using OpenDaylight, we have a simple solution that only requires matches/actions on six (6) flows.

The goal:
- When EXT initiates, pass through the DPI
- When DMZ initiates:
  - Bypass the DPI on PUT (egress)
  - Scan on GET (ingress)

Here is the logic for our OpenFlow rule set:
- The ACL only allows permitted flows
- For ingress (EXT -> DMZ) flows, allow the normal path to virus scan via the gateway
- For egress (DMZ -> EXT) PUT flows, intercept the packet:
  - Change the destination MAC from the gateway to EXT
  - Change the destination port from the gateway to EXT
  - Decrement the TTL by one
- For egress (DMZ -> EXT) GET flows (treated as ingress):
  - DMZ uses a dummy IP for the EXT server
  - The switch intercepts the packet
  - The switch changes the source IP to the dummy DMZ address
  - The switch changes the destination IP to the correct EXT IP
  - The packet continues on its way to the gateway
- Reverse the logic for return traffic

Lab Setup

This section goes into the details of our test bed architecture. You can either create the architecture from scratch or use the blueprint Solers provides in the Ravello library.
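The OpenFlow rule-set logic described above can be modeled in a few lines. The sketch below is purely illustrative (the function and packet field names are our own, not OpenFlow or OpenDaylight constructs); it captures the key decision: a DMZ-initiated PUT to the real EXT address bypasses the firewall, while a DMZ-initiated GET to the dummy address is rewritten and sent through it:

```python
# Illustrative model of the six-flow egress-bypass logic (not OpenDaylight code).
# IPs and MACs match the lab topology below; the packet-dict fields are hypothetical.

EXT_IP, DMZ_IP = "10.10.1.102", "10.10.2.101"
EXT_DUMMY = "6.6.6.6"              # dummy IP the DMZ uses for "ingress" GETs
EXT_MAC = "72:57:e7:e1:b4:5f"

def switch_action(pkt):
    """Return the (possibly rewritten) packet and whether it bypasses the firewall."""
    # Egress PUT: DMZ -> real EXT IP on port 80. Rewrite the destination MAC
    # from the gateway to EXT and deliver directly, bypassing the DPI/firewall.
    if pkt["src_ip"] == DMZ_IP and pkt["dst_ip"] == EXT_IP and pkt["tp_dst"] == 80:
        return {**pkt, "dst_mac": EXT_MAC}, True
    # Egress GET: DMZ targets the dummy IP. Rewrite to the real EXT IP and let
    # the packet take the normal path through the firewall (ingress treatment).
    if pkt["src_ip"] == DMZ_IP and pkt["dst_ip"] == EXT_DUMMY and pkt["tp_dst"] == 80:
        return {**pkt, "dst_ip": EXT_IP}, False
    # Everything else, including EXT-initiated ingress, follows the normal path.
    return pkt, False
```

The real flows, dumped later in the lab, also rewrite source IPs for the return traffic; this sketch shows only the forward direction.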
Architecture

Our test uses the following architecture, and Ravello allows us to access layer two services in a cloud environment. Deploy four Linux virtual machines with Open vSwitch version 2.3.1 or greater. You can leave the management ports for all VMs with the default (AutoMac/VirtIO/DHCP/Public IP) settings. Be sure to enable SSH as a service for all four VMs. Your central "sw3" VM will contain the virtual switch and controller, so open up ports 8181 (ODL) and 8080 (web). Each of the arrows in our architecture diagram represents a physical link, or wire. We simulate these physical wires in the Ravello layer as a network underlay. While we configure this Ravello layer with IP, the Ravello layer presents these networks as physical links to our virtual machines.

Some troubleshooting hints:
- Ensure all ports are trunk ports (it is okay to keep the management ports as Access 1)
- You will be tempted to make the underlay links /30, since they are point to point. Ensure, however, that you make these /24s, as in the diagram above
- We do not show the management ports (eth0) in the diagram above, since they are out-of-band
- Be sure to include the MAC addresses above, since we will use these values to trigger OpenDaylight services

Configure your canvas to match the same Layer 3 and Layer 2 topology above. As an example, you would set the following network configuration for the "ext" VM above:

Name: eth1
MAC: 72:57:E7:E1:B4:5F
Device: e1000 (default)
Static IP: 172.16.103.2
Netmask: 255.255.255.0
Gateway:
DNS:
External Access: Inbound (OFF), Outbound (ON), Public IP (uncheck "even without external services")
Advanced: Mode (Trunk), VLAN Tags ()

Repeat the appropriate configurations for all four virtual machines. Your network will look like the following diagram. Once you finish configuring your Ravello layer, you can SSH into the virtual machines.
Note that at this virtual machine layer you will configure different IP addresses for the virtual machine NICs (but the MAC addresses will match).

EXT Server

The EXT server simulates an un-trusted client and server. Edit the NIC:

$ sudo vim /etc/network/interfaces.d/eth1.cfg
auto eth1
iface eth1 inet static
address 10.10.1.102
netmask 255.255.255.0
post-up route add -net 10.10.2.0 netmask 255.255.255.0 gw 10.10.1.1
post-up route add -net 6.6.6.0 netmask 255.255.255.0 gw 10.10.1.1

You will need to restart the network service for the change to take effect:

$ sudo service networking restart

Then upload server.py and create a file named "test.txt". Finally, issue the following command to pre-populate the ARP table:

$ sudo arp -s 10.10.1.1 5A:F6:C6:6A:DB:05

DMZ Server

Run the following shell commands:

$ sudo vim /etc/network/interfaces.d/eth1.cfg
auto eth1
iface eth1 inet static
address 10.10.2.101
netmask 255.255.255.0
post-up route add -net 10.10.1.0 netmask 255.255.255.0 gw 10.10.2.1
post-up route add -net 5.5.5.0 netmask 255.255.255.0 gw 10.10.2.1

$ sudo service networking restart
$ sudo arp -s 10.10.2.1 FE:C3:2D:75:C2:26

In addition, upload server.py and create a file named "test.txt".

Firewall

You need to turn the "firewall" into a router to pass traffic between the two NICs, and make the change permanent:

$ sudo sysctl -w net.ipv4.ip_forward=1
$ sudo vim /etc/rc.local
sysctl -w net.ipv4.ip_forward=1

$ sudo vim /etc/network/interfaces.d/eth1.cfg
auto eth1
iface eth1 inet static
address 10.10.1.1
netmask 255.255.255.0

$ sudo vim /etc/network/interfaces.d/eth2.cfg
auto eth2
iface eth2 inet static
address 10.10.2.1
netmask 255.255.255.0

$ sudo service networking restart
$ sudo arp -s 10.10.1.102 72:57:E7:E1:B4:5F
$ sudo arp -s 10.10.2.101 BA:74:4C:7A:93:50

L2 Switch

First, ensure that your server brought up all interfaces.
If not, bring them up manually:

$ sudo ifconfig eth1 up
$ sudo ifconfig eth2 up
$ sudo ifconfig eth3 up
$ sudo ifconfig eth4 up

Then install OVS, make the interfaces persistent, and build the bridge:

$ sudo apt-get install openvswitch-switch
$ sudo vim /etc/rc.local
ifconfig eth1 up
ifconfig eth2 up
ifconfig eth3 up
ifconfig eth4 up
exit 0

$ sudo ovs-vsctl add-br br0
$ sudo ovs-vsctl add-port br0 eth1
$ sudo ovs-vsctl add-port br0 eth2
$ sudo ovs-vsctl add-port br0 eth3
$ sudo ovs-vsctl add-port br0 eth4
$ sudo ovs-vsctl set bridge br0 protocols=OpenFlow13

At this point, you should be able to ping from DMZ to EXT and vice versa. If this is not the case, follow these troubleshooting hints:

- Pre-populate the ARP cache
- Run route commands to ensure proper routes
- Ensure all ports at the Ravello layer are trunk ports
- Ensure all point-to-point links at the Ravello layer use a /24 and not a /30
- Ensure that the VM MAC addresses match up with the Ravello layer MAC addresses
- Ensure that NICs eth1, eth2, eth3 and eth4 on SW3 do not have IP addresses, and that the OVS switch ports match up with the Linux kernel switch ports. To check this, run:

$ sudo ovs-ofctl -O OpenFlow13 show br0

Do not proceed until you can ping full mesh across the DMZ, EXT, and FW virtual machines (excluding management ports).

Install OpenDaylight

OpenDaylight allows you to control switches with high-level intelligence and abstracted services.
First, if you do not already have Java installed, install Java 7(+):

$ sudo apt-get install openjdk-7-jdk
$ sudo update-alternatives --install /usr/bin/java java /usr/lib/jvm/java-7-openjdk-amd64/bin/java 1
$ sudo update-alternatives --config java

Then add the following line to the end of your ~/.bashrc file:

export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64 # This matches sudo update-alternatives --config java

Then download, unzip and run OpenDaylight:

$ wget https://nexus.opendaylight.org/content/repositories/opendaylight.release/org/opendaylight/integration/distribution-karaf/0.3.2-Lithium-SR2/distribution-karaf-0.3.2-Lithium-SR2.zip
$ sudo apt-get install unzip
$ unzip distribution-karaf-0.3.2-Lithium-SR2.zip
$ /home/ubuntu/distribution-karaf-0.3.2-Lithium-SR2/bin/karaf clean

This will take several minutes to start. Once you get the Karaf prompt, add only the following module:

opendaylight-user@root>feature:install odl-l2switch-switch-ui

Installing the odl-l2switch-switch-ui module may also take several minutes. You can check whether OpenDaylight started by running:

$ sudo netstat -an | grep 6633

Finally, upload and unzip the REST API scripts.

Connect your Open vSwitch to OpenDaylight

Open a new shell to SW3 (if you kill the Karaf prompt, it closes OpenDaylight). Then connect the bridge to the local controller:

$ sudo ovs-vsctl set-controller br0 tcp:0.0.0.0:6633
$ sudo ovs-vsctl set controller br0 connection-mode=out-of-band
$ sudo ovs-vsctl list controller

When you list the controller, you will want to see:

connection_mode : out-of-band
is_connected : true
target : "tcp:0.0.0.0:6633"

Ping around your network. It will take some time for the OpenDaylight controller to "learn" your network; the virtual switch off-loads all of the intelligence to the controller. We recommend first pinging "in network" (i.e., have DMZ and EXT ping their local gateways), and then pinging between networks. Now go to the DLUX GUI and log in with admin/admin.
http://<your-sw3-public-ip>:8181/index.html#/login

You will see your devices in the OpenDaylight GUI. You can also dump the flows of the local switch to show that OpenDaylight "learned" the Layer 2 topology:

$ sudo ovs-ofctl -O OpenFlow13 dump-flows br0
OFPST_FLOW reply (OF1.3) (xid=0x2):
 cookie=0x2a00000000000000, duration=385.025s, table=0, n_packets=729, n_bytes=71328, idle_timeout=1800, hard_timeout=3600, priority=10,dl_src=ba:74:4c:7a:93:50,dl_dst=fe:c3:2d:75:c2:26 actions=output:2
 cookie=0x2a00000000000003, duration=28.425s, table=0, n_packets=2915, n_bytes=285404, idle_timeout=1800, hard_timeout=3600, priority=10,dl_src=72:57:e7:e1:b4:5f,dl_dst=5a:f6:c6:6a:db:05 actions=output:1
 cookie=0x2a00000000000002, duration=28.438s, table=0, n_packets=1695, n_bytes=166034, idle_timeout=1800, hard_timeout=3600, priority=10,dl_src=5a:f6:c6:6a:db:05,dl_dst=72:57:e7:e1:b4:5f actions=output:3
 cookie=0x2a00000000000001, duration=385.030s, table=0, n_packets=1947, n_bytes=190654, idle_timeout=1800, hard_timeout=3600, priority=10,dl_src=fe:c3:2d:75:c2:26,dl_dst=ba:74:4c:7a:93:50 actions=output:4
 cookie=0x2b00000000000000, duration=412.169s, table=0, n_packets=0, n_bytes=0, priority=0 actions=drop
 cookie=0x2b00000000000003, duration=408.298s, table=0, n_packets=914437, n_bytes=89614674, priority=2,in_port=3 actions=output:2,output:1,output:4,CONTROLLER:65535
 cookie=0x2b00000000000001, duration=408.298s, table=0, n_packets=960900, n_bytes=94168048, priority=2,in_port=1 actions=output:2,output:4,output:3,CONTROLLER:65535
 cookie=0x2b00000000000002, duration=408.298s, table=0, n_packets=906546, n_bytes=88841280, priority=2,in_port=4 actions=output:2,output:1,output:3,CONTROLLER:65535
 cookie=0x2b00000000000000, duration=408.298s, table=0, n_packets=905487, n_bytes=88737612, priority=2,in_port=2 actions=output:1,output:4,output:3,CONTROLLER:65535
 cookie=0x2b00000000000001, duration=412.168s, table=0, n_packets=7, n_bytes=1400, priority=100,dl_type=0x88cc actions=CONTROLLER:65535

Lab Execution

To observe the OpenDaylight-triggered "egress bypass" service, follow these steps:

Observe baseline operations
- Push a file from the DMZ server to the EXT server
- Observe that traffic passes through the firewall

Configure our switch with OpenDaylight
- Use the REST API to inject the egress bypass rules into our switch

Observe egress bypass
- Push a file from the DMZ server to the EXT server once more
- Now observe that traffic does not pass through the firewall

Observe ingress scanning
- Trigger the DMZ server to pull a file from the EXT server
- Since this flow is ingress, we will observe the traffic pass through the firewall

Observe Baseline Operations

Open separate SSH terminals for your external (EXT) server, DMZ server, and the firewall. Start a web server on your EXT server with the following command, which starts a Python web server that accommodates GET and PUT:

ubuntu@ext:~$ sudo python ./server.py 80

Snoop the traffic on your firewall (FW) with the following command:

ubuntu@fw:~$ clear; sudo tcpdump -i eth2 port 80

Now PUSH a file from DMZ to EXT:

ubuntu@dmz:~$ curl http://10.10.1.102 --upload-file test.txt

We will see success on the EXT shell, via the following message:

ubuntu@ext:~$ sudo python ./server.py 80
Starting a server on port 80
----- SOMETHING WAS PUT!! ------
User-Agent: curl/7.35.0
Host: 10.10.1.102
Accept: */*
Content-Length: 5
Expect: 100-continue
10.10.2.101 - - [04/Dec/2015 15:49:29] "PUT /test.txt HTTP/1.1" 200 -
Test

Our PUSH from DMZ to EXT took a path through the firewall, so we see a packet dump on the snoop shell.

Configure Switch with OpenDaylight

If you haven't already, start and connect to OpenDaylight (refer to the Lab Setup section above for details). Once it has started, use the REST API to discover the ID of your virtual switch. In any browser, go to the following address:

http://<your-sw3-public-ip>:8080/restconf/operational/opendaylight-inventory:nodes/

You should see just one node, your local OVS switch. Copy the ID of the node.
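If you prefer to script this discovery step, the node ID can be pulled out of the inventory response. The sketch below assumes the inventory JSON has the shape {"nodes": {"node": [{"id": "openflow:<dpid>", ...}]}} as seen in the browser; verify against your own output, since the exact model can vary between OpenDaylight releases:

```python
# Extract OpenFlow switch IDs from an OpenDaylight RESTCONF inventory document.
# Assumed JSON shape: {"nodes": {"node": [{"id": "openflow:<dpid>", ...}]}}.
# Fetch the document yourself (browser or curl) and feed in the raw JSON text.
import json

def switch_ids(inventory_json):
    doc = json.loads(inventory_json)
    return [node["id"].split(":", 1)[1]     # "openflow:49213347348856" -> "49213347348856"
            for node in doc["nodes"]["node"]
            if node["id"].startswith("openflow:")]
```

For a single-switch inventory like this lab's, the returned list has one entry: the ID to pass to put_flows.sh.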
For example, we list our ID below (NOTE: do not use this ID, yours will be different). Our switch uses ID 49213347348856. Use this ID in the put_flows.sh script in order to inject the flows into the switch with the REST API. Alternatively, you can install the flows manually using POSTMAN. From the shell of SW3, run the following command:

ubuntu@sw3:~/demo_fw_flows_ravello$ ./put_flows.sh 49213347348856
> PUT /restconf/config/opendaylight-inventory:nodes/node/openflow:49213347348856/table/0/flow/404 HTTP/1.1
< HTTP/1.1 200 OK
> PUT /restconf/config/opendaylight-inventory:nodes/node/openflow:49213347348856/table/0/flow/505 HTTP/1.1
< HTTP/1.1 200 OK
> PUT /restconf/config/opendaylight-inventory:nodes/node/openflow:49213347348856/table/0/flow/606 HTTP/1.1
< HTTP/1.1 100 Continue
< HTTP/1.1 200 OK
> PUT /restconf/config/opendaylight-inventory:nodes/node/openflow:49213347348856/table/0/flow/707 HTTP/1.1
< HTTP/1.1 100 Continue
< HTTP/1.1 200 OK
> PUT /restconf/config/opendaylight-inventory:nodes/node/openflow:49213347348856/table/0/flow/808 HTTP/1.1
< HTTP/1.1 100 Continue
< HTTP/1.1 200 OK
> PUT /restconf/config/opendaylight-inventory:nodes/node/openflow:49213347348856/table/0/flow/909 HTTP/1.1
< HTTP/1.1 100 Continue
< HTTP/1.1 200 OK
ubuntu@sw3:~/demo_fw_flows_ravello$

If you do not see an "OK" for every flow, run the script again.
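put_flows.sh is essentially a wrapper around six RESTCONF PUTs. If you want to issue them yourself, the endpoint path follows the pattern visible in the output above. The helper below is our own hypothetical sketch (host is a placeholder; the flow JSON payloads themselves live in the script), not part of the lab's tooling:

```python
# Build the OpenDaylight RESTCONF config URL for a single flow, matching the
# paths shown in the put_flows.sh output above. The host is a placeholder;
# the flow JSON bodies are carried in the script's payload files.

def flow_url(node_id, flow_id, host="localhost", table=0):
    return (f"http://{host}:8080/restconf/config/opendaylight-inventory:nodes/"
            f"node/openflow:{node_id}/table/{table}/flow/{flow_id}")
```

Each URL can then be driven with curl, typically with OpenDaylight's default HTTP basic auth credentials, PUTting the corresponding JSON flow body.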
You can verify that OpenDaylight populated the switch with the following command:

ubuntu@sw3:~$ sudo ovs-ofctl -O OpenFlow13 dump-flows br0
OFPST_FLOW reply (OF1.3) (xid=0x2):
 cookie=0x0, duration=120.455s, table=0, n_packets=0, n_bytes=0, priority=200,tcp,in_port=2,nw_src=10.10.1.102,nw_dst=10.10.2.101,tp_src=80 actions=set_field:5.5.5.5->ip_src,set_field:ba:74:4c:7a:93:50->eth_dst,output:4
 cookie=0x0, duration=120.392s, table=0, n_packets=0, n_bytes=0, priority=200,tcp,in_port=3,nw_src=10.10.1.102,nw_dst=6.6.6.6,tp_src=80 actions=set_field:10.10.2.101->ip_dst,set_field:5a:f6:c6:6a:db:05->eth_dst,output:1
 cookie=0x0, duration=120.616s, table=0, n_packets=0, n_bytes=0, priority=300,tcp,in_port=3,nw_src=10.10.1.102,nw_dst=10.10.2.101,tp_src=80 actions=set_field:ba:74:4c:7a:93:50->eth_dst,output:4
 cookie=0x2b00000000000000, duration=594.845s, table=0, n_packets=0, n_bytes=0, priority=0 actions=drop
 cookie=0x2b00000000000003, duration=591.004s, table=0, n_packets=0, n_bytes=0, priority=2,in_port=3 actions=output:2,output:1,output:4,CONTROLLER:65535
 cookie=0x2b00000000000001, duration=591.006s, table=0, n_packets=0, n_bytes=0, priority=2,in_port=1 actions=output:2,output:4,output:3,CONTROLLER:65535
 cookie=0x2b00000000000002, duration=591.004s, table=0, n_packets=0, n_bytes=0, priority=2,in_port=4 actions=output:2,output:1,output:3,CONTROLLER:65535
 cookie=0x2b00000000000000, duration=591.006s, table=0, n_packets=0, n_bytes=0, priority=2,in_port=2 actions=output:1,output:4,output:3,CONTROLLER:65535
 cookie=0x0, duration=120.580s, table=0, n_packets=0, n_bytes=0, priority=200,tcp,in_port=1,nw_src=10.10.2.101,nw_dst=10.10.1.102,tp_dst=80 actions=set_field:6.6.6.6->ip_src,set_field:72:57:e7:e1:b4:5f->eth_dst,output:3
 cookie=0x0, duration=120.513s, table=0, n_packets=0, n_bytes=0, priority=200,tcp,in_port=4,nw_src=10.10.2.101,nw_dst=5.5.5.5,tp_dst=80 actions=set_field:10.10.1.102->ip_dst,set_field:fe:c3:2d:75:c2:26->eth_dst,output:2
 cookie=0x0, duration=120.616s, table=0, n_packets=0, n_bytes=0, priority=300,tcp,in_port=4,nw_src=10.10.2.101,nw_dst=10.10.1.102,tp_dst=80 actions=set_field:72:57:e7:e1:b4:5f->eth_dst,output:3
 cookie=0x2b00000000000000, duration=594.845s, table=0, n_packets=20, n_bytes=4000, priority=100,dl_type=0x88cc actions=CONTROLLER:65535
ubuntu@sw3:~$

In addition, you can use the REST API with a browser to see the flows. Be sure to substitute your controller address and switch ID in the URL: http://<controller-ip>:8080/restconf/operational/opendaylight-inventory:nodes/node/openflow:49213347348856/table/0

Observe egress bypass

At this point, you should still have your Python server running on EXT and a snoop running on FW. If not, go to the baseline operations above to set these up. Now, run the PUSH command from the DMZ server and observe the action:

ubuntu@dmz:~$ curl http://10.10.1.102 --upload-file test.txt

Again, we see "SOMETHING WAS PUT" on our EXT server... but this time we do not see traffic on the firewall! Now, let's do a DMZ GET to EXT. In this case, we treat the flow as ingress, even though the DMZ initiates. We use a dummy IP to trigger a flow match. The egress port of the switch will NAT it back to the real destination IP. We see instant feedback on the DMZ console. Go to the EXT server and you will see notice of the GET (note the dummy IP for our server).
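Reading the flow dump above, the dummy-IP GET from the DMZ plausibly looks like the sketch below: 5.5.5.5 is the dummy destination that the priority=200 flow on in_port=4 rewrites to the real EXT address 10.10.1.102. The command is echoed rather than executed, since it only works inside the lab topology.

```shell
# Sketch: ingress-style GET from the DMZ via the dummy IP. Per the flow
# dump (nw_dst=5.5.5.5 -> set_field:10.10.1.102->ip_dst), the switch NATs
# this to the real EXT server. Echoed here as a dry run.
DUMMY_IP=5.5.5.5
CMD="curl http://${DUMMY_IP}/"
echo "$CMD"
```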
Finally, go to the FW snoop shell and you will see that this GET went through the firewall. Before you end the lab, remove the flows:

ubuntu@sw3:~/demo_fw_flows_ravello$ ./remove_flows.sh 49213347348856
> DELETE /restconf/config/opendaylight-inventory:nodes/node/openflow:49213347348856/table/0/flow/909 HTTP/1.1
< HTTP/1.1 100 Continue
< HTTP/1.1 200 OK
> DELETE /restconf/config/opendaylight-inventory:nodes/node/openflow:49213347348856/table/0/flow/808 HTTP/1.1
< HTTP/1.1 100 Continue
< HTTP/1.1 200 OK
> DELETE /restconf/config/opendaylight-inventory:nodes/node/openflow:49213347348856/table/0/flow/707 HTTP/1.1
< HTTP/1.1 100 Continue
< HTTP/1.1 200 OK
> DELETE /restconf/config/opendaylight-inventory:nodes/node/openflow:49213347348856/table/0/flow/606 HTTP/1.1
< HTTP/1.1 100 Continue
< HTTP/1.1 200 OK
> DELETE /restconf/config/opendaylight-inventory:nodes/node/openflow:49213347348856/table/0/flow/505 HTTP/1.1
< HTTP/1.1 200 OK
> DELETE /restconf/config/opendaylight-inventory:nodes/node/openflow:49213347348856/table/0/flow/404 HTTP/1.1
< HTTP/1.1 200 OK
ubuntu@sw3:~/demo_fw_flows_ravello$

For more fun with OpenDaylight, see Solers' presentation at OpenDaylight. You can find the PowerPoint here or the video here.

Author: John Sobanski John Sobanski (Sr. Systems Architect) has been with Solers, Inc. for over ten years. John enjoys architecture, business development and machine learning. He has been an early...

How to setup your ESXi lab for upgrading from VMware vSphere 5.5 to 6.0

With the new release of VMware vSphere 6.0, many organizations are thinking about upgrading their existing 5.5 environments to 6.0. However, upgrading multi-host ESXi environments running production systems is not an easy task. Most IT administrators would like to perform the upgrade in a controlled lab environment so they can practice the upgrade steps, create a run book, and then do the actual upgrade in their data center environment. The challenge is that it takes a long time to procure hardware and set up isolated multi-host ESXi environments that can be used as test labs for performing upgrades. Ravello Systems allows you to run nested ESXi on the public clouds AWS and Google Cloud. In this blog, we will describe how you can practice the 5.5 to 6.0 upgrade in ESXi lab environments created on public clouds. VMware vSphere can be deployed with either a Windows-based vCenter or the Linux-based VMware vCenter Server Appliance (VCSA). In this document we'll discuss the upgrade procedure for both platforms and the advantages and disadvantages of each. In addition, we'll cover the ways to easily upgrade ESXi from 5.5 to 6.0 and when to use which method. Let's assume that our existing ESXi 5.5 environment, which is to be upgraded to 6.0, consists of the following components:

One preinstalled ESXi host to host the vCenter 6 VM with the following specs: 2 CPUs, 12GB memory, 150GB storage. For more information on hardware requirements, see here.
Any other ESXi hosts, if you are already running these in your current lab environment or wish to install or upgrade additional ESXi hosts.
One preinstalled vCenter 5.5 server. This can be running either on Windows or as the VCSA. For the Windows vCenter, this VM can run directly on Ravello. If you are running the VCSA, it should run on a nested ESXi host, and you should keep in mind the additional resources required to host the new VCSA.
Fully resolvable DNS, both forward and reverse.
Keep in mind that the Ravello DNS currently does not perform reverse lookups, so you'll need an external DNS server such as Microsoft AD DNS or a Linux-based DNS server such as BIND or PowerDNS. First, set up an ESXi 5.5 environment which mirrors your existing 5.5 setup in your data center. Follow the instructions in this blog to set up this environment. Then, execute a test upgrade in this isolated 5.5 replica on Ravello, document the steps, and you can then run them in your data center for the actual upgrade.

Upgrading Windows-based vCenter

Upgrading the Windows-based vCenter from 5.5 is a relatively simple process. First, we'll start with the requirements:

Ensure your operating system is compatible. Any Windows Server version from 2008 R2 up to 2012 R2 is supported.
Verify that the hostname of your vCenter server is resolvable, both forward and reverse. You can test this by starting a command prompt on your Windows server and running the following commands:

nslookup yourvcenter.host.name
nslookup yourvcenter.ip.address

The first command should return the IP address of the vCenter, which should match the address you query in the second command. The second command should in turn return the hostname you entered in the first command.

After this has been completed, we can start the actual upgrade. First, attach the ISO image of the vCenter installer to your Ravello Windows vCenter VM. Start the autorun.exe located on the DVD and begin the installation. Next, accept the terms of the license agreement and continue. Provide your administrator credentials for vCenter Single Sign-On. Accepting the default ports is recommended, so for this setup we'll do that. If you change any default ports, take note of the custom ports in case you need them later. Again, accepting the default install locations is recommended; if you wish to change these locations, note them down.
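The forward-lookup half of the DNS check described above can also be scripted. This is a sketch for a Linux host using getent (demonstrated against localhost so it runs anywhere); on the Windows vCenter server itself you would stick with nslookup as shown.

```shell
# Sketch: scriptable forward-lookup check (Linux, getent assumed available).
# On the appliance, pass your vCenter hostname instead of localhost.
check_forward() {
  local host=$1 ip
  ip=$(getent hosts "$host" | awk '{print $1; exit}')
  [ -n "$ip" ] || { echo "forward lookup failed for $host" >&2; return 1; }
  echo "$host -> $ip"
}
check_forward localhost
```

A matching reverse check would feed the returned IP back through getent and compare the hostname.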
Confirm that you have backed up your vCenter, select upgrade, and you are on your way. The upgrade will take anywhere between 10 and 45 minutes depending on a variety of factors, so this is the time to get some coffee and watch the progress bar. After the upgrade is complete, you should be presented with the following screen: Launch the web client (or, if you have already configured ports to be forwarded in Ravello, connect from your own computer) and try to log in with your administrator@vsphere.local credentials. If the vSphere web client doesn't load, ensure that you have the Desktop Experience role installed, which is required to load all vSphere web client components. This can be done through the "Add roles and features" wizard included in Windows, or more easily through the following PowerShell command: Install-WindowsFeature Desktop-Experience. After logging in, you will be presented with your new vCenter 6 web client. If you open Help -> About VMware vSphere in the top right, you should see the new version number. And that's it for the Windows vCenter upgrade! Afterwards, you can proceed with "Upgrading ESXi" if you wish to upgrade your ESXi hosts to version 6 as well.

Upgrading VMware vCenter Server Appliance

Upgrading the vCenter Server Appliance, while not much more complicated, follows a slightly different procedure from the Windows vCenter Server. Instead of performing an in-place upgrade, a new appliance is deployed, configured and started, after which the old appliance is disabled and IP addresses are swapped. Since we currently cannot install the VCSA 6 appliance directly on Ravello, we'll need to run it nested on ESXi. First off, we'll need to validate the requirements before upgrading the VCSA: Verify that the hostname of your existing 5.5 vCenter server is resolvable, both forward and reverse. You can test this by starting a command prompt on your Windows server and running the following commands:
nslookup yourvcenter.host.name
nslookup yourvcenter.ip.address

The first command should return the IP address of the vCenter, which should match the address you query in the second command. The second command should in turn return the hostname you entered in the first command. Ensure you have an ESXi host running on Ravello which is configured to run virtual machines and matches the hardware requirements mentioned above. In addition, this ESXi host needs a port group in the same network as the current VCSA. Ensure you have a Windows machine running in Ravello to perform the upgrade. This can be a temporary virtual machine, but currently a Windows machine is required for the VCSA upgrade procedure. Connect the vCenter 6 ISO to your Windows machine and log in to the desktop, preferably through RDP. If you are running a server OS, ensure that the Desktop Experience is installed, through either the "Add roles and features" wizard included in Windows or the following PowerShell command: Install-WindowsFeature Desktop-Experience. Open the VCSA directory on the DVD, then run the VMware-Clientintegrationplugin.msi installer. Follow the installation and reboot your machine if required, then open the root directory on the DVD. Open the vcsa-setup.html file, which should start your browser and launch the install page. If you get any popups about allowing the VMware Client Integration Plugin, click accept. Click upgrade, then select "continue upgrade". Accept the license agreement, then enter the details of the ESXi host you will be deploying to. This should be the ESXi host you have already deployed in Ravello, but it can be one already managed by your current VCSA. Accept the certificate when given the warning. Then, enter the name of your virtual appliance. This name should match the name of your existing vCenter appliance. Then, configure your source vCenter, being the vCenter that you are upgrading from.
Enter your old VCSA hostname, password, SSO port and the hostname, username and password for the ESXi host your current VCSA is running on. Select your appliance size and datastore. For the appliance size there are few reasons to use anything but tiny when running in a lab environment. As the last step, configure the temporary network. The temporary network is the port group on your new ESXi host which should be reachable from your other VCSA appliance. For the network address, the subnet mask and the gateway, keep in mind that these are not the IP addresses of your new vCenter, but a temporary IP address that will be used while the new VCSA is migrating data from the old VCSA. As such, it should not be an existing IP address. In addition, ensure DNS servers are entered and that you can resolve the hostname of your old vCenter and ESXi host on these DNS servers, otherwise your installation will fail. Review the settings, click complete and wait until the installation is done. This can take anywhere between 15 and 90 minutes. Keep in mind that your browser might not always refresh, but if you wish to follow the status you can always open the console of your new VCSA VM. After the upgrade is complete, you should be presented with the following screen: Launch the web client (or, if you have already configured ports to be forwarded in Ravello, connect from your own computer) and try to log in with your administrator@vsphere.local credentials. If the vSphere web client doesn't load, ensure that you have the Desktop Experience role installed, which is required to load all vSphere web client components. This can be done through the "Add roles and features" wizard included in Windows, or more easily through the following PowerShell command: Install-WindowsFeature Desktop-Experience. After logging in, you will be presented with your new vCenter 6 web client. If you open Help -> About VMware vSphere in the top right, you should see the new version number.
And that's it for the vCenter Server Appliance upgrade! Afterwards, you can proceed with "Upgrading ESXi" if you wish to upgrade your ESXi hosts to version 6 as well.


How to model and test NFV deployments on AWS & Google Cloud

Author: Hemed GurAry, CISSP and CISA, Amdocs. Hemed GurAry is a Cloud and Security architect with Amdocs. Hemed specializes in network and application architecture for finance and telcos, bringing experience as a PMO and a leading team member in key projects. His ongoing passion is hacking new technologies.

Network Function Virtualization has taken the networking world by storm. It brings to the table many benefits, such as cost savings, network programmability and standardization, to name a few. Ravello, with its nested virtualization, software-defined networking overlay and easy-to-use 'drag and drop' platform, offers a quick way to set up these environments. With Ravello being a cloud-based platform, it is available on demand and opens up the opportunity to build sophisticated deployments without having to invest time and money to create an NFVI from scratch. This three-part blog series will walk you through how to build a complete NFV deployment on Ravello with a working vFW service chain on board. The deployment will be based on Juniper Contrail and OpenStack, comprising three nodes. In this part, we will start by installing and configuring the NFV setup.

Deployment Architecture

VMs: Start with three empty virtual servers; each server has the following properties: 4 CPUs, 32GB of memory, 128GB of storage and one network interface. Note: it's important to define a hostname and use static IPs for each server to preserve the setup's state.

Software: The following software packages are used in this tutorial:
Ubuntu Precise Pangolin Minimal Server 12.04.3
Juniper Contrail release 2.01 build 41 + OpenStack Icehouse
Cirros 0.3.4

Network: The three virtual servers running on Ravello are connected to our underlay network, CIDR: 10.0.0.0/24.
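Since each server needs a static IP on the 10.0.0.0/24 underlay, the addressing can be pinned in /etc/network/interfaces on Ubuntu 12.04. This is a sketch written to a local file for illustration; the 10.0.0.10 host address and 10.0.0.1 gateway are assumptions, not values from this setup.

```shell
# Sketch of static addressing for Ubuntu 12.04 (/etc/network/interfaces).
# Written to a demo file here; host address and gateway are assumed.
cat > interfaces.sketch <<'EOF'
auto eth0
iface eth0 inet static
    address 10.0.0.10
    netmask 255.255.255.0
    gateway 10.0.0.1
EOF
cat interfaces.sketch
```

Repeat per server with a unique address, and set the hostname in /etc/hostname as noted above.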
Three overlay networks were configured in our Contrail WebUI:
Management – 192.168.100.0/24
Left – 10.16.1.0/24
Right – 10.26.1.0/24

Configuration Steps

Below are step-by-step instructions on how to configure the setup:
Set up the VMs and install the operating system
Download the Contrail packages and install the controller node
Populate the Fabric testbed.py
Install packages on the compute nodes and provision the setup
Run the setup self-test

Step 1: Set up the VMs and install the operating system

We will start by configuring the Ravello application, setting up the VMs and installing the operating system on each VM. This guide focuses on elements specific to Contrail, so if you don't know how to build a Ravello application, please refer to the Ravello User Guide first. It is also assumed you are able to install Ubuntu on the servers, either by installing it yourself or by using a preconfigured image. We installed Ubuntu 12.04.3 on an empty Ravello image and then reused a snapshot. The following properties are the same for all the VMs:
CPUs: 4
Mem size: 32GB
Display: VMware SVGA
Allow nested virtualization: Yes
Disk: hda


Non-dummies guide to nested ESXi lab on Ravello

Ravello’s nested ESXi offering has been out for quite some time. With more and more users, use cases and advanced setups created on a regular basis, we wanted to make sure you know where to find guides and tools to help you quickly run your VMware vSphere/ESXi lab on Ravello.

Before you get started: do you really need to install ESXi?

The first thing to do before you get started is make sure you really need to run ESXi in the cloud. VMware applications can, and have been, running successfully natively on Ravello’s HVX right from the start: SharePoint environments, SAP, .NET and other enterprise applications, and even virtual networking and security appliances, should usually run natively on Ravello. We previously published a blog to help you figure out whether you can run your VMware application on Ravello’s HVX or whether you need to install the ESXi hypervisor. If you need help, feel free to email us with your use case.

Nested ESXi: setting up

If you determined that running the hypervisor is indeed required, we’ve provided a few how-to guides to set up your basic lab and get going:
Install and configure ESXi on the public cloud: upload the ESXi ISO to Ravello, install ESXi, configure ESXi and save your ESXi host to your Ravello VM library.
Install and configure VMware vCenter 5.5 server on the public cloud: upload the vCenter Server appliance to Ravello, create the vCenter VM in Ravello, configure it to run on Ravello, save it to your VM library.
Set up a full VMware datacenter in a public cloud: create a data center, configure ESXi hosts to use NFS, create VMs to run on the VMware cluster, set the VMs' start and shutdown order, save the application blueprint.

Advanced setups: VPNs, NFS, DHCP for 2nd-level guests and more

Now that you’ve got your basic setup going, you will probably want to add some more advanced elements to your lab environment.
Here are a few step-by-step guides to start with:
Build simple shared storage using an NFS server: simply install and configure your NFS server and save it to your VM library.
Set up DHCP for 2nd-level guests running on ESXi: since Ravello is unaware of the 2nd-level guests running on your ESXi hypervisor, those guests cannot reach Ravello’s DHCP server by default. Here you’ll learn how to define the networking in Ravello and your vSphere environment to support another DHCP server, and how to install and configure your own 2nd-level guest DHCP server VM to service the other guests.
Set up a VPN connection to an environment running in Ravello from a vanilla pfSense image: a step-by-step guide through the scenario where one environment is running in the cloud with Ravello, and another can be in an on-premises data center, in a VPC in AWS, etc.
Build a 250-node VMware vSphere/ESXi lab environment in AWS for testing: this large-scale ESXi data center in AWS, which costs less than $250/hr, is a guide useful for enterprises upgrade-testing their VMware vSphere environments or for new product and feature testing.

Additional VMware products how-tos

Install and run a VSAN 6.1 environment on AWS or Google Cloud: we created this guide to facilitate testing out new features and showcasing storage management products working with this VSAN release. We walk you through configuring your VSAN environment and saving the setup as a blueprint in your Ravello library. This is very useful, for example, for demo and POC environments that can be provisioned in minutes.
Install VMware NSX 6.2: software-defined networking is an essential component of the software-defined data center. While installing NSX on a "normal" platform can be resource-intensive and time-consuming, it is valuable as it enables you to virtualize your networking infrastructure.
Learn how, by provisioning NSX on Ravello, you can install it once and re-deploy any time, greatly reducing the time required to set up a new testing, demo or PoC environment.
Install and configure vRealize Automation and test orchestration scripts: a simple deployment to try out vRA, test upgrade scenarios and new features, develop new customizations and more. The setup contains the vRealize appliance, an identity appliance, an IaaS server, a domain controller and an orchestrator appliance, and, as an option, a Windows vCenter server and two ESXi hosts to test the deployment of virtual machines.

I hope this brief description of the guides we’ve put together will help you quickly find your way to what you’re looking to run on Ravello. Feel free to comment here with other products you’d like to have guides for, or tell us about the setups you’ve built in your lab.


How to integrate Brocade SDN Controller with OpenStack on AWS & Google

Brocade’s SDN Controller and OpenDaylight controller are excellent options for companies that are looking to bring virtual network services to OpenStack. In the past year, Brocade has made significant investments to improve the integration between OpenDaylight and OpenStack – including a more complete interface to Neutron and OVSDB, and touching on policy, topology, provisioning and additional southbound plugins.

Brocade's SDN Controller with OpenStack

Ravello, with its nested virtualization and networking overlay, serves as an excellent platform for modeling, building and testing Network Function Virtualization (NFV) and SDN topologies – such as orchestration of network services using OpenDaylight on OpenStack Neutron – before production roll-outs. Network ISVs and enterprises alike can use Ravello to jump-start their NFV PoCs and deployments by accessing data-center networking on AWS & Google Cloud without having to wait for hardware resources and without incurring CapEx. Alec Rooney from Elbrys Networks has written a detailed article on how to set up a fully functional Brocade SDN / OpenDaylight Controller integrated with OpenStack on Ravello, and he has a video to walk you through the steps. Interested in setting up your very own OpenDaylight OpenStack integration? Just open a Ravello account and follow the instructions.


Malware analysis using REMnux on AWS

Calling all malware analysts! We are proud to share that REMnux is now available on Ravello Repo. Using Ravello’s nested virtualization and networking overlay technology, it is now possible to run REMnux in an isolated sandbox environment for malware analysis on public clouds like AWS. For the uninitiated, REMnux is a Linux toolkit that helps malware analysts reverse-engineer malicious software. At the heart of this toolkit is the REMnux Linux distribution, based on Ubuntu. REMnux incorporates many tools for analyzing Windows and Linux malware, examining browser-based threats such as obfuscated JavaScript, exploring suspicious document files and taking apart other malicious artifacts. Using REMnux, forensic investigators and incident responders can intercept suspicious network traffic in an isolated lab when performing behavioral malware analysis.

Get it on Repo

REPO by Ravello Systems is a library of public blueprints shared by experts in the infrastructure community. The REMnux blueprint in the Ravello repo is configured to use 1 CPU, 8GB RAM and 14GB storage to optimize resource utilization while keeping costs low if you’re running a single REMnux virtual appliance as part of the application. Also, the blueprint requests a publicly accessible IP address and allows your REMnux virtual appliance to be reachable on TCP ports 22, 25, 80 and 443. You might want to tweak these settings based on your needs. Ravello lets you add other VMs to the environment and tweak many network parameters. To run REMnux, please open a Ravello trial account and add the REMnux blueprint to your library. Read more on the REMnux blog on how to get your REMnux environment running on Ravello.


How to install and configure vRealize Automation (vRA) and test your orchestration scripts - Part 1

Whether you are installing and configuring VMware’s vRealize Automation (vRA) for the first time or need a lab to test your automation and orchestration scripts, you will find this step-by-step guide useful. Instead of relying on spare hardware, I will be deploying this in a Ravello lab, which runs on AWS/Google Cloud. Since I can install ESXi on Ravello, I’ll be treating it just like my data center, so the steps will be similar after that. On a side note, you might want to refer to our previous posts about setting up labs for VSAN, NSX or just vCenter on Ravello and see what the VMware community is saying about it. You can use this guide to try out the vRA product, test-drive upgrade scenarios, test new features, or develop and test new customizations without requiring the resources of a physical environment. vRealize Automation can be deployed in a multitude of ways, but for this setup we’ll keep the deployment as simple as possible, without any failover or redundancy capabilities. After setting up the simple deployment of vRealize Automation, configuring a highly available setup is left as an exercise for the reader. For the deployment of vRealize Automation we’ll need the following components:

Windows domain controller or LDAP server.
Windows vRealize IaaS & SQL (Express) server.
vRealize virtual appliance.
Optional: vRealize identity appliance.
Optional: vRealize orchestrator appliance.
Optional: vCenter server. This can be either the vCenter appliance or the Windows virtual machine, in which case you could use the machine already provisioned for Active Directory.
Optional: one or more ESXi hosts.

Currently, my vRealize lab in Ravello looks as follows. It contains the vRealize appliance, an identity appliance, an IaaS server, a domain controller and an orchestrator appliance. It also includes a Windows vCenter server and two ESXi hosts to test the deployment of virtual machines.
This, however, is completely optional and not required for the deployment of vRealize Automation. vRealize Automation can be used for a multitude of tasks. One of these is the deployment of virtual machines on vSphere, vCloud Director, OpenStack, Amazon Web Services or a variety of other public or private cloud providers. For this you’ll need some kind of cloud provider or virtualization platform. The other functionality is the provisioning of advanced services through a workflow engine called vRealize Orchestrator. This allows you to provision miscellaneous services through tools such as PowerShell, bash, REST APIs or a multitude of plugins available for various products. You can test both of these features in a Ravello lab. Ravello’s nested virtualization with hardware acceleration enables you to run OpenStack and ESXi environments on AWS and Google Cloud. You can also run Exchange and other Windows and Linux systems as VMs to test out vRA orchestration capabilities.

Deployment

This deployment presumes that you already have a vCenter server running. If you are not using vCenter in this lab, you’ll have to deploy the identity appliance.

Pre-deployment notes

For all Linux-based appliances we’ll need to change the compliance check to make sure the appliance boots automatically on Ravello. This can be done in the following way:
Log in to the appliance using ssh or the console
Run vi /etc/init.d/boot.compliance
Change line 47 (add "-q"): from MSG=`/usr/bin/isCompliant` to MSG=`/usr/bin/isCompliant -q`
Change line 48 (substitute "0" for "$?"): from CODE=$? to CODE=0
Save the changes you made in /etc/init.d/boot.compliance

In addition, all appliances and servers should point to the same NTP time source. In virtual machines, this can be configured through the OS settings; in virtual appliances it can be configured through the port 5480 VAMI interface under admin -> time settings.
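The two line edits to boot.compliance described above can also be applied non-interactively with sed instead of vi. This sketch runs against a local demo copy so it can be tried safely; on the appliance the target would be /etc/init.d/boot.compliance, and backing it up first is advisable.

```shell
# Sketch: the same two boot.compliance edits via sed, demoed on a local
# copy containing the two lines from the instructions above.
cat > boot.compliance.demo <<'EOF'
MSG=`/usr/bin/isCompliant`
CODE=$?
EOF
sed -i \
  -e 's|isCompliant`|isCompliant -q`|' \
  -e 's|CODE=\$?|CODE=0|' \
  boot.compliance.demo
cat boot.compliance.demo
```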
vRealize Identity Appliance

To deploy the identity appliance, you’ll have to convert it from stream-optimized to non-stream-optimized format before it can be uploaded to the Ravello content library. A detailed procedure on how to do this can be found here. After the appliance has been deployed, log in to the console with root and a blank password and change the password by running passwd. Then, run /opt/vmware/share/vami/vami_config_net to configure your network. Lastly, configure a service in Ravello on the identity appliance to open port 5480. After doing this, you can log in to the VAMI interface through https://your-public-ip:5480 to configure the rest of the identity appliance. Open the SSO tab and enter a domain and password. Move on to "Host Settings" and enter an SSO hostname. Keep in mind that this name should be the same as the hostname registered in either the Ravello DNS or your AD DNS. Open the SSL tab and either select "Generate a self-signed certificate" or "Import a PEM encoded certificate" if you have your own SSL certificate. Enter your certificate details and apply. After a short while the certificate will be generated. Of note here is that the common name should match the SSO hostname you entered earlier. Lastly, if you have Active Directory, open the Active Directory tab and enter your domain information. This is not a required step, since you can configure AD authentication in vRealize Automation afterwards.

vRealize Appliance

After your identity appliance is configured, move on to the vRealize Automation appliance. This can be downloaded from the VMware site as an OVA file. Rename the OVA file’s extension to .zip and extract the OVF, which you can then upload to the Ravello content library. After powering on the appliance, we’ll have to configure it. Log in to the console with root and a blank password and change the password by running passwd. Then, run /opt/vmware/share/vami/vami_config_net to configure your network.
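As background to the rename-to-.zip extraction step above: an OVA is simply a tar archive of the OVF descriptor, manifest and disk images, which is why extractors that also understand tar can open it. On Linux, tar handles it directly. The demo below builds a synthetic archive so it runs anywhere; the file names are illustrative, not the real appliance names.

```shell
# Demo: an OVA is a tar archive, so tar can list and extract it directly.
# Synthetic archive with illustrative contents (an .ovf and a .vmdk).
mkdir -p ova-demo
touch ova-demo/appliance.ovf ova-demo/disk1.vmdk
tar -cf appliance.ova -C ova-demo appliance.ovf disk1.vmdk
tar -tf appliance.ova    # list contents; tar -xf would extract them
```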
Lastly, configure a service in Ravello on the vRA appliance to open port 5480. After doing this, you can log in to the VAMI interface through https://your-public-ip:5480 to configure the rest of the vRA appliance. Open the vRA settings tab and configure the host settings. Select the "Update host" option and enter the hostname. Personally, I prefer to set this to an external DNS name if you will be accessing your lab environment from outside. This can be either the DNS name Ravello gave you (found in the summary of the virtual machine) or a CNAME record pointing to your Ravello DNS name or IP. Select "Generate Certificate" or "Import" depending on whether you have a pre-signed certificate or not. Keep in mind that the common name should exactly match the hostname you entered above. The process can take a few minutes, so take a coffee break, and after the service is configured move on to the SSO tab. Enter your SSO host here. Depending on whether or not you chose to use an identity appliance, this should be either your identity appliance’s hostname (the same as you configured in the appliance) or your vCenter hostname. The port should be 7444 for vCenter 5.5 or the identity appliance, and 443 if you are running vCenter 6. Enter your administrator user, default tenant (depending on what you configured in vCenter or the identity appliance; administrator and vsphere.local by default) and your password. After waiting a few minutes, SSO should return an OK status and will have been configured. Move on to the licensing tab and enter your license code. This is required to even run vRealize Automation, but you should be able to get a trial license. Open the "IaaS install" page and download the IaaS installer to your vRealize Automation server. Leave the rest of the settings at their defaults.

vRealize IaaS

This part presumes that you have installed SQL Express or SQL Server already.
If you haven't done so yet, install this first before proceeding. Start by downloading the vRealize Automation prereq script. Run the script and follow the instructions, after which your server should be correctly configured to install vRealize Automation in the easiest way possible. After preparing the server, start the installer you downloaded from the vRA appliance earlier. Enter the credentials for your vRA appliance (the root password you set earlier). Then, ensure that all the prerequisites are met. If the prerequisite checks complain about the Windows firewall and you've ensured that the firewall is either off or the ports are opened correctly, select "Bypass" to ignore these checks. Enter a password for your user account, a decryption key for the database, and your SQL server. When configuring the DEM worker, select "Install and configure vSphere agent" and note down the values of the vSphere agent name and the endpoint name, since you'll need those when adding a vSphere backend. I usually name my vSphere agent the same as the FQDN of my vCenter server, but you can call it anything you want, as long as the name of the endpoint configured in vRealize Automation is the same. On the component registry page, click Load at the default tenant to load the tenant information. Download the certificate and select "Accept Certificate". Enter your SSO credentials (administrator@vsphere.local by default) and click Test. Then, enter the hostname of your IaaS server (this needs to be DNS-resolvable) and click Test. After all these steps have been performed, the installation starts, and after about 10-15 minutes you should have a working vRealize Automation setup. Starting the services initially can take quite a bit of time, so some patience is required, but after 10-15 minutes you should be able to log in to the vRealize appliance interface at https://your-vra-hostname/vcac.
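Several of the prerequisite failures above come down to blocked ports (5480 for the VAMI, 443 for the appliance, the SQL port for IaaS). As a rough pre-flight illustration, a small Python helper like the one below, which is not part of the vRA tooling, can confirm TCP reachability before you start the installer; the addresses are placeholder documentation IPs, so substitute your own lab hosts.

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical hosts/ports for a vRA lab; adjust to your environment.
checks = {
    ("192.0.2.10", 5480): "VAMI interface (example appliance IP)",
    ("192.0.2.10", 443): "vRA web interface",
    ("192.0.2.20", 1433): "SQL Server on the IaaS host",
}

for (host, port), role in checks.items():
    status = "open" if port_open(host, port) else "unreachable"
    print(f"{role} {host}:{port} -> {status}")
```

Running this from the IaaS server before launching the installer tells you whether a "Bypass" on the firewall check is actually safe.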
If you've forwarded port 443 to the vRA appliance (not the IaaS server), the console will be accessible through https://your-vra-public-hostname/vcac. This concludes the initial setup of the vRealize Automation environment. In the next part we'll continue with the vRealize Automation configuration and the deployment of virtual machines on various cloud platforms.


Demonstrating SD-WAN with ease on AWS using Ravello

Author: Matt Conran Matt Conran is a Network Architect based out of Ireland and a prolific blogger at Network Insight. In his spare time he writes on topics ranging from SDN, OpenFlow, NFV, OpenStack, Cloud, Automation and Programming. Software-defined WAN (SD-WAN) has gained ground in recent years. SD-WAN technology brings many benefits to the forefront, ranging from lower cost and increased flexibility to reduced complexity of the overall branch office network. In addition to larger players such as Cisco, which launched its IWAN (Intelligent WAN) solution some years back, and Citrix with its CloudBridge offering, this domain has seen many new entrants: CloudGenix, VeloCloud, Viptela, Talari Networks and Aryaka, to name a few. These networking companies need an environment to demonstrate the value gained from SD-WAN, and Ravello's Network Smart Labs offers the perfect environment to do so. The following describes in detail how to set up Performance Routing (PfR), one of the cornerstones of Cisco's IWAN solution, to show SD-WAN in action. It also includes a base blueprint built with Cisco CSR1000v on Ravello Networking Smart Labs. Get it on Repo. REPO, by Ravello Systems, is a library of public blueprints shared by experts in the infrastructure community.

WAN edge

The WAN edge is one of the most important functional blocks in the network. It is also one of the hardest areas to design. Traditional WANs are based around Border Gateway Protocol (BGP), which is used to peer with other BGP speakers in a remote AS. BGP is a policy-based routing protocol that allows you to tailor outbound traffic with a variety of metrics. It has proved to be the de facto WAN protocol and is great for reducing network complexity. However, by default, BGP does not take transit performance into account or detect transitory failures. It misses the shape of the network and cannot dynamically adjust the routing table based on real-time events.
To enhance performance, many WAN protocols are manually combined with additional mechanisms such as IP SLA and Enhanced Object Tracking, but these add to configuration complexity. Also, traditional routing is destination-based only, which prohibits any type of granular forwarding. All these factors have made the WAN edge a cumbersome module in the network. Flow and application awareness are needed to meet today's application requirements. We need additional insight into the protocols crossing the WAN edge in order to make intelligent routing decisions.

Performance Routing (PfR)

Performance Routing (PfR), formerly known as Optimized Edge Routing (OER), enhances the WAN and adjusts traffic flows based on real-time events. It adds intelligence to the network and makes the WAN intelligent and dynamic. It doesn't replace classic IP routing; it augments it and adds application awareness. PfR can select an egress or ingress interface based upon characteristics like reachability, jitter, delay, MOS score, throughput and monetary cost. PfR gains its intelligence by automatically collecting statistics with Cisco IP SLA for active monitoring and NetFlow for passive monitoring. There is no need to manually configure NetFlow or IP SLA; they are implemented automatically by the PfR network. Link and path information is analysed by a central controller, known as the PfR Master Controller (MC); a decision is made based on predefined policy, and an action is then carried out by the local Border Routers (BR). The MC is where all the decisions are made. It's similar to an SDN controller, but IOS-based. It does not participate in any data plane forwarding, only control plane services, similar to how a BGP route reflector sits in the network. All policies are configured on the controller, such as preferred link and path parameters. It gathers information from the BR edge nodes and determines whether or not traffic classes are in or out of policy.
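The MC's in-policy/out-of-policy decision can be thought of as comparing measured per-link statistics against configured thresholds and then choosing an exit. The Python sketch below is purely a conceptual model of that logic, not Cisco's implementation; the metric names and threshold values are invented for illustration.

```python
# Conceptual model of a PfR-style policy check: a controller compares
# per-link measurements (IP SLA / NetFlow style statistics) against policy.
POLICY = {"max_delay_ms": 100, "max_jitter_ms": 30, "max_loss_pct": 1.0}

def in_policy(link_stats: dict) -> bool:
    """Return True if a link's measured stats satisfy every policy threshold."""
    return (link_stats["delay_ms"] <= POLICY["max_delay_ms"]
            and link_stats["jitter_ms"] <= POLICY["max_jitter_ms"]
            and link_stats["loss_pct"] <= POLICY["max_loss_pct"])

def pick_exit(links: dict) -> str:
    """Prefer the first in-policy link; fall back to the lowest-delay link."""
    for name, stats in links.items():
        if in_policy(stats):
            return name
    return min(links, key=lambda n: links[n]["delay_ms"])

links = {
    "SP1": {"delay_ms": 180, "jitter_ms": 12, "loss_pct": 0.2},  # out of policy (delay)
    "SP2": {"delay_ms": 40, "jitter_ms": 5, "loss_pct": 0.0},    # in policy
}
print(pick_exit(links))  # -> SP2
```

In the real system the enforcement step that follows this decision (route or PBR injection on the BR) is what actually moves the traffic.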
If traffic is not in policy, it can instruct the BR to carry out route injection or dynamic PBR injection and use an alternative path. The PfR BR sits within the data plane and participates in traffic forwarding. It is an edge router with one or more exit links to an ISP. The MC doesn't make any changes itself; it is the BR that actually implements the enforcement. A BR can be enabled on the same router as an MC, or it can be separate. All information between the MC and BR is protected with key chains. PfR is a useful tool to have in any network. It has an observe mode, which lets the PfR nodes analyse path and link characteristics and report back for analysis. There is also a route control mode: if the controller determines there is an out-of-policy event, it can steer the routing table towards a more preferred path.

IWAN Lab Setup on Ravello

The Ravello lab consists of two LAN networks separated by a core. There is a jump host that has access to all nodes, and it's here that external connectivity is permitted. On LAN1 we have 2 x BR and 1 x MC. The MC and BR device functionality are combined on BR2. Each BR has two uplinks to the core nodes, SP1 and SP2. OSPF is running internally, and redistribution of connected subnets is used for transit link reachability. On LAN2 we have a single BR and the MC component on BR3. OSPF is also running in the internal LAN, redistributing connected subnets for transit link reachability. There are a number of test networks on SP1 and SP2: 12.0.0.1/24 and 13.0.0.1/24 on SP1; 14.0.0.1/24 and 15.0.0.1/24 on SP2. These networks are pingable from the LAN routers and can be used for reachability and performance testing.

Configuring the Nodes

The first thing to do is set up a keychain so the BR and MC devices can communicate. All communication between the BR and MC is protected. The authentication key must be configured on both the Master Controller and the Border Router.
key chain PFR
 key 1
  key-string CISCO

A PfR network must have at least two exit interfaces, and these must be explicitly configured on the MC. Logging is also turned on. On BR1, interfaces GigabitEthernet3 and GigabitEthernet4 directly connect to SP1 and are specified as external.

pfr master
 logging
 border 150.1.3.3 key-chain PFR
  interface GigabitEthernet4 external
  interface GigabitEthernet3 external
  interface GigabitEthernet1 internal

On BR2, interfaces GigabitEthernet3 and GigabitEthernet4 directly connect to SP2 and are specified as external.

 border 150.1.4.4 key-chain PFR
  interface GigabitEthernet1 internal
  interface GigabitEthernet3 external
  interface GigabitEthernet4 external

On BR3, interfaces GigabitEthernet1 and GigabitEthernet2 directly connect to SP1 and SP2 and are specified as external.

 border 150.1.5.5 key-chain PFR
  interface GigabitEthernet1 external
  interface GigabitEthernet2 external
  interface GigabitEthernet4 internal

Once completed, set up the BR functionality on BR1, BR2 and BR3. The loopback addresses are reachable from the internal LAN of each node.

pfr border
 local Loopback0
 master 150.1.X.X key-chain PFR

The command show pfr master displays the status of the BR connectivity and also the default settings. Notice that the default mode is mode route control. Both LAN routers have reachability to the test prefixes 12.0.0.1 through 15.0.0.1. Use these endpoints to test PfR functionality. As a test, under the pfr config, change the external interfaces to max-xmit-utilization absolute 1:

 border 150.1.4.4 key-chain PFR
  interface GigabitEthernet4 external
   max-xmit-utilization absolute 1
  interface GigabitEthernet3 external
   max-xmit-utilization absolute 1
  interface GigabitEthernet1 internal

Send large packets from LAN1 to 14.0.0.1 and telnet to the prefix from a different host. The IP 14.0.0.1 is on SP2. The large pings trigger the out-of-policy event and the telnet triggers the NetFlow collection. You will notice that the prefix 14.0.0.1 is now out of policy.
The complete configuration for this setup is available on GitHub.

Conclusion

Ravello Network Smart Labs offers a unique way for SD-WAN solution providers and their ecosystem of resellers and trainers to show their technology in action. Interested in playing with this blueprint? Just open a Ravello account and add this blueprint to your library.


How to Install Ixia BreakingPoint Virtual Edition: Hands on lab

Author: George Zecheru George Zecheru is a Senior Product Manager at Ixia responsible for the Applications & Security portfolio. The owner of a patent, George has over 13 years of experience in the telecommunications industry. Once upon a time, all you needed to protect your network was a simple firewall. As Internet adoption increased, the protection provided by firewalls was soon discovered to be inadequate against the increased sophistication of today's threats. Security vendors have responded with improved protection mechanisms, pushing inspection all the way up to the "content" (application) layer. Today's NGN firewalls are equipped with the intelligence to detect and prevent intrusion attempts and to identify malicious files, applications, users and devices. Ixia's BreakingPoint is the industry's leading application and security test solution, used to validate the stability, performance and security of the new generation of content-aware devices, including NGN firewalls, web application firewalls, IDS/IPS, DLP, lawful intercept systems, URL filtering, anti-spam, anti-DDoS, application delivery controllers and WAN accelerators. The BreakingPoint solution recreates every aspect of a realistic network, including scale and content. Ixia's Global Application and Threat Intelligence (ATI) program fuels BreakingPoint with the intelligence required to simulate realistic traffic conditions and relevant attacks. All this intelligence is consolidated into a large database of applications and various attacks (exploits, malware, botnets and DoS/DDoS). Ravello's networking overlay makes it possible to create full-featured network and security labs on the public cloud. With clean Layer 2 networking access, enterprises, ISVs and their resellers have adopted Ravello for a variety of use cases: network modeling, development-testing, training, sales demos, PoCs, cyber ranges and security sandbox environments, to name a few.
This blog covers the configuration steps required to set up BreakingPoint VE on Ravello's software-defined overlay and complement your existing network security labs. Using Ixia's BreakingPoint VE (Virtual Edition) on Ravello you can: Conduct enticing demos by recreating every aspect of a realistic network; Understand your network better and how it works; Validate your network security architecture; Train your customers and strengthen the skills of your security professionals; Improve your operational readiness for repelling security attacks.

Environment Setup

1. Deploy BreakingPoint VE on your local VMware ESXi setup
2. Use Ravello's Import Tool to upload your VMs directly from the VMware ESXi setup
3. Verify and adjust the VM settings
4. Publish your setup to AWS or Google Cloud

1. Deploy BreakingPoint on your local hypervisor

BreakingPoint VE 3.5 and earlier versions rely on the hypervisor's API to deploy the line cards. Consequently, before you deploy your BreakingPoint VE setup on Ravello, you will need to deploy it first on a local hypervisor, either VMware ESXi or KVM. The following document provides instructions to install BreakingPoint VE on your local hypervisor. You can download the Ixia OVA file (for VMware) and the installation guide from Ixia's strikecenter portal. BreakingPoint allows you to use a system controller with up to 12 line cards, and each line card can be configured with up to 8 traffic interfaces (test interfaces). My example uses a setup consisting of a single line card with 2 traffic interfaces. If you need more line cards, it is important to have your entire setup built before you upload the corresponding VMs to Ravello's library. Important: In your local setup, BreakingPoint will use DHCP to acquire IP addresses for the management interfaces of the system controller and the line cards. Once you upload the VMs to Ravello's library, you must configure the management interfaces to match the IPs assigned to the BPS VE virtual machines in your local setup.
This step must be done before you start your VMs. In the event of an IP mismatch, the controller will fail to discover the line cards. Assigning the IP address you want in Ravello is straightforward: just use "IP configuration = DHCP" and type the desired IP address into the "Reserve IP" field.

2. Use the Ravello VM Import Tool to upload your BreakingPoint VE VMs

The Ravello VM Import Tool provides a simple method to upload your VMs to Ravello's library by importing the images directly from your vCenter or vSphere setup. Here is a quick how-to reference for the VM Import Tool.

3. Verify and adjust VM settings

In this part you will need to configure the VMs to match the network configuration from your local setup and ensure each VM has the right CPU, RAM and NIC driver. Settings Validation: The first verification step prompts you to verify the general settings (VM name, VM description, host name). VM names: in my setup I used BPS-WebUI for the system controller and bpsLC for my line card VM. VM description: I added the BreakingPoint firmware version. The second step prompts you to verify the system settings: assign 4 vCPUs and 8 GB of RAM to each VM. The third step prompts you to verify the disk; there are no changes required, but verify the settings are as shown below. The fourth step prompts you to verify the network. The BreakingPoint system controller has two management interfaces: eth0, which provides access to the web user interface, and ctrl0, the control interface for managing communication with the virtual line cards. The BreakingPoint line card has a single management interface (eth0) and allows a minimum of 2 traffic interfaces (test interfaces) and a maximum of 8. Verify all NICs use VMXNet3 as the device. As mentioned in step 1, it is important to configure each management interface with the same IPs as assigned during installation on your local setup.
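A mismatch between the locally assigned management IPs and the ones reserved in Ravello is easy to introduce by hand, and the symptom (the controller never discovering its line cards) gives little away. As a purely hypothetical pre-flight check, a few lines of Python can diff the two mappings before the VMs are started; the addresses are the example values from this setup, with one deliberate typo to show the check firing.

```python
# Management IPs assigned by DHCP on the local ESXi setup.
local_ips = {
    ("system-controller", "ctrl0"): "192.168.109.199",
    ("system-controller", "eth0"): "192.168.109.200",
    ("line-card", "eth0"): "192.168.109.202",
}

# IPs reserved in Ravello ("Reserve IP" field); one typo on purpose.
ravello_reserved = {
    ("system-controller", "ctrl0"): "192.168.109.199",
    ("system-controller", "eth0"): "192.168.109.200",
    ("line-card", "eth0"): "192.168.109.212",
}

def mismatches(local: dict, reserved: dict) -> list:
    """Return the (vm, interface) pairs whose reserved IP differs from the local one."""
    return [key for key, ip in local.items() if reserved.get(key) != ip]

for vm, iface in mismatches(local_ips, ravello_reserved):
    print(f"MISMATCH: {vm}/{iface} - the controller will fail to discover this interface")
```

The same diff idea applies to the NIC ordering problem described below: compare against what the local hypervisor assigned, not against what the import tool displays.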
Virtual Machine      Interface   IP Address        VLAN
System Controller    ctrl0       192.168.109.199   1
                     eth0        192.168.109.200   1
Line Card            eth0        192.168.109.202   1

The line card includes at least 3 NICs: one for management and two for traffic. The first interface in your local VMware setup (eth0) is the designated management interface. Please note that the import tool may reverse the order of the NICs, and it is important to assign the management address to the right interface. Assigning the management IP address to an incorrect NIC will break communication with the system controller and make your line card undiscoverable. In my setup, the management interface was displayed as the second NIC. Below is the configuration for each of the NICs associated with my BreakingPoint VE line card; the management interface has the IP address 192.168.109.202 reserved through DHCP and uses the same VLAN tag, 1. For the traffic interfaces I used VLAN 200 and disabled the DHCP service by using static IP addresses. With the settings validated and adjusted per the above instructions, you can now create your application by adding the BreakingPoint System Controller VM and the BreakingPoint Line Card VM. To complete my setup I added a Windows VM to use as a local hop to access the BreakingPoint user interface. An overview of my network setup is captured in the following snapshot.

4. Publish your application to the cloud of your choice

Conclusion

Ravello's Network Smart Lab provides an easy way to use Ixia BreakingPoint Virtual Edition to test NGN firewalls, web application firewalls, IDS/IPS, DLP, lawful intercept systems, URL filtering, anti-spam, anti-DDoS, application delivery controllers and WAN accelerators without needing any hardware. Interested in trying it out? Just open a Ravello account and follow the instructions in this article.

About Ixia

Ixia provides application performance and security resilience solutions to validate, secure, and optimize businesses' physical and virtual networks.
Enterprises, service providers, network equipment manufacturers, and governments worldwide rely on Ixia’s solutions to deploy new technologies and achieve efficient, secure, ongoing operation of their networks. Ixia's powerful and versatile solutions, expert global support, and professional services equip organizations to exceed customer expectations and achieve better business outcomes. Learn more about Ixia’s story!


Install and run VMware NSX 6.2 for Sales demo, POC and training labs on AWS and Google Cloud

In this blog post, we'll discuss the installation of NSX 6.2 for VMware vSphere on AWS or Google Cloud through the use of Ravello. NSX allows you to virtualize your networking infrastructure, moving the logic of your routing, switching and firewalling from the hardware infrastructure to the hypervisor. Software-defined networking is an essential component of the software-defined datacenter and is likely the most revolutionary change since the creation of VLANs. The biggest problem with installing NSX on a normal platform is that it can be quite resource-intensive, it requires physical network components, and the initial setup can be rather time-consuming. By provisioning NSX on Ravello, we can install once and redeploy anytime, greatly reducing the time required to deploy new testing, demo or PoC environments. To set up your vSphere lab on AWS with Ravello, create your account here.

Setup Instructions

To set up this lab, we start off with the following: 1 vCenter 6.0U1 Windows server; 3 clusters consisting of 2 ESXi hosts each; 1 NFS server. In addition to this, we'll have to deploy the NSX Manager. This can either be deployed as a nested virtual machine or directly on Ravello. In this example, we deployed the NSX Manager as a Ravello VM by extracting the OVF from the OVA file and importing it as a virtual machine. Of the three vSphere clusters, two will be used for compute workloads and one will be used as a collapsed management and edge cluster. While this is not strictly needed, the setup allows us to test stretching NSX logical switches and distributed logical routers across Layer 3 segments. For the installation of ESXi you can refer to how to set up ESXi hosts on AWS and Google Cloud with Ravello. In addition, your vSphere clusters should be configured with a distributed switch, since the standard vSwitch doesn't have the features required for NSX.
Each host in the compute clusters has the following specs: 2 vCPU; 8 GB memory; 3 NICs (1 management, 1 NFS, 1 VTEP, each on a separate dvSwitch); 1 20 GB disk for the OS installation. The hosts in the management cluster have the following specs: 4 vCPU; 20 GB memory; 4 NICs (1 management, 1 NFS, 1 VTEP, 1 transit, each on a separate dvSwitch); 1 20 GB disk for the OS installation. The reason for the increased size of the management cluster is the deployment of our NSX controllers, edge services gateways and management virtual machines. After publishing our labs and installing the base vSphere setup (or provisioning virtual machines from blueprints; I have blueprints for a preinstalled ESXi and vCenter, which saves quite some time), we can get started on the configuration of NSX. The installation of the NSX Manager is actually quite simple. After deploying the virtual appliance, it will not be reachable through the web interface yet, because no IP address has been set. To resolve this, log in to the console with the username admin and the password default. After logging in to the console, run the command enable, which will ask for your enable password (also set to default), and then run setup. This sets the initial configuration, allowing you to access the system through the web interface. After configuring the manager, open a web browser and connect to https://ip-of-your-manager. After logging in, you should see the initial configuration screen. Start off with "Manage appliance settings" and confirm that all settings are correct. Of special importance is the NTP server, which is critical to the functionality of NSX and should be the same on vCenter, ESXi and the NSX Manager. After configuring the appliance, we can start with the vCenter registration. Either open "Manage vCenter registration" from the main screen, or go to the configuration page under Components -> NSX Manager service.
Start with the lookup service, which should point to your vCenter server. If you are running vCenter 6 or higher, use port 443; otherwise use 7444. For the credentials, use an administrator account on your vCenter server. In the vCenter server configuration, point it to the same vCenter as used for the lookup service. In case the registration doesn't work, wait a few minutes: the initial boot of the services can take up to 10 minutes, so the services might not have started yet. You can check this by opening "View summary" on the main page. If the status doesn't say connected after registration, click the circular icon to the right of the status. Synchronization happens automatically, but we can speed up the initial synchronization by forcing it manually. After the initial setup, log out of the vSphere web client and log in again. You should see a new icon called "Networking and Security". This gives you an environment preconfigured for NSX, but without the controllers or NSX drivers actually installed on the hypervisors. This allows you to quickly provision a study or lab environment in which people can configure NSX themselves without having to spend time deploying appliances or recreating ESXi hosts and vCenter servers. We'll handle the preparation of the clusters in the next chapter, so if you want to create a fully functional NSX environment and blueprint, read on.

Cluster Preparation

First, we'll deploy a controller. Go to "Networking and Security", open "Installation" and select the "Management" tab. At the bottom, you should see a plus icon which will deploy a controller. Select the datacenter to deploy in, select your cluster and datastore, and optionally a specific host and folder. Connect your controller to the same network as your vCenter server and NSX Manager and select an IP pool. Since we haven't created an IP pool yet, we can do that now. Click on the green plus icon above the IP pool list and enter your network configuration.
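An NSX IP pool is essentially a contiguous range carved out of a subnet and handed out sequentially to controllers or VTEP vmkernel ports. The standard-library sketch below mimics that allocation to show which addresses the pool will yield; the subnet and range are example values for a lab, not anything NSX mandates.

```python
import ipaddress

def make_pool(network: str, first: str, last: str):
    """Return the usable host addresses between first and last within the subnet."""
    net = ipaddress.ip_network(network)
    lo, hi = ipaddress.ip_address(first), ipaddress.ip_address(last)
    return [str(ip) for ip in net.hosts() if lo <= ip <= hi]

# Example VTEP pool: six addresses, one per ESXi host in this lab.
vtep_pool = make_pool("172.16.50.0/24", "172.16.50.10", "172.16.50.15")
print(vtep_pool)
```

Sizing the range to the number of hosts (plus headroom for growth) up front avoids having to extend the pool later when a cluster is added.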
This IP pool will automatically provision static IP addresses for your controllers. In a production environment, you should run a minimum of 3 controllers (and always an odd number), but since this is a lab environment, 1 controller will suffice. If you would like, you can deploy 3 controllers by repeating these steps and reusing the IP pool created earlier. After deploying a controller, move to the "Host Preparation" tab. Click the "Install" link next to your cluster, and after a few minutes the status should show "Installed". Repeat this step for every cluster you want to configure. After the NSX drivers have been installed on your cluster hosts, click the "Configure" link in the VXLAN column for each cluster. Select the distributed vSwitch you've provisioned for your VTEP network and an IP pool. Since we haven't created an IP pool for VTEP yet, we'll create one by selecting "New IP Pool". Create this IP pool in the same way as we previously did for the controller network. Leave the rest of the settings at their defaults. After a few minutes, your VTEP interfaces should have been created, which you can also see in the networking configuration of the ESXi host. A new vmkernel port has been created with an IP address from the IP pool. The TCP/IP stack will also be set to "vxlan" as opposed to the default. After configuring VXLAN on each cluster, we can move on to the VXLAN configuration. Open the "Logical Network Preparation" tab and edit the segment ID and multicast address allocation. The segment ID configures the range of VXLAN network IDs (also known as VNIs) that NSX is allowed to use. This is mainly of importance if you run multiple VXLAN implementations on the same physical underlay. While this is not likely in a Ravello lab environment, we're still required to configure it. The multicast addresses are mainly used when NSX is set to use multicast or hybrid mode, and it's not required to configure them here. The last step required is to configure at least one transport zone.
Open the "Transport Zones" tab and click the plus icon to create a new one. Enter a name, select "Unicast" for the replication mode and select the clusters that will be part of the transport zone. If you wish to stretch logical networks or distributed logical routers across clusters, select all clusters in your datacenter for this transport zone. If you wish to restrict logical networks or distributed logical routers to specific clusters (for example, your edge network), select only the clusters that should have access to these networks. After creating a transport zone, you have a fully functional NSX environment, and you can start creating logical switches, distributed routers, edges and distributed firewalls, and use any feature available to you in NSX.

Saving your environment as a Blueprint

Once you have installed your NSX environment, save it as a blueprint. Then you can share it with the team members in your sales engineering organization, your training group, your customers/prospects and partners. They can then, with a few clicks, provision a fully functional instance of this environment on AWS or Google Cloud for their own use. You don't need to schedule time on your sales demo infrastructure in advance; you can customize your demo scenario using a base blueprint, provision as many student training labs as needed on demand, and pay per use.
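As a sizing footnote before you publish the blueprint, the per-host specs listed earlier add up quickly. A quick back-of-the-envelope calculation in Python (just arithmetic on the numbers given above, excluding the vCenter, NFS and NSX Manager VMs) shows what the ESXi hosts alone consume:

```python
# Host specs taken from the lab description above.
compute_host = {"vcpu": 2, "ram_gb": 8}
mgmt_host = {"vcpu": 4, "ram_gb": 20}

compute_hosts = 4   # two compute clusters of two ESXi hosts each
mgmt_hosts = 2      # one collapsed management/edge cluster of two hosts

total_vcpu = compute_hosts * compute_host["vcpu"] + mgmt_hosts * mgmt_host["vcpu"]
total_ram = compute_hosts * compute_host["ram_gb"] + mgmt_hosts * mgmt_host["ram_gb"]

print(f"ESXi hosts total: {total_vcpu} vCPU, {total_ram} GB RAM")
# 4*2 + 2*4 = 16 vCPU; 4*8 + 2*20 = 72 GB RAM
```

Numbers like these are worth knowing before publishing, since they drive the instance sizes Ravello provisions and therefore the hourly cost of each lab copy.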


How to build a large scale BGP MPLS Service Provider Network and model on AWS – Part 2

Author: Matt Conran Matt Conran is a Network Architect based out of Ireland and a prolific blogger at Network Insight. In his spare time he writes on topics ranging from SDN, OpenFlow, NFV, OpenStack, Cloud, Automation and Programming. Service Providers around the globe deploy MPLS networks using label-switched paths to improve quality of service (QoS) and meet the specific SLAs that enterprises demand for their traffic. This is a two-part post: Part 1 introduces MPLS constructs, and Part 2 (this post) describes how to set up a fully functional MPLS network using Ravello's Network Smart Lab. Interested in playing with this 14-node Service Provider MPLS deployment? Just add this blueprint from Ravello Repo. The blueprint uses Multiprotocol Extensions for BGP (MP-BGP) to pass customer routing information, with both Border Gateway Protocol (BGP) route reflection and full mesh designs. The MPLS core is implemented with OSPF as the IGP and the Label Distribution Protocol (LDP) to distribute labels. To transport IPv6 packets over an MPLS IPv4 core, an additional mechanism known as 6PE is designed on a standalone 6PE route reflector. Labels are assigned to transport IPv4 and IPv6 packets across an IPv4 MPLS core.

Creating the MPLS Service Provider Network on Ravello

I decided to use the Cisco CSR1000v to create my MPLS Service Provider network on Ravello, as it supports a strong feature set including MP-BGP, LDP, 6PE and route reflection. The CSR1000v is a fully featured Layer 3 router, and fulfilled all device roles (P, PE, RR and jump) for the MPLS/VPN network. I created a mini MPLS/VPN network consisting of 2 x P nodes, 8 x PEs, 2 x IPv4 route reflectors, and 1 x 6PE route reflector. A jump host was used for device reachability. The P nodes provide core functionality and switch packets based on labels. The PEs accept customer prefixes and peer with either a route reflector or other PE nodes. The route reflectors negate the need for a full mesh in the lower half of the network.
Once the initial design was in place, I was able to drag and drop Cisco CSR1000v VMs to build the 14-node design with great ease. I also expanded the core (on the fly) to a 4-node square design and scaled back down to two nodes for simplicity. This type of elasticity is hard to replicate in the physical world. All devices are accessed from the management jump host. A local host file is created, allowing you to telnet by name and not IP address, e.g. telnet p1, telnet pe2, etc. The diagram below displays the physical interface interconnects per device. The core routers P1 and P2 are the hub for all connections; every node connects to either P1 or P2. The table below displays the management address for each node and confirms the physical interconnects.

Device Name   Connecting To             Mgmt Address
PE1           P1                        192.168.254.20
PE2           P2                        192.168.254.21
PE3           P1                        192.168.254.22
PE4           P2                        192.168.254.23
PE5           P1                        192.168.254.24
PE6           P1                        192.168.254.25
PE7           P2                        192.168.254.26
PE8           P2                        192.168.254.27
RR1           P2                        192.168.254.28
RR2           P2                        192.168.254.29
RR3           P1                        192.168.254.31
P1            P2, RR3, PE1, PE3         192.168.254.10
P2            P1, RR1, RR2, PE2, PE4    192.168.254.12
MGMT          All nodes                 External

Logical Setup

In this lab, there are two types of BGP route propagation: a) full mesh and b) route reflection. A full mesh design entails all BGP speakers peering (forming a neighbor relationship) with each other. If for some reason a PE node is left out of the peering, due to BGP rules and loop prevention mechanisms, it will receive no routes. In a large BGP design, a full mesh creates a lot of BGP neighbor relationships and strains router resources. For demonstration purposes, PE1, PE2, PE3 and PE4 peer directly with each other, creating a BGP full mesh design. For large BGP networks, designers employ BGP route reflection. In a route reflection design, BGP speakers do not need to peer with each other; they peer directly with a central control plane point, known as a route reflector. Route reflection significantly reduces the number of BGP peering sessions per device.
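The scaling advantage of route reflection is easy to quantify: an n-speaker full mesh needs n(n-1)/2 IBGP sessions, while n clients peering with r redundant route reflectors need only n*r. A quick Python check on the numbers from this lab:

```python
def full_mesh_sessions(n: int) -> int:
    """IBGP sessions required for a full mesh of n speakers."""
    return n * (n - 1) // 2

def rr_sessions(clients: int, reflectors: int) -> int:
    """Sessions when each client peers only with the route reflectors."""
    return clients * reflectors

# Top half: PE1-PE4 in a full mesh; bottom half: PE5-PE8 with RR1 and RR2.
print(full_mesh_sessions(4))   # 6 sessions among four PEs
print(rr_sessions(4, 2))       # 8 client-to-reflector sessions

# The gap widens fast: 50 PEs fully meshed vs the same 50 as RR clients.
print(full_mesh_sessions(50))  # 1225
print(rr_sessions(50, 2))      # 100
```

At four PEs the two designs are comparable, which is why this lab can demonstrate both side by side; at service-provider scale the full mesh quickly becomes unmanageable.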
Instead of each BGP speaker peering with every other, they peer directly with a route reflector. For demonstration purposes, PE5, PE6, PE7 and PE8 peer directly with RR1 and RR2 (the IPv4 route reflectors). In summary, there are two sections of the network. PE1 to PE4 are in the top section and participate in a BGP full mesh design. PE5 to PE8 are in the bottom section and participate in a BGP route reflection design. All PEs are connected to a Provider node, either P1 or P2. The PEs do not have any physical connectivity to each other, but they do have logical connectivity. The top and bottom PEs cannot communicate with each other and have separate VRFs for testing purposes. However, this can be changed by adding additional peerings with RR1 and RR2 or by participating in the BGP full mesh design. The third BGP route reflector is called RR3 and serves as the 6PE device. Both PE1 and PE2 peer with the 6PE route reflector for IPv6 connectivity. The Provider (P) nodes have interconnect addresses to ALL PE and RR nodes, assigned from 172.16.x.x. The P-to-P interconnects are addressed from 10.1.1.x. The IPv4 and IPv6 route reflectors are interconnected to the P nodes and assigned addresses from 172.16.x.x. They do not have any direct connections to the PE devices. The following screenshot shows how all the Service Provider MPLS network nodes are set up on Ravello.

BGP and MPLS Configuration

PE1, PE2, PE3 and PE4 are configured in a BGP full mesh. Each node is a BGP peer of every other. There are three stages to complete this design. The first stage is to create the BGP neighbor, specify the BGP remote AS number, and set the source of the TCP session. Both BGP neighbors are in the same BGP AS, making the connection an iBGP session and not an eBGP session. By default, BGP neighbor relationships are not dynamic and neighbors are explicitly specified on both ends. The remote-as command determines the iBGP or eBGP relationship, and “update-source Loopback100” sources the BGP session.
router bgp 100
 bgp log-neighbor-changes
 neighbor 10.10.10.x remote-as 100
 neighbor 10.10.10.x update-source Loopback100

The second stage is to activate the neighbor under the IPv4 address family.

address-family ipv4
 neighbor 10.10.10.x activate

The third stage is to activate the neighbor under the VPNv4 address family. We also need to make sure we are sending extended BGP attributes.

address-family vpnv4
 neighbor 10.10.10.x activate
 neighbor 10.10.10.x send-community both

A test VRF named PE1 is created to test connectivity across PE1 to PE4. PE1 has a test IP address of 10.10.10.10, PE2 has 10.10.10.20, PE3 has 10.10.10.30 and PE4 has 10.10.10.40. These addresses are reachable through MP-BGP and are present on the top half PEs. The test interfaces are within the PE VRF and not the global routing table.

interface Loopback10
 ip vrf forwarding PE1
 ip address 10.10.10.x 255.255.255.255

The diagram below displays the routing table for PE1 and the test results from pinging within the VRF. The VRF creates a routing table separate from the global table, so when pinging one needs to make sure to execute the ping command within the VRF instance. PE5, PE6, PE7 and PE8 are configured as route reflector clients of RR1 and RR2. Each of these PEs has a BGP session to both RR1 and RR2. RR1 and RR2 are BGP route reflectors configured within the same cluster for redundancy. To prevent loops, a cluster-id of 1.1.1.1 is implemented. They reflect routes from PE5, PE6, PE7 and PE8, not from PE1, PE2, PE3 and PE4. The main configuration points for a route reflector design are on the actual route reflectors, RR1 and RR2. The configuration commands on the PEs stay the same; the only difference is that they have single BGP peerings to the route reflectors and not to each other. Similar to the PE devices, the route reflector sources the TCP session from Loopback100 and specifies the remote AS number to indicate whether this is an iBGP or eBGP session.
The cluster-id is used to prevent loops, as the bottom half PEs peer with two route reflectors.

router bgp 200
 bgp cluster-id 1.1.1.1
 bgp log-neighbor-changes
 neighbor 10.10.10.x remote-as 200
 neighbor 10.10.10.x update-source Loopback100

The PE neighbor is activated under the IPv4 address family.

address-family ipv4
 neighbor 10.10.10.x activate

Finally, the PE neighbor is activated under the VPNv4 address family. The main difference is that the route-reflector-client command is applied to the neighbor relationship for the PE nodes. This single command enables route reflection capability.

address-family vpnv4
 neighbor 10.10.10.x activate
 neighbor 10.10.10.x send-community both
 neighbor 10.10.10.x route-reflector-client

The following displays the PE8 test loopback of 10.10.10.80 within the test VRF PE2. The cluster-id of 1.1.1.1 is present in the BGP table for that VRF. RR3 is a standalone IPv6 route reflector. It interconnects with P1 using IPv4 addressing, not IPv6. It does not serve the IPv4 address family and is used for IPv6 only. The send-label command labels IPv6 packets for transport over an IPv4-only MPLS core. The command is configured on the PE side under the IPv6 address family. The following snippet displays the additional configuration on PE1 for 6PE functionality. Note the send-label command.

address-family ipv6
 redistribute connected
 neighbor 10.10.10.11 activate
 neighbor 10.10.10.11 send-community extended
 neighbor 10.10.10.11 send-label

The IPv6 6PE RR has a similar configuration to the IPv4 RRs, except that the IPv6 address family is used instead of the IPv4 address family. Note that the 6PE RR does not have any neighbors activated under the IPv4 address family. The following snippet displays the configuration on RR3 (the IPv6 6PE RR) for the PE1 neighbor relationship.
router bgp 100
 bgp log-neighbor-changes
 no bgp default ipv4-unicast
 neighbor 10.10.10.1 remote-as 100
 neighbor 10.10.10.1 update-source Loopback100
 address-family ipv4
 exit-address-family
 address-family ipv6
  neighbor 10.10.10.1 activate
  neighbor 10.10.10.1 route-reflector-client
  neighbor 10.10.10.1 send-label

RR3 serves only PE1 and PE2 and implements a mechanism known as 6PE. PE1 and PE2 were chosen as they are physically connected to different P nodes. A trace from PE1 to PE2 displays the additional labels added for IPv6 end-to-end reachability. An additional label is assigned to the IPv6 prefix so it can be label switched across the IPv4 MPLS core. If we had configured a mechanism known as 6VPE (VPNv6) we would see a three-label stack. However, in the current configuration of 6PE (IPv6, not VPNv6) we have two labels: a label to reach the remote PE (assigned by LDP) and another label for the IPv6 prefix (assigned by BGP). These two labels are displayed in the diagram below – Label 18 and Label 41, representing a two-label stack. The MPLS core consists of the P1 and P2 Provider nodes. These devices switch packets based on labels and run LDP to each other and to the PE routers. LDP is enabled simply with the mpls ip command under the connecting PE and P interfaces.

interface GigabitEthernetx
 mpls ip

OSPF Area 0 is used to pass internal routing information. There are no BGP or customer routes in the core; the core nodes only hold internal reachability information. There are two ways to configure OSPF to advertise routes: enabled under the interface, or configured within the OSPF process. For demonstration purposes this blueprint uses both. The snippets below display both methods.

interface GigabitEthernetx
 ip ospf 1 area 0

router ospf 1
 network 0.0.0.0 255.255.255.255 area 0

The image below displays the MPLS forwarding plane with the show mpls forwarding-table command. The table displays the incoming-to-outgoing label allocation for each prefix in the routing table.
The outgoing action can be either a POP label or an outgoing label assignment. For example, there is a POP label action for the PE1 and PE3 loopbacks, as these two nodes are directly connected to P1. However, for PE2 and PE4, which are connected to the other P node, there is an outgoing label action of 18 and 20. As discussed, OSPF is the IGP and we are running Area 0. The command show ip ospf neighbor displays the OSPF neighbors for P1. It should be adjacent to all the directly connected PEs, P2 and RR3. The complete configuration for this setup can be accessed at this Github account.

Conclusion

This post walks through step by step instructions on how to create a 14 node MPLS network using Ravello’s Network Smart Lab. Interested in playing with this MPLS core network? Just open a Ravello account and add this fully functional MPLS network blueprint to your library.


How to build a large scale BGP MPLS Service Provider Network and model on AWS – Part 1

Author: Matt Conran
Matt Conran is a Network Architect based out of Ireland and a prolific blogger at Network Insight. In his spare time he writes on topics ranging from SDN, OpenFlow, NFV, OpenStack, Cloud, Automation and Programming.

Service Providers around the globe deploy MPLS networks using label-switched paths to deliver the quality of service (QoS) that meets the specific SLAs enterprises demand for their traffic. This is a two-part post – Part 1 (this post) introduces MPLS constructs and Part 2 describes how to set up a fully-functional MPLS network using Ravello’s Network Smart Lab. Interested in playing with this 14 node Service Provider MPLS deployment? Just add this blueprint from Ravello Repo.

Multiprotocol Label Switching (MPLS)

One could argue that Multiprotocol Label Switching (MPLS) is a virtualization technique. It is used to build virtual topologies on top of physical topologies by creating connections between the network edges, explicitly called tunnels. MPLS is designed to reduce the forwarding information managed by the internal nodes of the network by tunneling through them. MPLS architectures build on the concept of complex edges and simple cores, allowing networks to scale and serve millions of customer routes. MPLS changed the way we think about control planes. It pushed most of the control plane to the edge of the network. The MPLS Provider (P) nodes still run a control plane, but the complex decision making is now at the edge. In terms of architecture and scale, MPLS leads to very scalable networks and reduces the challenges of a distributed control plane. It allows service providers to connect millions of customers together and place them into separate VRF containers, allowing overlapping IP addressing and independent topologies. The diagram below represents the high-level components of an MPLS network.
Provider Edge (PE) nodes are used to terminate customer connections, Provider (P) nodes label switch packets, and Route Reflectors (RR) are used for IPv4 and IPv6 route propagation.

Figure 1: Service Provider MPLS Network

How does MPLS work?

Virtual routing and forwarding (VRF) instances are like VLANs, except they are Layer 3 and not Layer 2. In a single VRF-capable router you have multiple logical routers under one single management entity. A VRF is like a full-blown router: it has its own routing protocols, topology, and independent set of interfaces and subnets. If you want to connect multiple routers with VRFs together, you can use either a Layer 2 trunk, Generic Routing Encapsulation (GRE) or MPLS. In the first approach, you need an interface per VRF on each router, which means you need a VLAN or GRE tunnel for each VRF, plus a VRF routing protocol on every single hop. This is known as Multi-VRF. For example, with a Multi-VRF approach, if you have 4 routers in sequence you have to run a copy of the routing protocol per VRF along every single path. Every router in the path must have the VRF configured, resulting in numerous routing protocol adjacencies and convergence events occurring if a single link fails. Multi-VRF with its hop-by-hop configuration does not scale and should be used across a maximum of one hop. A more scalable solution is a full-blown MPLS network, as described below. An end-to-end MPLS implementation is more common than Multi-VRF. It builds a Label Switched Path (LSP) between every ingress and egress router. To enable this functionality, you have to enable LDP on individual interfaces. Routers send LDP Hello messages and establish an LDP session over TCP. LDP is enabled only on core (P to P, PE to P, P to RR) interfaces and not on user interfaces facing CE routers. Every router running LDP will assign a label to every prefix in the CEF table. These labels are then advertised to all LDP neighbors.
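As a sketch of that step (IOS-style syntax; the interface name, ACL number and loopback range are illustrative, not taken from the post), enabling LDP on a core-facing interface — and optionally limiting label advertisement to the loopback range used for BGP next hops — looks like this:

```
mpls label protocol ldp
!
interface GigabitEthernet1
 mpls ip
!
! Optional: advertise labels only for the PE loopback range
! (ACL 10 and the 10.10.10.0/24 range are illustrative)
no mpls ldp advertise-labels
mpls ldp advertise-labels for 10
access-list 10 permit 10.10.10.0 0.0.0.255
```

Filtering the advertisements this way keeps the core label table down to just what is needed to build the LSPs.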
Usually, you only need to advertise labels for the internal PE BGP next hops, enabling Label Switched Paths within the core.

Provider (P) and Provider Edge (PE) Nodes

MPLS shifted the way we think about the control plane and pushed the intelligence of the network to the edges. It changed how the data plane and control plane interrelate. Decisions are now made at the edge of the network by devices known as Provider Edge (PE) routers. PEs control the best-path decisions, perform end-to-end path calculation and any encapsulation/decapsulation. The PEs run BGP with a special address family called VPNv4. The Provider (P) routers sit in the core of the network and switch packets based on labels. They do not contain end-to-end path information and require path information only for the remote PE next hop addresses. The PE nodes run the routing protocols with the CEs. No customer routing information should be passed into the core of the network. Any customer route convergence events are stopped at the PE nodes, protecting the core. The P nodes run an internal IGP for remote PE reachability and LDP to assign labels to IP prefixes in the Cisco Express Forwarding (CEF) table. The idea is to use a single protocol for all VRFs. BGP is the only routing protocol scalable enough, and it allows extra attributes to be added to prefixes, making them unique. We redistribute routes from the customer's IGP or connected networks from the VRFs into Multiprotocol BGP (MP-BGP). You do the same at the other connected end and the BGP updates are carried over the core. As discussed, transport between the routers in the middle can be a Layer 2 transport, a GRE tunnel or MPLS with Label Switched Paths (LSP).

Route Distinguishers (RD) and Route Targets (RT)

MPLS/VPN networks have the concept of Route Distinguishers (RD) and Route Targets (RT). The RD distinguishes one set of routes from another. It is a number prepended to each route within a VRF.
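To make the RD prepending concrete, here is a short Python sketch (illustrative only, not from the original post) that packs a Type 0 RD — a 2-byte type field, the 2-byte AS number and a 4-byte assigned value, per RFC 4364 — in front of an IPv4 address to form the 12-byte VPNv4 address:

```python
import socket
import struct

def vpnv4_address(asn, assigned, ipv4):
    """Prepend a Type 0 Route Distinguisher (8 bytes) to an IPv4
    address (4 bytes), yielding a globally unique 12-byte value."""
    rd = struct.pack("!HHI", 0, asn, assigned)  # type=0, AS number, value
    return rd + socket.inet_aton(ipv4)

# RD 65000:100 in front of 192.168.101.0
addr = vpnv4_address(65000, 100, "192.168.101.0")
assert len(addr) == 12
```

The RD here is purely a disambiguator that keeps overlapping customer prefixes unique; the Route Target, covered next, is what actually controls import and export between VRFs.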
The number is used to identify which VRF the route belongs to. A number of options exist for the format of an RD, but essentially it is a flat number prepended to a route. The following snippet displays a configuration example of an RT and RD attached to a VRF instance named VRFSite_A.

ip vrf VRFSite_A
 rd 65000:100
 route-target export 65000:100
 route-target import 65000:100

A route for 192.168.101.0/24 in VRF VRFSite_A is effectively advertised as 65000:100:192.168.101.0/24. RTs are used to share routes among sites. They are assigned to routes and control the import and export of routes to VRFs. At a basic level, for end-to-end communication, an export RT at one site must match an import RT at the other site. Allowing sites to import some RTs and export others enables the creation of complex MPLS/VPN topologies, such as hub and spoke, and full and partial mesh designs. The following shows a Wireshark capture of an MP-BGP update. The RT value is displayed as an extended community with value 65000:10. Also, the RD is specified as the value 65000:10. When you redistribute from a VRF into the VPNv4 BGP table, a 64-bit user-configurable route distinguisher (RD) is prepended to every IPv4 address. Every IPv4 prefix gets 64 bits in front of it to make it globally unique. The RD is not the VPN identifier; it is just a number that makes the IPv4 address globally unique. Usually, we use the 2-byte AS number + a 4-byte decimal value. Route Targets (RT), an extended BGP community, are not part of the prefix but are attached to it. As discussed, the RT attribute controls the import process to the BGP table. It tells BGP which VRF the specific prefix should be inserted into. The diagram below displays the generic actions involved in the traffic flow of an MPLS/VPN network. Labels are assigned to prefixes and sent as BGP update messages to the corresponding peers.

Multiprotocol Extensions for BGP

MP-BGP allows you to carry a variety of information.
More recently, BGP has been used to carry MAC addresses with a feature known as EVPN. It also acts as a control and DDoS prevention tool, downloading PBR and ACL entries into the TCAM on routers with BGP FlowSpec. It is a very extensible protocol. The Label Distribution Protocol (LDP) is supported solely for IPv4 cores, which means that to transport IPv6 packets we need to either use LDPv6, which does not exist, or somehow tunnel packets across an IPv4-only core. We need a mechanism to extend IPv4 BGP to carry IPv6 prefixes. A feature known as 6PE solves this and allows IPv6 packets to be labelled and sent over a BGP IPv4 TCP connection. The BGP session is built over TCP over IPv4, but the “send-label” command allows BGP to assign a BGP label (not an LDP label) to IPv6 prefixes – enabling IPv6 transport over an IPv4 BGP session. In this case, both BGP and LDP are used to assign labels: BGP assigns a label to the IPv6 prefix and LDP assigns a label for remote PE reachability. This allows the enablement of IPv6 services over an IPv4 core. BGP is simply a TCP application carrying multiple types of traffic.

Conclusion

This post introduces the key concepts involved in creating a Service Provider MPLS network. Interested in building your own MPLS network? Read Part 2 of this article for step by step instructions on how to create a 14 node MPLS network using Ravello’s Network Smart Lab. You can also play with this fully functional MPLS network by adding this blueprint to your Ravello account.


VSAN 6.1 environment on AWS and Google Cloud

Install and run a VSAN 6.1 environment for sales demos, POCs and training labs on AWS and Google Cloud

With the new release of VSAN 6.1, quite a few people are likely interested in installing this new version, to test out the new features and to showcase their storage management products working with this new release. With Ravello, you can do this without requiring a prohibitive physical test setup (3 hosts, with SSD and storage). You can set up and run a multi node ESXi environment in AWS and Google Cloud, then configure VSAN 6.1 and save the setup as a blueprint in Ravello. If you are an ISV, you can then run your appliances directly on Ravello or on top of ESXi in this setup and build a demo environment in the public cloud. You can provide access to this blueprint to your sales engineers, who can then provision a demo lab on-demand in minutes. You can also set up VSAN 6.1 virtual training labs for students on AWS and Google Cloud, without the need for physical hardware.

Setup Instructions

To set up this lab, we start off with the following:

- 1 vCenter 6.0U1 Windows server
- 2 clusters consisting of 3 ESXi hosts each

If you want to, you could start off with a single cluster of 3 hosts, but this setup also allows us to test integration with products like vSphere Replication and Site Recovery Manager in the future, while also being able to expand to 4 hosts per cluster very quickly to test new VSAN features such as failure domains or stretched clusters. Refer to the following blog on how to set up ESXi hosts on AWS and Google Cloud with Ravello. Each host has the following specs:

- 2 vCPU
- 8 GB memory
- 2 NICs (1 management, 1 VSAN)
- 4 additional disks on top of the OS disk, 100GB each
One of these disks will be used as the flash drive; the rest will serve as capacity disks. After publishing our labs and installing the software (or provisioning the virtual machines from blueprints – I have blueprints for a preinstalled ESXi and vCenter, which saves quite some time) we can get started on the configuration of VSAN. Starting with VSAN 6.1, the only thing we actually need to do for this is to open the vSphere web client, open the VSAN configuration for the cluster and mark the first disk of each host as SSD. This is because the underlying Ravello platform reports the disks to ESXi as spindle storage, and we need at least one flash disk for VSAN to work. If you want to test the all-flash features of VSAN, you previously had to either use the ESXi shell/SSH or use community tools to configure SSD disks as capacity disks. With VSAN 6.1, this is all supported from the web client if you have the correct VSAN license. Still, sometimes a community tool can be useful if you have a large number of hosts or clusters and don’t want to manually mark each disk as SSD. While you could script this yourself through PowerShell or SSH, the tool of choice for this is the VSAN All-Flash configuration utility by Rawlinson Rivera, published on his blog Punching Clouds.

Installation

Start by installing vSphere as normal. For vCenter, I’ve chosen to use the Windows version since this is the easier one to install, but if you install the VCSA (either nested or by importing an existing VCSA as an OVF in Ravello) that works equally well. From an installation point of view, there is no difference between the two. As you can see, I’ve created the following setup: By default, VSAN disk claiming is set to automatic. If you want to ensure that new disks are not added to capacity automatically, you’ll have to set this to manual when enabling VSAN. If you do choose to automatically add capacity, ensure that your disks are marked as flash and configured correctly before enabling VSAN on your cluster.
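For the SSH route, one approach (a sketch based on VMware's documented SATP claim-rule procedure; the device identifier below is a placeholder you would replace with your own disk's identifier) is to tag a disk as SSD from the ESXi shell:

```
# Add a claim rule marking the local device as SSD, then reclaim it
# (mpx.vmhba1:C0:T1:L0 is a placeholder device identifier)
esxcli storage nmp satp rule add --satp=VMW_SATP_LOCAL \
    --device=mpx.vmhba1:C0:T1:L0 --option="enable_ssd"
esxcli storage core claiming reclaim --device=mpx.vmhba1:C0:T1:L0
# Verify: the device should now report "Is SSD: true"
esxcli storage core device list --device=mpx.vmhba1:C0:T1:L0
```

Looping these commands over the device list is essentially what the scripted approaches automate for you.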
For automatic assignment, follow the rest of this blog before enabling VSAN at the cluster level. First we have to configure our second interface with a static IP address and mark the interface as usable for VSAN traffic. For each ESXi host, go to the Manage tab and open Networking -> VMkernel adapters. Select the "Add Host Networking" option, choose "VMkernel Network Adapter", create a new virtual switch and add the second NIC (vmnic1) to the standard switch. After this, select "Virtual SAN Traffic" under the header "Available services" and configure an IP address. Before we can start using VSAN, you’ll have to mark one (or all) of the disks as flash. If you want to use the standard VSAN configuration, mark the first disk on each ESXi host as flash by going to the host configuration, then Storage -> Storage Devices. Select the disk and click the “mark disk as flash” button (the green square button with the F). Repeat this process for each host that you want to use in your VSAN cluster. After marking a disk as flash on each host, you can enable VSAN. If you’ve left the VSAN settings at their default, the disks will automatically be consumed to create a VSAN datastore. If you’ve set the VSAN settings to only manually consume disks, you’ll need to assign the disks to the VSAN storage pool. This can be done by going into the cluster VSAN configuration, selecting Disk Management and clicking the “create a disk group” button for each host. Afterwards, you should see a healthy green status and have 4 disks assigned to a single disk group on each host.

Saving your environment as a Blueprint

Once you have installed your VSAN environment, save it as a blueprint. Then, you can share it with the team members in your sales engineering organization, training group, and your customers/prospects and partners. With a few clicks they can then provision a fully functional instance of this environment on AWS or Google Cloud for their own use.
You don’t need to schedule time on your sales demo infrastructure in advance; you can customize your demo scenario using a base blueprint, provision as many student training labs as needed on-demand, and pay per use.


Big Switch Labs - Running self-service, on-demand VMware vCenter/ESX and OpenStack based Open SDN Fabric demo environments in AWS and Google Cloud

Author: Sunit Chauhan, Director, Big Switch Labs

At Big Switch Networks, we are taking key hyperscale data center networking design principles and applying them to fit-for-purpose products for enterprises, cloud providers and service providers. Our Open SDN Fabric products, built using bare metal switching hardware and centralized controller software, deliver the simplicity and agility required to run a modern data center network. Through seamless integration and automation with VMware (vSphere/NSX) and OpenStack cloud management platforms, virtualization and networking teams are now able to achieve 10X operational efficiencies compared to legacy operating models. In addition to product innovation, we are also very focused on enabling our customers and partners to learn about the latest technology advances in the networking/SDN industry and make informed decisions without a ton of time investment. Towards that end, Ravello Systems has been a great partner, enabling us to achieve that goal through Big Switch Labs – an online portal that lets you try VMware or OpenStack networking real-time, on-demand and for free! For the past few months Big Switch Networks has employed the Ravello Systems platform to run demos of the Big Cloud Fabric integration with VMware vCenter and OpenStack, exposed through Big Switch Labs. The unique nested virtualization capabilities of Ravello Systems allow us to provision, within a few minutes, complete VMware vCenter/ESX and OpenStack demo environments in the public clouds AWS and Google Cloud. The demos are provisioned from blueprints, the term Ravello uses to describe a snapshot of an entire multi virtual machine (VM) application, along with the full specification of the network that interconnects the set of VMs. To experience modern, highly automated data center networking, check out the following on-demand modules on Big Switch Labs.
These modules include access to the production-grade cloud management software and the Big Cloud Fabric Controller, as well as the simulated physical networking topology:

VMware vCenter Integration with Big Cloud Fabric
Experience the seamless integration of VMware vCenter and Big Cloud Fabric. Users provision virtual distributed switches and port groups in the vSphere web interface and observe the automated provisioning of the networking infrastructure from the BCF Controller GUI dashboard.

OpenStack Integration with Big Cloud Fabric
Get hands-on experience with the seamless integration of OpenStack and Big Cloud Fabric (P+V Edition) using Big Switch’s Neutron plugin. Users provision OpenStack projects and networks using the Horizon GUI and observe the automated provisioning of the physical and virtual networking infrastructure from the BCF Controller GUI dashboard. Explore the latest Big Switch enhancements to the OpenStack Horizon dashboard.

I invite you to sign up and experience the simplicity of managing, provisioning and troubleshooting data center networks in minutes. And yes, it’s available now and it’s free! Sign up for Big Switch Labs.


LISP Leaf & Spine architecture with Arista vEOS using Ravello on AWS

Author: Matt Conran
Matt Conran is a Network Architect based out of Ireland and a prolific blogger at Network Insight. In his spare time he writes on topics ranging from SDN, OpenFlow, NFV, OpenStack, Cloud, Automation and Programming.

This post discusses a Leaf and Spine data center architecture with the Locator/ID Separation Protocol (LISP) based on Arista vEOS. It begins with a brief introduction to these concepts and continues to discuss how one can set up a fully functional LISP deployment using Ravello’s Network & Security Smart Labs. If you are interested in running this LISP deployment, just open a Ravello account and add this blueprint to your library.

What is the Locator/ID Separation Protocol (LISP)?

The IP address is an overloaded construct: we use it to determine both “who” and “where” we are in the network. The lack of abstraction causes problems, as forwarding devices must know all possible forwarding paths to forward packets. This results in large forwarding tables and the inability of end hosts to move and keep their IP address across Layer 3 boundaries. LISP separates the host identity from the routing path information, in much the same way the Domain Name System (DNS) solved the local hosts file problem. It uses overlay networking concepts and a dynamic mapping control system, so its architecture looks similar to that of a Software Defined Network (SDN).

Get it on Repo: REPO by Ravello Systems is a library of public blueprints shared by experts in the infrastructure community.

The LISP framework consists of a data plane and a control plane. The control plane is the registration protocol and procedures, while the data plane is the encapsulation/decapsulation process. The data plane specifies how EIDs (end-host identifiers) are encapsulated in Routing Locators (RLOCs), and the control plane specifies the interfaces to the LISP mapping system that provides the mapping between EID and RLOC. An EID could be represented by an IPv4, IPv6 or even a MAC address.
If represented by a MAC address, it would be Layer 2 over Layer 3 LISP encapsulation. The LISP control plane is very extensible and can be used with other data path encapsulations such as VXLAN and NVGRE. Future blueprints will discuss the work of Jody Scott (Arista) and Dino Farinacci (LISP author) towards a LISP control plane with a VXLAN data plane, but for now, let's build a LISP cloud with the LISP standards, inheriting parts of that blueprint.

What does LISP enable?

LISP gives end hosts (EIDs) the ability to move and attach to new locators. The host has a unique address, but the IP address does not live in the subnet that corresponds to its location. It is not location locked. You can pick up the endpoint and move it anywhere. For example, smartphones can move around from WiFi to 3G to 4G. There are working solutions for operating an open LISP ecosystem (lispers.net) that allow an IP address to move around the data center and across multiple vendors, while being kept by the endpoint. No matter where the endpoint moves to, its IP address will not change. At an abstract layer, the EID is the “who” and the locator is “where the who is”.

Leaf & Spine Architecture

Leaf and spine architectures are used to speed up connectivity and improve bandwidth between hosts. The underlying Clos network (named after Charles Clos) is a relatively old concept, but it does go against what we have been doing in traditional data centers. Traditional data centers have three layers – core, aggregation and access – with some oversubscription between the layers. The core is generally Layer 3, with the access layer being Layer 2. If Host A needs to communicate with Host B, the bandwidth available to those hosts depends on where they are located. If the hosts are connected to the same access (ToR) switch, traffic can be switched locally. But if a host needs to communicate with another host via the aggregation or core layer, it will have less bandwidth available due to the oversubscription ratios and aggregation points.
The bandwidth between two hosts depends on their placement. This results in a design constraint, as you have to know in advance where to deploy servers and services. You do not have the freedom to deploy servers in any rack that has free space. The following diagram displays the Ravello Canvas settings for the leaf and spine design. Nodes labelled “Sx” are spine nodes and “Lx” are leaf nodes. There are also various compute nodes representing end hosts. What we really need are equidistant endpoints. The placement of a VM should not be a concern: wherever you deploy a VM, it should have the same bandwidth to any other VM. Obviously, there are exceptions for servers connected to the same ToR switch. The core should also be non-blocking, so inbound and outbound flows are independent. We don't want an additional blocking element in the core. Networks should also provide unlimited workload placement and the ability to move VMs around the data center fabric. Three-tier data center architectures are not as scalable and add provisioning complexity. You have to really think about where things are in the data center to give the user the best performance. This increases costs, as certain areas of the data center end up underutilized – and underutilized servers lose money. To build your data center as big as possible with equidistant endpoints, you need to flatten the design into a leaf and spine architecture. I have used Ravello Network & Security Smart Lab to set up a large leaf and spine architecture based on Arista vEOS to demonstrate LISP connectivity. Ravello gives you the ability to scale to very large virtual networks, which would be difficult to do in a physical environment. Implementing a large leaf and spine architecture in a physical lab would require lots of time, rack space and power – but with Ravello, it is a matter of a few clicks.
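To give a flavour of the fabric configuration in this style of design, here is a minimal leaf BGP sketch in Arista EOS syntax (the AS numbers and spine addresses are illustrative, not taken from the blueprint):

```
router bgp 65101
   router-id 10.0.250.5
   maximum-paths 4
   neighbor 10.0.1.1 remote-as 65001
   neighbor 10.0.2.1 remote-as 65001
   redistribute connected
```

The maximum-paths setting enables ECMP, so traffic is load-shared across the leaf's spine uplinks rather than pinned to a single best path.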
Setting up the LISP cloud on Ravello

Get it on Repo. REPO by Ravello Systems is a library of public blueprints shared by experts in the infrastructure community.

The core setup on Ravello consists of 4 spine nodes. These nodes provide the connectivity between the other functional blocks within the data center and provide IP connectivity between end hosts. The core should forward packets as fast as possible. The chosen fabric for this design is Layer 3, but if the need arises we can easily extend Layer 2 segments with a VXLAN overlay; see the previous post on VXLAN for bridging Layer 2 segments. The chosen IP routing protocol is BGP, and BGP neighbors are set up between the spine and leaf nodes. BGP not only allows you to scale networks, it also decreases network complexity: neighbors are explicitly defined and policies are configured per neighbor, offering a deterministic design. Another common protocol for this design would be OSPF, with each leaf in a stubby area. Stubby areas are used to limit route propagation. The leaf nodes connect hosts to the core and are equivalent to the access layer. They run Arista vEOS and peer via BGP with the spines. We are using 4 leaf nodes located in three different racks. XI is the management jump host and is enabled for external SSH connectivity. It is used to manage the internal nodes, and from here you can SSH to the entire network. The following diagram displays access from XI to L5. Once on Leaf 5, we issue commands to display the BGP peerings. The leaf nodes run BGP with the spine nodes. We also have 4 compute nodes in three racks. These nodes simulate end hosts and run Ubuntu. Individual devices do not have external connectivity, so to access them via a local SSH client you must first SSH to XI.

LISP Configuration

LISP is enabled with the lisp.config file, which is on C1, C2, L5 and L6. The software is Python based. It can be found in the directory listed below.
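For orientation, a lispers.net lisp.config database-mapping stanza generally follows a brace-delimited, key = value format. The sketch below is schematic, not the blueprint's actual file: the EID prefix 5.5.5.5/32 is one of the loopbacks mentioned later in this post, while the interface name is an assumption.

```
lisp database-mapping {
    prefix {
        instance-id = 0
        eid-prefix = 5.5.5.5/32
    }
    rloc {
        interface = eth0
    }
}
```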
If you need to make changes to this file or view its contents, enter Bash mode within Arista vEOS and view it with the default text viewer. None of the spine nodes run the LISP software; they transport IP packets by traditional means, i.e. they do not encapsulate packets in UDP or carry out any LISP functions. Leaf nodes L5 and L6 perform the LISP xTR functions and carry out the encapsulation and decapsulation. The diagram below displays the output from a tcpdump while in Bash mode. ICMP packets are sent from the LISP source loopback of C9 (5.5.5.5) to C11 (6.6.6.6). These IP addresses are permitted by the LISP process to trigger LISP encapsulation; you will need to ping with this source and destination to trigger the LISP process. All other traffic flows are routed normally. C1 & C2 are the LISP mapping servers and perform the LISP control plane services. The following Wireshark captures display the LISP UDP encapsulation and the control plane map-register requests to 172.16.0.22. Before you begin testing, verify that the LISP processes have started on C1, C2, L5 and L6 with the command ps -ef | grep lisp. If it does not show 4 processes, restart the LISP process with the command ./RESTART-LISP.

Conclusion

LISP in conjunction with a leaf-spine topology helps architect efficient and scalable data centers. Interested in trying out the LISP leaf-spine topology mentioned in this blog? Just open a Ravello account and add this blueprint to your library. I would like to thank Jody Scott and Dino Farinacci for collaborating with me to build this blueprint.

Author: Matt Conran Matt Conran is a Network Architect based out of Ireland and a prolific blogger at Network Insight. In his spare time he writes on topics ranging from SDN, OpenFlow, NFV, OpenStack,...

Provisioning and running on-demand ESXi labs on AWS and Google Cloud for automation testing - Managed Services Platform and delivery

Author: Myles Gray Myles is an Infrastructure Engineer for Novosco Ltd in the MSP division, primarily focused on implementing IaaS projects and automation for both self-hosted and private customer clouds.

Company Profile

Novosco is a leading provider of cloud technologies, managed services and consulting. We specialise in helping organisations utilise the unique aspects of emerging technologies to solve business challenges in new and dynamic ways. We operate under managed service or strategic partnership contracts with our major clients.

Our Use Case

Ravello's ESXi labs in particular are used by the Managed Service Platforms division. In order to support our growth and deliver a consistent managed service quality across our client base, we have standardized "checks" we run on environments at differing intervals. This gives us a pro-active element and an ability to see problems before they emerge, or indeed catch them and detail them for remediation. There is only so much we can deliver reliably with manual effort; as such, to allow our MSP division to scale clients without needing to scale team size unnecessarily, we turned, as any company would, to automation and scripting. We have automated some very valuable and specialised checks that would either take an individual a considerable amount of time and effort to produce, be prone to error due to the complex calculations needed, or just not be feasible to monitor at the frequency we require. So scripting was a must; what about testing? That's where Ravello's labs come in (in particular ESXi virtualisation). This allowed us to test scripts, at scale, to a level that is just not feasible within a physical lab, or indeed with the same repeatability and consistency of environment. I'll take one instance as an example: I was asked to produce a script that would automate the testing of host-level operations. Obviously at some point, you're going to have to modify the environment.
So, environment modification that's completely automated... sounds like a recipe for disaster, right? Sure, but if you test it thoroughly with enough configs and hone your permissions down, you minimise risk, and that's what we use Ravello for: testing scripts that could cause collateral damage before putting them anywhere near production. Ravello allows us to spin up and down environments that are similar, if not identical, to customers' environments, or test cases that we think may break our automation. Their Blueprint feature makes it super-easy to spin these up and destroy them to test whether a new feature in the code will break under certain environmental conditions. We are building up a library of VM profiles (different ESXi builds or vCenters) and blueprints that simulate these environments and allow us to deploy, for very reasonable cost and minimal effort, the same conditions these scripts will see in the wild. And the icing on the cake? We don't break our own lab or customer infrastructure. Win-win.

Repeatable, robust and shareable

Obviously when you create a lab of any kind it takes a time investment, so if you are able to keep that environment and spin it up many times for the same initial time investment, isn't it a no-brainer? Yep, and that's just what we were able to achieve with Ravello's blueprint feature. I created a single blueprint with a common environment (4x hosts, 1x vCenter and shared storage): pretty standard, nothing extravagant, but something we see quite often. I was able to spin this up in a few hours (including the time to create an ESXi template, a vCenter install and shared storage with a NexentaStor CE VM), which is not bad considering most of those components are now reusable in other blueprints.
Obviously I'm not the only one who works in the division, so this was shared amongst all the other engineers, and now, regardless of who it is, if we are collaborating on a development effort we all have consistent test conditions that were otherwise impossible to reproduce. Have we found a new factor that may affect how the script works? Run the blueprint, make the environmental change, spin it down, create a new blueprint and there we are: consistent environments for all members with this new variable. Perfect.

Summary

Overall we've been very happy with Ravello. It fits our needs perfectly and I can see it (with the API support) becoming a CI tool for us, with tests run automatically on different environments at milestones. We have a way to go yet, but Ravello have been more than helpful and we don't foresee this being a problem!


On-demand Cyber Ranges on AWS using Ravello - making cybersecurity development, testing and training affordable & accessible for enterprises

Today’s cyber threat landscape necessitates that your organization base its approach to security on the assumption that the adversary is already inside your network. So how do we prepare your organization to take back your network and to protect your data? SimSpace is proud to introduce our Virtual Clone Network (VCN) technology, which provides realistic environments, adversary attack campaigns, and training and assessment tools for your organization’s cybersecurity development, testing, and training requirements. With SimSpace’s VCN, you are no longer restricted to small networks or to virtual environments that are not representative of your specific network environment and typical traffic. SimSpace VCN is a first-of-its-kind offering because it utilizes capacity from Amazon Web Services and Google Cloud to provide full-featured, pre-configured and tailorable cyber ranges that are deployed on-demand in fully isolated environments, made possible by Ravello Systems’ nested virtualization and software-defined networking technology. A SimSpace VCN can span in size from tens of hosts to several hundred and, for urgent requirements, we have built several easily accessible models, including enterprise environments, public utilities, financial institutions, and military networks. Depending on your circumstances, you can customize and extend the existing pre-defined networks or you can start from scratch and generate an entire network tailored to meet your specific organizational needs. We will give you the tools necessary to rapidly create, configure and validate your own customized virtual environment. The process to build and configure your VCN is fully automated; we just need to know your requirements. Leveraging both the advantages of the cloud and Ravello's cutting-edge HVX technology, you can spin up the environment of your choosing, for just the amount of time that you need it, and then suspend or delete it when finished.
You no longer need a dedicated staff to build, operate, and maintain custom, in-house, and often separate development, test and training environments. Instead, focus your staff and resources on what you need the most: being prepared to be effective against the threat. The technology used to build and run our Virtual Clone Networks was developed after a decade of investment by the U.S. Military to provide high-fidelity virtual environments for the DoD testing and training communities. Now SimSpace can offer the same technology that powers the government’s most sophisticated cyber ranges to your business in a more affordable and accessible manner.

Uses

So what can you do with a Virtual Clone Network? Some examples include:
- Test and development environments to create the next generation of cybersecurity solutions
- Risk reduction for the introduction of new cybersecurity solutions into production environments
- Hypothesis testing for real-time responses to cyber incidents
- Disruptive-capable assessments that complement traditional pen-testing of production networks
- Comparative analysis of existing or new cybersecurity solutions against competing alternatives
- Virtual environments for pen-testing risk-reduction analyses
- Assessment of the effectiveness of pen-testing-derived cybersecurity solutions
- Assessments of individual and team cybersecurity performance
- Individual training for cybersecurity and pen-testing operators
- Cybersecurity team training
- Range-based cyber exercises

Virtual Clone Network Capabilities

Predefined or tailored network environments. Your network can be chosen directly from a suite of predefined networks, tailored by extending or adjusting one of the predefined networks, or built from scratch as a custom network to meet your specific needs. The predefined networks range in scale from tens of nodes to hundreds of network machines.
These pre-built networks are representative of a variety of organizations: enterprises, the defense industrial base, financial institutions, utilities or military networks. These virtual networks are all self-contained, that is, isolated from the Internet, in order to prevent any accidental spillage or inadvertent attacks on real-world sites. Our intent is to provide a safe environment where you can test and train without unnecessary consequences. Despite the advantages of being isolated, effective testing and training still require a realistic Internet within our VCNs. To accomplish this, we re-host thousands of sampled web, email, and ftp sites. We also provide root and domain DNS servers and core BGP routing. Within the VCNs, just as in a typical network, we run virtual routers, full Windows Domain Controllers, Exchange, IIS, DNS and file servers. Linux, Unix and other server and client operating systems are also included along with their popular services. As much as system administrators would like to think that their networks are perfectly constructed and aligned, the reality is that there are many misconfigurations, in addition to legacy and unwanted traffic. So, we add that in as well. For each of the services, we also include real content in the sites and services so that our virtual users can interact with that content in a realistic manner (e.g. send/receive/open email attachments, click on embedded URLs, etc). We are also able to tailor and reproduce important features of many domain-specific or custom applications and services that are critical to your business area, so they too may be included to fully represent the defensive posture of your organization and challenge your defensive team. We are also able to provide a wide set of operating systems, services, data, and user accounts because we have developed the tools and processes to fully automate both the setup and configuration of those systems.
Realistic, host-based user activity

To create high-fidelity replicas of networks, we need more than just the hosts, servers, and infrastructure to match the architecture. To be truly realistic, we also need to recreate all the user activity, both productive and unproductive, that we see on a daily basis. Users today mix their personal and professional lives and vary in their level of productivity, focus, application usage, social networking and awareness of cybersecurity threats. To generate this level of realism, SimSpace provides the most advanced user-modeling and traffic-generation capability available to make the VCNs come alive. Each host on the network is controlled by a virtual user agent who logs in each morning and uses real applications like Internet Explorer, Firefox, MS Office and Windows Explorer to perform their daily activities. As every Netizen is like a snowflake, unique in their own way, our virtual user agents are programmed with their own individual characteristics. Each user has their own unique identity, accounts, social and professional networks, daily schedule, operating behavior and preference for which applications to use, when and how often. Just like in the real world, users interact with other users, compose emails, open, edit and send documents to co-workers and external collaborators to accomplish their daily tasks. These virtual users are goal-driven and reactive, which means they can respond to predefined instructions and sense their environment and any changes within it. Therefore, if a particular service or application becomes unresponsive, they can adjust their behaviors and application usage to complete the tasks. This rich and immersive environment generates the daily host and network activity that sophisticated attackers use to hide or obscure their presence. This typical “top cover” allows them to exploit user applications and operating systems (e.g.
spear-phishing, drive-by-downloads) to gain a foothold in the network and operate covertly. The challenge for the defensive operators and their tools is to identify and stop attackers who are operating alongside legitimate users. If successful, of course, your cybersecurity team will prevent the adversary from carrying out its goals and will minimize the disruption to your business operations.

Defensive tools and applications

Ravello's unique and powerful Layer 2 networking and nesting technology allows us to integrate open-source and commercial defensive and offensive tools into a SimSpace VCN. Ravello is the only cloud provider in the industry with these robust and innovative networking technologies. SimSpace VCNs are preloaded with popular security solutions like pfSense, Security Onion, OpenVAS and Kali Linux, configured according to industry best practices. Depending on your requirements, these typical cybersecurity tools can be replaced or combined with other, more appropriate solutions. By loading your specific configuration files and rule sets, your VCN becomes more tailored to your environment and, in turn, enhances your training, testing, and assessment results.

Model sophisticated adversaries

SimSpace’s VCNs, regardless of whether they are predefined or tailored, come with some of the most advanced capabilities for simulating real users. But what about simulating advanced adversaries? To simulate a real advanced threat, you need to simulate advanced tactics, and that starts with zero-day emulation. In the Virtual Clone Network, every piece of software has built-in memory corruption exploits, with remote, client-side and local exploit options. This offers the most advanced zero-day emulation threat capability against every host in your VCN, regardless of its patch level or operating system. Want to see how well your company responds to a zero day? SimSpace VCNs can put your team to the test!
SimSpace Breach is the most advanced penetration-testing tool in existence. With SimSpace Breach, you can enable your Red Teams not only to work more efficiently, but to deliver a higher threat capability in a shorter amount of time than ever before. With the same number of red team operators, more threat engagements of higher caliber can be accomplished in a similar time period. In addition, SimSpace Breach has instrumentation that works within the Virtual Clone Network to give you better insights into your tooling, people and processes.

Assessment Tools

Now that we have provided you with a realistic environment and the ability to recreate sophisticated adversaries, how will your cybersecurity team, or the tools they rely upon, perform? To answer these questions, we have developed a suite of assessment tools to help. Your VCN is a highly instrumented environment that can provide insights into the defensive effectiveness of your team as well as the impact to your organization’s cyber environment from an attack. Specifically, we can help you understand 1) what specific attacker actions and movements were performed, 2) how many virtual users experienced service disruptions, 3) what the response time was for the defenders to identify the attacker, repel them from the network, and then, if required, restore business operations, and 4) what the mission impact was during the attack. For each testing or training objective, we are able to capture specific objective performance metrics and allow you to assess your team’s effectiveness and, over time, their rate of improvement.

Availability

We are unveiling the new technology and announcing beta access today.

About SimSpace

SimSpace’s mission is to measurably improve, in a cost-effective way, the cyber capabilities of your enterprise. Who we are: an innovative cybersecurity company leveraging decades of experience working for the U.S. Military and DoD laboratories to provide next-generation cyber assessments, training, and testing.
SimSpace provides high-fidelity simulated network environments, or Virtual Clone Networks (VCN), for tailored, interactive, and scalable cyber events along with specialized software tools for activity replay, mission impact evaluation, and network monitoring. SimSpace focuses on your organization’s entire cybersecurity capability — People, Process, and Technology — successfully integrating and validating testing, training, and assessments for individuals, small-team and large-force training exercises for 100+ operators.


How to setup VXLAN using Arista vEOS to seamlessly connect data-centers

Author: Matt Conran Matt Conran is a Network Architect based out of Ireland and a prolific blogger at Network Insight. In his spare time he writes on topics ranging from SDN, OpenFlow, NFV, OpenStack, Cloud, Automation and Programming. The following post explains the key drivers for overlay virtual networking and a popular overlay technology called Virtual Extensible LAN (VXLAN). It walks through how to set up a working VXLAN design based on Arista vEOS on Ravello’s Networking Smart Lab. Additional command snippets from a multicast VXLAN setup using Cisco CSR1000V are also included.

Evolving Data Centre

The evolution of the data centre started with cloud computing, when Amazon introduced the first commercial product to provide a “cloud anytime and anywhere” service offering. The result has driven a major paradigm shift in networking, which forces us to rethink data centre architectures and how they service applications. Design goals must be in line with new scalability and multi-tenancy requirements that existing technologies cannot meet. Overlay-based architectures provide some protection against switch table explosion. They offer a level of indirection that keeps switch table sizes from increasing as the number of supported hosts increases. This combats some of the scalability concerns. Putting design complexities aside, an overlay is a tunnel between two endpoints, enabling frames to be exchanged between those endpoints. As the diagram below shows, it’s a network that is built on top of another network. The overlay is a logical Layer 2 network and the underlay could be an IP or Layer 2 Ethernet fabric. The ability to decouple virtual from physical drives new scalability requirements that push intelligence to the data centre edge. For example, the number of virtual machines deployed on a single physical server has increased. Each virtual machine has one or more virtual network cards (vNICs), each with a unique MAC address.
Traditional switched environments consisted of end hosts where each switch port connected a single server, viewed as a single device. The corresponding upstream switch processed a low number of MAC addresses. Now, with the advent of server virtualization, the ToR switch accommodates multiple hosts on the same port. As a result, MAC table sizes must increase by an order of magnitude to accommodate the increased scalability requirement driven by server virtualization. Some vendors support only small MAC address tables, which may result in MAC address table overflow, causing unnecessary flooding and operational problems. Over the course of the last 10 years, the application has changed. Customers require complex application tiers in their public or private IaaS cloud deployments. Application stacks usually contain a number of tiers that require firewalling and/or load balancing services at the edges and within the application/database tier. Additionally, in order to support this infrastructure you need Layer 3 or Layer 2 segments between tiers; Layer 2 supports non-routable or keepalive packets. The infrastructure must support multi-tenancy for numerous application stacks in the same cloud environment, and each application stack must maintain independence from the others. Application developers continually request similar application connectivity models, such as the same IP addressing and security models for applications, both for on-premise and cloud services. Applications that are “cloud centric” and born for the cloud are easy to support from a network perspective. On the other hand, applications optimized for “cloud-ready” status may require some network tweaking and overlay models to support coexistence. Ideally, application teams want to move the exact same model to cloud environments while keeping every application as an independent tenant.
You may think that 4000 VLANs are enough, but once you start deploying each application as a segment, and each application has numerous segments, the 4000 VLAN limit is soon reached. Customers want the ability to run any VM on any server. This type of unlimited and distributed workload placement results in large virtual segments. Live VM mobility also requires Layer 2 connectivity between the virtual machines. All this, coupled with quick on-demand provisioning, is changing the provisioning paradigm and the technology choices we employ to design data centres.

The Solution – Overlay Networking

We need a technology that decouples the physical transport from the virtual network. The transport should run IP only and all complexity should be performed at the edges. Keep complexity at the edge of the network and let the core do what it should do: forward packets as fast as possible, without making too many decisions. The idea of having a simple core and smart edges carrying out the intelligence (encapsulation) allows you to build a scalable architecture. Overlay virtual networking supports this concept: all virtual machine traffic generated between hosts is encapsulated and becomes an IP application, similar to how Skype (voice) uses IP to work on the Internet. The result of this logical connection is that the edge of the data centre has moved and potentially no longer exists in the actual physical network infrastructure. In a hypervisor environment, one could now say the logical overlay is decoupled from the physical infrastructure. Virtual Extensible LAN (VXLAN) is a LAN extension over a Layer 3 network. It relays Layer 2 traffic over different IP subnets. It addresses the current scalability concerns with VLANs and can be used to stretch Layer 2 segments over a Layer 3 core, supporting VM mobility and stretched clusters.
It may also be used as a Data Centre Interconnect (DCI) technology, allowing customers to have Multi-Chassis Link Aggregation Group (MLAG) configurations between two geographically dispersed sites. VXLAN identifies individual Layer 2 domains by a 24-bit virtual network identifier (VNI). The following configuration snippet from a Cisco CSR1000V displays VNI 4096 mapped to multicast group 225.1.1.1. In the CSR1000V setup, PIM sparse-mode (Protocol Independent Multicast) is enabled in the core. The VNI identifies the particular VXLAN segment and is typically determined based on the IEEE 802.1Q VLAN tag of the frame received. The 24-bit segment ID enables up to 16 million virtual segments in your network. VXLAN employs a MAC-over-IP/UDP overlay scheme that allows unmodified Layer 2 Ethernet frames to be encapsulated in IP/UDP datagrams and relayed transparently over the IP network. The main components in a VXLAN architecture are the virtual tunnel end-points (VTEPs) and virtual tunnel interfaces (VTIs). The VTEP performs the encapsulation and de-encapsulation of the Layer 2 traffic. Each VTEP is identified by an IP address (assigned by the VTI), deployed as the tunnel endpoint. Earlier VXLAN implementations consisted of virtual Layer 2 segments with flooding via IP multicast in the transport network. They relied on traditional Layer 2 flooding/learning behaviour and did not have an explicit control plane. Broadcast/multicast and unknown unicast frames were flooded similarly to how they flood in a physical Ethernet network, the difference being that they were flooded with IP multicast. The configuration snippet from the CSR1000V setup displays multicast group 225.1.1.1 signalled with flag BCx (VXLAN group). Multicast VXLAN scales much better than VLANs, but you are still limited by the number of multicast entries. Also, multicast in the core is undesirable, especially from a network manageability point of view: it is another feature that has to be managed, maintained and troubleshot.
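The CSR1000V snippets referenced above appeared as images in the original post. Schematically, mapping VNI 4096 to multicast group 225.1.1.1 on IOS-XE looks something like the sketch below; the loopback address is an assumption, and the bridge-domain side of the configuration (which attaches the VNI to a Layer 2 segment) varies by platform and is omitted here.

```
ip multicast-routing distributed
!
interface Loopback0
 ip address 10.0.0.1 255.255.255.255
 ip pim sparse-mode
!
! Network Virtualization Endpoint: sources VXLAN traffic from Loopback0
! and floods BUM traffic for VNI 4096 via multicast group 225.1.1.1
interface nve1
 source-interface Loopback0
 member vni 4096 mcast-group 225.1.1.1
```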
A more scalable solution is to use unicast VXLAN. There are methods to introduce control planes to VXLAN, such as LISP and BGP EVPN, and these will be discussed in later posts.

Arista Ravello Lab Setup

I set up my Arista vEOS lab environment on Ravello’s Network Smart Lab. It was easy to build the infrastructure using Ravello’s application canvas. One can either import the VMs with the Ravello import utility or use the Ravello Repo to copy existing ‘blueprints’. I was able to source the Arista VMs from the Ravello Repo. Once you have the VMs set up, move to the “Application” tab to create a new application. The application tab is where you build your deployment before you publish your virtual network deployment to either Google or Amazon Cloud. Ravello gives you the opportunity to configure some basic settings from the right-hand sections. I jumped straight to the network section and added the network interface settings. The settings you enter in the canvas set up the basic network connectivity. For example, if you configure a NIC with address 10.1.1.1/24, Ravello will automatically create the vSwitch. If you require inbound SSH access, add this external service from the services section. Ravello will automatically give you a public IP address and DNS name, allowing the use of your local SSH client. Once the basic topology is set up, publish to the cloud and start configuring advanced features via the CLI.

VXLAN Setup

The Arista vEOS design consists of a 4-node set-up. vEOS is EOS in a VM; it’s free to download and can be found here. The two cores, R1 and R2, are BGP peers, simulating the IP core. There are no VLANs spanning the core; it’s a simple IP backbone, forwarding IP packets. Host1 and Host2 are IP endpoints and connect to access ports on R1 (VLAN 10) and R2 (VLAN 11) respectively. They are in the same subnet, separated by Layer 3. The hosts do not perform any routing.
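As a sketch of the kind of Arista vEOS VXLAN interface configuration used in this design: VLAN 10 and VNI 10010 are from the text, R2's VTEP loopback is 172.16.0.2 as noted below, while R1's loopback address here is an assumption chosen for illustration.

```
! R1 (sketch): map VLAN 10 to VNI 10010, VTEP sourced from Loopback0
interface Loopback0
   ip address 172.16.0.1/32
!
interface Vxlan1
   vxlan source-interface Loopback0
   vxlan udp-port 4789
   vxlan vlan 10 vni 10010
   vxlan flood vtep 172.16.0.2
```

On R2 the mirror-image configuration would map VLAN 11 to the same VNI and flood towards R1's VTEP address, so the two hosts in the same subnet can reach each other across the Layer 3 core.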
Below is a VXLAN configuration snippet for R1. The majority of the VXLAN configuration is done under the Vxlan interface settings. The next snippet displays the VXLAN interface for R2. The loopbacks are used as the VXLAN VTEP endpoints, and you can see that R2’s VTEP loopback address is 172.16.0.2. The VNI is set to 10010, used to identify the VXLAN segment. The MAC address table for R2 below shows the MAC address for H2 only. The core supports Layer 3 only, and the only routes learnt are the VTEP endpoints. Configurations for Arista vEOS can be found at this GitHub account.

Conclusion

VXLAN is a widely used data-center interconnect (DCI) technology and can be implemented using Arista vEOS or Cisco CSR1000V to seamlessly connect data-centers. Ravello’s Networking Smart Lab provides an easy way to model and test a VXLAN design before it is rolled out into production infrastructure.


Running VMware VMs on Ravello HVX versus installing ESXi hypervisor itself on Ravello

We recently announced general availability of InceptionSX, which lets you run the nested ESXi hypervisor on AWS or Google Cloud. But for the last two years we have had customers running their enterprise application environments with VMware VMs and complex networking on Ravello using just HVX. The setup, installation and performance are entirely different for the two, so here is a quick cheat sheet explaining the differences.

HVX: Running VMware VMs or virtual appliances on Ravello

- Useful for: Labs for enterprise applications (virtualized workloads) such as SharePoint, Oracle, custom Java or .NET applications; labs for VMware virtual appliances such as F5, CheckPoint, Arista, Cisco.
- Architecture: You can run all your virtualized enterprise workloads with existing VMware VMs and existing VMware virtual appliances on Ravello HVX, without any VM conversions required. Ravello has developed a nested hypervisor with software-defined networking, called HVX, that runs in cloud VMs instead of running on bare metal servers. It runs on AWS/Google and exposes the same VMware interfaces, drivers and networking that the application expects.
- Typical use cases: Enterprise app dev/test environments; sales demos and POCs for any software environment such as virtual appliances; virtual training labs in the cloud; self-learning/home labs for enterprise applications and virtual appliances.
- Performance: HVX is very lightweight and hardly adds any performance overhead. Expect to see the same overhead as going from physical to virtual when going from virtual to Ravello.
- Installation & licensing: No ESXi install or licensing required.

InceptionSX: Running the nested ESXi hypervisor on Ravello

- Useful for: Labs for testing hypervisor features such as long-distance vMotion, related VMware products such as vCenter, VSAN, NSX and vRealize Operations, and VMware partner products such as Veeam, Zerto and Nutanix, which need the hypervisor itself as part of the lab.
- Architecture: Without Ravello, it is not possible to run ESXi on AWS/Google Cloud. This is because the ESXi hypervisor, at boot time, looks for hardware extensions and instead discovers that AWS is running Xen and Google is running KVM. However, Ravello HVX exposes hardware extensions like Intel VT and AMD SVM in the public cloud so that you can run the ESXi hypervisor on it if required. This is for use cases where you need to test ESXi hypervisor features, and this mode is called InceptionSX.
- Typical use cases: ESXi dev/test environments for the hypervisor or software with ESXi modules; ESXi sales demos and POCs; ESXi or vSphere virtual training labs; self-learning/home labs for ESXi, vCenter, NSX, VSAN etc.
- Performance: InceptionSX runs ESXi on HVX on either Xen (AWS) or KVM (Google). The performance is still very good but slightly lower than just HVX due to the extra layer of nesting.
- Installation & licensing: Install your own vSphere ISO and use your own vSphere license.


How to configure SSH and RDP access for nested VMs running on top of ESXi hosts in Ravello Systems

In this document we describe how to enable ingress connectivity to the nested VMs running on top of VMware ESXi™/vCenter in Ravello, also referred to as 2nd-level VMs. The Ravello DHCP servers do not give out IPs to nested virtual machines, so we need to configure them with static IPs and create a virtual IP so that they can be accessed from the Internet, or remotely from outside the Ravello application environment. For more information on using DHCP, and the additional configuration needed, see this post from Ohad detailing the steps. The steps included here:

1. Add an additional NIC to a hosted ESXi node in the Ravello UI. This additional NIC will be used to create an uplink for a segregated vSwitch on ESXi. The nested VMs will use this switch as an external gateway.

2. Define a subnet for the nested VMs by configuring the newly created NIC with IP/subnet information. In this example, I am using subnet 10.20.30.x/24 to define my external network:

IP address: 10.20.30.3
Subnet: 255.255.255.0
Gateway: 10.20.30.2
DNS: 10.20.30.1 (optional)

The IP address 10.20.30.3 can be used for the first nested guest VM or it can remain a placeholder. The optional DNS server can be any IP; Ravello will create the DNS server as defined. You may also substitute your own DNS server running within the same application environment. This DNS server can then be assigned statically in your nested ESXi VMs. To reserve additional IPs for nested VMs, select Advanced > Add and enter a different IP address from the same subnet, using the same gateway address as in step 2.

3. To provide ingress connectivity to a specific TCP/UDP port, go to the Services tab and click Add Supplied Service. Then fill in a port number and select the designated address of the VM (i.e. 10.20.30.10, as used in the previous step). Common ports to enable are: 22 for SSH, 3389 for RDP, 443 for HTTPS, 80 for HTTP. To configure 1:1 network address translation, select “IP” as the protocol in the service. This forwards all traffic from the public IP to the private IP, and can be useful when running a nested virtual router or network virtualization software such as VMware NSX. To use port forwarding without consuming a public IP for a service, configure “Port forwarding” on the virtual interface. When the virtual machine has been started, the port mapping shows in the summary of the virtual machine. In my case, port 22 on the virtual machine is reachable through 104.197.108.68:10001. It is also possible to provision more than one routed subnet on the same physical ESXi interface. This saves you from having to create a new vSwitch and interface when you wish to configure virtual machines to use a separate subnet. As shown below, I’ve created an additional virtual IP in a separate subnet with a separate router. When we look at the network topology, a separate router is created and traffic can be routed between the virtual machines in different subnets.

4. Using the vSphere Client (shown here) or the vSphere Web Client, connect either directly to the hosted ESXi node or to vCenter and create a new vSwitch of type “Virtual machine port group” (not VMkernel) on the ESXi node that will run the nested VMs. Bind it to the additional NIC created in step 1. VMs that need external connectivity must have their vNICs assigned to a port group on this vSwitch. Do not assign an IP address or VMkernel IP stack to the new vSwitch.

5. For each nested VM you want to access remotely, assign a static IP configuration using an address and gateway from step 2:

IP: 10.20.30.x
Netmask: 255.255.255.0
Gateway: 10.20.30.2
DNS: 10.20.30.1, 10.0.0.1

6. Test external access by connecting to a known website or pinging a remote server from within a 2nd-level VM. After verifying external access, you can test accessing the VM from an external source. You can find the external IP/hostname on the summary tab of the ESXi node; there is a drop-down list of the available NICs with IP information for each additional IP you defined.
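The guest-only vSwitch and port group can also be created from the ESXi shell instead of the vSphere Client. A minimal sketch under stated assumptions: the uplink NIC added in the Ravello UI shows up as vmnic1, and the names NestedSwitch/NestedVMs are hypothetical placeholders.

```shell
# Hypothetical names; vmnic1 is assumed to be the additional NIC from the Ravello UI.
VSWITCH=NestedSwitch
PORTGROUP=NestedVMs
UPLINK=vmnic1

# Assemble the esxcli commands (run these in the ESXi shell, e.g. over SSH).
cmds="esxcli network vswitch standard add -v $VSWITCH
esxcli network vswitch standard uplink add -v $VSWITCH -u $UPLINK
esxcli network vswitch standard portgroup add -v $VSWITCH -p $PORTGROUP"

printf '%s\n' "$cmds"
```

Nested VMs then attach their vNICs to the NestedVMs port group, mirroring step 4 above.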


InfinityDC: How to extend your on-premises vSphere data center to the cloud

In this blog I’m going to focus on how to extend your VMware vSphere on-premises data center to the public cloud. With Ravello, you can run ESXi nodes in AWS or Google Cloud and easily connect them to your data center. Hence, you can spin up as many VMware ESXi nodes as you need, on demand, and simply pay for what you use. We call this the InfinityDC. Both the on-premises data center and the ESXi nodes running in AWS can be managed using the same VMware vCenter, providing a seamless, scalable fabric. My data center setup consists of 3 ESXi nodes, 1 NFS server and 1 vCenter appliance to manage the whole deployment. The Ravello application in Google Cloud has 3 ESXi hosts and represents a VMware vSphere environment you set up in the public cloud. The Ravello “application” in Google Cloud and my data center are completely isolated; I have used pfSense to establish a VPN between these two sites. The vCenter in my data center controls all 6 ESXi hosts. The VPN is configured using the instructions outlined in this blog.

Setting up the VPN

The VPN is configured as OpenVPN peer-to-peer (better performance than IPsec in Ravello), as explained in the VPN blog.

The on-premises data center

Notes:
The OpenVPN server is running in this environment.
All machines are using static IPs only.
In this example, ESXi machines have only one network interface.
In this example, the vCenter machine has no inbound Internet connection (for security reasons, this entire environment is closed to inbound connections, other than the VPN server of course). I have used the console (rather than RDP) to work with it.
In order to connect over VPN to remote ESXi machines from the remote data center, I had to reduce the MTU of the vmk0 interface on all 3 ESXi hosts running in the on-premises DC (see the “IP fragmentation/Jumbo packets/MTU” section in the VPN blog). I reduced the MTU from 1500 to 1300 by running the following command:

esxcli network ip interface set -i vmk0 -m 1300

Ravello application running on Google Cloud

Notes:
The OpenVPN client is running in this application environment.
All machines are using static IPs only.
In this example, ESXi machines have only one network interface.
In order to connect over VPN to remote ESXi machines, I had to reduce the MTU of the vmk0 interface on all 3 ESXi hosts running in Google (see the “IP fragmentation/Jumbo packets/MTU” section in the VPN blog). I reduced the MTU from 1500 to 1300 by running the same command:

esxcli network ip interface set -i vmk0 -m 1300

Running VMs in the remote data center

To deploy VMs in the Ravello ESXi environment, you have several options. The most straightforward way is:
Copy the needed files (either ISO files or actual VMs (vmdk+ovf)) to the remote NFS server. You can do this using SCP, for example, or using the vSphere client plugin.
Once the files are located on the remote NFS machine, you can deploy/install the VM and run it normally.

Long distance vMotion

Since vCenter 6.0, VMware supports long distance vMotion. You can perform such a vMotion and move running VMs from your on-premises data center to your Ravello remote data center without any downtime.
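The MTU change has to be repeated on every ESXi host on both sides. A small sketch that prints the command for each host and could apply it over SSH; the host names are hypothetical placeholders for your own environment.

```shell
# Hypothetical host names; replace with your own ESXi hosts.
HOSTS="esxi-onprem-1 esxi-onprem-2 esxi-onprem-3"
MTU=1300
CMD="esxcli network ip interface set -i vmk0 -m $MTU"

for h in $HOSTS; do
  echo "[$h] $CMD"
  # ssh root@"$h" "$CMD"   # uncomment to actually apply the change on each host
done
```

This keeps the MTU value in one place, so both the on-premises and the cloud-side hosts end up with a consistent 1300-byte vmk0 MTU.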


Preparing Linux Images for Ravello

One of the great things about building and running applications in Ravello is that you can, almost literally, drag and drop in machines from a range of virtualization platforms - VirtualBox, VMware, QEMU-KVM, pretty much anything really. Then you can use these images to build your environment and run them without caring about the underlying cloud provider. In this blog, I will describe how to build a machine image for Ravello and some of the key modifications that need to be made, both to a generic image and to a RHEL7 image, in order for it to work properly. To get started, you only need a disk image and machine information. However, it is incredibly important to remember that the machine image in question is, by necessity, going to be copied and cloned when you actually use it in a blueprint in Ravello. As everyone who has done clones in a virtualization environment knows, if a machine is going to be cloned or copied you need to disable anything that was designed to lock a physical NIC to a machine. Almost universally this includes udev, with a popular technique being to drop a directory in udev (i.e. mkdir -p /etc/udev/rules.d/60-net.rules). RHEL7, and thus CentOS 7, adds a couple of additional wrinkles, because of course it does. The changes introduced in the jump from EL6 to EL7 may very well be the most radical of any Linux distribution update ever. In addition to normal things like software revisions we have:

SystemD replacing init
A major version kernel update
grub and anaconda changes
NetworkManager replacing /etc/init.d/network
biosdevname now complementing udev
And at one point in the beta, net-tools was not installed as part of core.

Ubuntu, Debian and Fedora have/had similar changes, but they’ve been staggered across multiple releases; Red Hat got them all in one. Anyway, in a world where init has gone away, the key changes here for virtual and cloud environments ironically come down to the network.

Udev pinning interfaces to MAC addresses was a problem with cloning virtual hardware, but fairly simple to resolve: drop a directory or device into udev (mkdir -p /etc/udev/rules.d/60-net.rules). The change to NetworkManager*, however, requires some fiddly bits, and biosdevname/net.ifnames requires that you pass flags to the kernel at boot.

* NetworkManager, strictly speaking, does not have to be disabled, but a number of automation tools or modules make a lot of assumptions around networking and so break on EL6 -> EL7 - often because of interface names and NetworkManager (e.g. PackStack). It’s easier just to turn it off for now.

We have a fork here; if you’re kicking the machine you can add the options to the bootloader section:

bootloader --location=mbr --boot-drive=sda --append "net.ifnames=0 biosdevname=0"

If you’re working on an existing image, you’ll need to update the grub config:

sed -i -e 's/quiet/net.ifnames=0 biosdevname=0 quiet/' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg

And to disable NetworkManager:

systemctl disable NetworkManager
systemctl enable network

After doing both of these changes you’ll need to manually configure the interface eth0, as on an EL6 machine:

rm -f /etc/sysconfig/network-scripts/ifcfg-e*
cat
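The cat command above was truncated in this post; it writes an EL6-style ifcfg file for eth0. A sketch of what it might look like, assuming a DHCP-managed interface (the scratch directory below is only so the sketch is safe to run anywhere; on a real image the target is /etc/sysconfig/network-scripts/ifcfg-eth0):

```shell
# Write a minimal EL6-style ifcfg-eth0 via heredoc.
# Scratch directory so this sketch can run anywhere; on a real image,
# write to /etc/sysconfig/network-scripts/ifcfg-eth0 instead.
DIR=$(mktemp -d)
cat > "$DIR/ifcfg-eth0" <<'EOF'
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=dhcp
EOF
cat "$DIR/ifcfg-eth0"
```

BOOTPROTO=dhcp fits the Ravello case, where the platform's DHCP hands addresses to first-level VMs; for a static setup you would add IPADDR, NETMASK and GATEWAY lines instead.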


How to setup VPN for environment running in Ravello from a vanilla pfSense image

Ravello’s nested virtualization and overlay networking technology allows for fast application development and testing by encapsulating entire application environments in cloud-agnostic capsules. This capability makes it easy to quickly spin up hundreds of versions of the capsules in the cloud, which is typical of a continuous integration setup. There is often a need to connect the Ravello environment to servers, databases, repositories, etc. in another public cloud or an on-premises private cloud. For example, in a continuous integration setup where the code repository is on premises, there needs to be a connection to the Ravello environment in the cloud via a secure tunnel. The goal of this article is to showcase how to set up a secure VPN between two Ravello environments, one in AWS EC2 and one in Google Cloud. This setup mocks a scenario where, while one environment is running on Ravello in either Google or AWS, the other could be an on-premises data center, a customer’s VPC in AWS, or some third-party data center.

IPsec or OpenVPN?

The two options we will describe here are IPsec and OpenVPN. Each has its known advantages and disadvantages, as described here. In addition, there are some Ravello-specific considerations:

In order for IPsec to work properly, you need a static public IP, so in Ravello you need to use an “elastic IP”. Currently, when using an elastic IP, you must use Ravello’s proxy/router, which has some performance bursts (this is scheduled to be fixed in the next few months). So please be aware that using IPsec in Ravello might hit performance bottlenecks every once in a while.

OpenVPN uses a client-server topology in which only the client needs to access the server. This is great for an on-premises OpenVPN client, as you do not need to open any firewall in your on-premises site. OpenVPN works better in Ravello with port forwarding (rather than an “elastic IP”). However, when working with port forwarding in Ravello, the destination port might change in some cases (for example when adding/removing additional supplied services). Such a port change forces a matching configuration change in the VPN client.

IPsec

The following settings will need to be adapted to your environment in order to route the traffic through the pfSense gateway, which will be one of the endpoints of the VPN tunnel.

Ravello GUI

WAN static IP: 10.10.10.1 (the other node will use 10.10.10.2)
WAN elastic IP: 85.190.178.133
LAN: 192.168.1.1 (the other node will use 192.168.2.1)
Supplied services (on the WAN interface): the UDP ports needed are 500 (phase 1 IPsec), 4500 (phase 2 IPsec) and 88 (Kerberos); the TCP ports needed are 80 and 443 (for the web interface).

pfSense GUI

The VPN IPsec tunnel settings will be created through the web interface of the pfSense virtual appliance. Copy and paste the elastic IP of your pfSense appliance into your browser (use the https:// prefix) to access the Web UI admin page (85.190.182.122). Login credentials are: username admin, password pfsense (unless you have changed it yourself earlier). The setup of the tunnel is comprised of two phases:

Phase 1 specifies how the tunnel connects to its remote peer (the other end of the tunnel).
Phase 2 specifies which local network traffic/subnets should be sent through the tunnel.

This division makes it possible for the tunnel to handle requests from multiple local subnets. In this example, there will only be one local subnet: 192.168.2.0/24 connecting to 192.168.1.0/24 through the tunnel that has the two endpoints 85.190.182.122 and 85.190.178.133. Before configuring IPsec, it is crucial not to forget to add firewall rules on the WAN interface to allow incoming connections (3 rules for UDP ports 88, 500 and 4500). Also, do not forget, of course, to enable IPsec.

Phase 1 configuration

Navigate to VPN > IPsec from the top navigation. Double click the first row in the table.
The main components of the Phase 1 configuration are:

Interface: which interface is used to communicate with the other end of the tunnel.
Remote Gateway: enter the elastic IP of the other end of the tunnel (85.190.178.133).
My identifier: used by the IPsec protocol to identify this end of the tunnel.
Peer identifier: the identifier of the other end of the tunnel.
Pre-shared key: the password, which needs to match on both ends of the tunnel.
Security algorithm: determines how the data being sent is encrypted and hashed. It’s important to ensure that these settings are mirrored on the other end of the tunnel.

Phase 2 configuration

Click Save to navigate back to the main VPN IPsec configuration page. Click the + sign under the first row in the table to see the Phase 2 summary. Double click the row to make changes. The main components of the Phase 2 configuration are:

Local Network: specifies which local network should be routed through the tunnel.
Remote Network: specifies the remote network that the tunnel connects to (for outbound traffic).
Key Exchange Algorithm: determines how the data being sent is encrypted and hashed. It’s important to ensure that these settings are mirrored on the other end of the tunnel.

The 2nd IPsec peer

If you are using another pfSense machine in another application, you will need to perform the previous steps in a similar way (just swap the source and destination addresses - you will also need to change the address on the LAN interface itself). If connecting your Ravello environment to a private data center, you will need to configure the VPN IPsec settings in your specific router. The steps and terminology involved are similar, but the navigation/UI differs from vendor to vendor.

Establish the VPN IPsec tunnel

To establish the VPN IPsec tunnel, navigate to Status > IPsec and click the “Connect” button on the right of the row.
A successful tunnel will display similarly to the screenshot below.

Using OpenVPN instead of IPsec

As written above, for improved performance in Ravello, OpenVPN peer-to-peer should be used instead of IPsec. This is because with OpenVPN you can control the port of the remote VPN endpoint, and can thereby use port forwarding instead of an elastic IP. Elastic IPs along with VPN can result in performance overhead due to the external Internet routing infrastructure used. To configure pfSense as OpenVPN peer-to-peer with a shared key, read this. It is crucial not to forget to add a firewall rule in the OpenVPN server to allow incoming connections. After setting up the OpenVPN client and server, set the OpenVPN server WAN interface to use port forwarding and add a supplied service on UDP port 1194. Then, in the OpenVPN client, you will need to set:

Server host or address: the hostname of the VPN server.
Server port: the port used for OpenVPN port forwarding (UDP 1194) on the OpenVPN server (usually 10002 - note that this port number might change after an OpenVPN VM restart).

All in all, the server configuration should look something like this, and the client should look something like this:

LAN testing

Allowing traffic on the OpenVPN/IPsec interface

To allow traffic between the two sides, you need to add a rule in the firewall. In my example, I allowed all OpenVPN traffic for any protocol on both sides (this includes ping/ICMP packets). A similar thing should be done when using IPsec instead of OpenVPN.

Testing traffic through the tunnel

Finally, to ensure that local traffic is routed through the tunnel, navigate to Diagnostics > Ping and ping a local IP address from the other end of the tunnel (note - make sure “Source Address” is set to “LAN”).
Networking configuration of other machines/VMs in the environment configured with VPN

If no inbound Internet access is required to the VMs in the Ravello application environment

These are the recommended settings for your VMs. You probably just need to be able to access the Internet from those VMs, and to access those VMs from other VMs in your application or at the remote site of the VPN. If you do for some reason need inbound access from the Internet to a VM, please go to the section below. To configure your machines to properly use the VPN, you need to:

Give those machines a static IP address (in this example I will use 192.168.2.2).
If you are using the 192.168.1.x or 192.168.2.x networks (as we recommend in this blog), set the netmask to 255.255.255.0. If you are using a different network, assign the matching netmask.
Set the gateway to the VPN machine’s address (in this example 192.168.2.1, as this is the VPN’s address).
No DNS is needed.

All in all, your network configuration should look something like this:

If the VMs in the Ravello application environment are configured with DHCP addresses

This requires additional networking configuration and should be avoided for now when using a VPN IPsec tunnel.

If inbound Internet access is required to the VMs in the Ravello application environment

If you need inbound traffic from the Internet to a VM, it is recommended that you add another network interface to the VM, using a new network in your application (10.20.0.x in my example here). You need to configure the routing table in your VM in such a way that connections to the remote application through the VPN are routed via the VPN as the gateway, and other traffic is routed using a new gateway defined in Ravello.

Hardening

Make sure to use only the following published services:

TCP ports 80 and 443 (for the web interface).
VPN ports: for IPsec, UDP ports 500 (phase 1), 4500 (phase 2) and 88 (Kerberos); for OpenVPN, UDP 1194.

Change the “admin” password: in the web interface, click “System” -> “User Manager”, click the “e” icon to edit, enter a new password twice (once for confirmation) and click “Save”.

If for some reason you are using HTTP and not HTTPS, switch to HTTPS web access: click “System” -> “Advanced”, change “HTTP” to “HTTPS” and click “Save” (at the bottom of the page). Please note that, depending on your certificate, you might get a warning message such as this the next time you enter the web interface.

Important Notes

IP fragmentation/Jumbo packets/MTU

When a packet is nearly the size of the maximum transmission unit (MTU) of the physical port, and it is encapsulated with IPsec headers, it will probably exceed the MTU of the interface. This causes the packet to be fragmented after encryption (post-fragmentation), which requires the IPsec peer to perform reassembly before decryption. In addition, fragmented IP packets sometimes cause other problems and can even get lost on the way. Moreover, in Ravello, an IPsec VPN needs an elastic IP (see here), which adds another layer of encapsulation and therefore increases the size of the packet. This scenario is even more common with jumbo packets, which are split into several maximum-MTU packets. It is therefore highly recommended to reduce the MTU size of the VMs running in Ravello to something well below 1500 (1300 to be 100% safe). If you do not reduce the MTU size of your VMs, packets might get lost.

Reverse DNS resolution

Ravello’s DNS sometimes uses the cloud provider’s DNS for resolving host names into IPs and for reverse-resolving IPs to host names. When connecting two applications in Ravello using DNS, you might have a routable IP address in the remote application with an internal IP - for example 192.168.1.2. If you try to nslookup this IP, you might get an internal cloud provider machine.
Just be aware of this situation when using DNS.
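The MTU recommendation in the notes above is easy to sanity-check with ping: an unfragmented ICMP echo payload is the MTU minus 28 bytes (20 for the IPv4 header, 8 for the ICMP header). A sketch computing the payload size; the target address is a hypothetical placeholder.

```shell
# Max unfragmented ping payload = MTU - 20 (IPv4 header) - 8 (ICMP header).
MTU=1300
PAYLOAD=$((MTU - 28))
echo "max unfragmented ping payload for MTU $MTU: $PAYLOAD bytes"
# On Linux: ping -M do -s "$PAYLOAD" 192.168.1.2   # -M do sets Don't Fragment
```

If a ping at that size with Don't Fragment set succeeds but a larger one fails, the VM is honoring the reduced MTU and the tunnel should not drop large packets.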


How to run the EMC Isilon OneFS simulator in a VMware ESXi environment on AWS and Google Cloud for user trials, demos and training

Isilon OneFS is a scale-out NAS storage solution from EMC that uses intelligent software to scale data across vast quantities of commodity hardware. It replaces the three layers of the traditional storage model - file system, volume manager and data protection - and provides a unified clustered file system with built-in scalable data protection, obviating the need for volume management. EMC makes the Isilon OneFS 7.2.0.1 Simulator available for download at no charge for non-production use. Installing it in a realistic data-center-like environment, to get a feel for the user interface and administrative tasks, requires ESXi infrastructure. Now, one could invest in hardware and set up a multi-host ESXi lab environment to install the EMC Isilon OneFS simulator. The alternative is to leverage public clouds like AWS and Google Cloud to set up an ESXi lab and install and configure the EMC Isilon OneFS simulator modules. AWS and Google Cloud do not natively support nested virtualization, but the Ravello platform HVX (running on top of AWS and Google Cloud) implements Intel VT/AMD-V (including NPT nested-pages support) in software, which enables users to run ESXi with hardware acceleration in AWS or Google. This enables you to build a complex and large ESXi lab setup which mimics your data center, and to run and test EMC Isilon simulator components on top of it. The entire lab can be provisioned on demand, is available across the world, and you pay only for what you use. In this blog we will cover installing and configuring an EMC Isilon OneFS Simulator 3-node cluster as a second-level guest running on nested Ravello ESXi hosts managed by VMware vSphere 6.0. Once we have the first Isilon node deployed within vSphere, we will save it as a vCenter template so that additional nodes are much easier to add to the Isilon cluster.

Prerequisites

A Ravello account. If you don’t already have one, you can start your free trial.
A working VMware vSphere 5.5 or 6.0 environment running on your Ravello cloud. More information can be found in this post.
One or more ESXi nodes set up with access to an external network, as described in this post.

Virtual infrastructure requirements

VMware ESXi 5.5.x or 6.0
VMware vCenter 6.0 (Windows or Linux)
VMware vCenter Converter Standalone 5.5 or 6.0

Disk space

17 GB of free space for the first node
16 GB per additional node

Additional VMs

Windows Server jump host within the Ravello environment/application

High-level overview of the steps involved

Download the EMC Isilon VM from the EMC trialware site (an EMC Support account is not required)
Deploy the VM using VMware vCenter Converter Standalone
Convert the VM for use with VMware vSphere
Deploy the VM to vCenter Server
Save the VM as a vCenter template and/or to the vSphere Library (for vCenter 6.0 only)
Configure the first node in the cluster
Configure subsequent nodes in the cluster

Ravello environment

VMware vSphere 6.0a
ESXi 6.0a
vCenter Server for Windows 6.0a

Each ESXi node requires 3 NICs:

1x external Ravello network
1x internal vCenter-only private network
1x internal private network bridged to the external Ravello network

Network template

Cluster name: ISI-01
Reserve a minimum of 3 IPs for each network

Internal-only Isilon network:
Int-A Low: 192.168.0.101 (NIC1)
Int-A High: 192.168.0.103 (NIC1)
Netmask: 255.255.255.0

External network:
Ext-1 Low: 20.0.0.101 (NIC2)
Ext-1 High: 20.0.0.103 (NIC2)
Netmask: 255.255.0.0
Gateway: 20.0.0.2

Ravello Networking Overview

ESXi vSphere networking configuration

For vSphere networking, in addition to the vSwitch configuration found in this post, add an additional vSwitch to each node for the internal Isilon network. Each node must be configured with local storage. If you have followed the instructions in this post, there should be a 100 GB local disk attached to your ESXi node. Within the vSphere Client, add this disk to each ESXi node.
ESXi Disk Configuration

ESXi NIC1 Network Configuration

ESXi NIC2 Network Configuration - note that the gateway is not defined. This creates our internal-only network for vSphere guests.

ESXi NIC3 Network Configuration - this is the internal vSphere network bridged to the external Ravello network, allowing access to the vSphere guest VMs from the jump host.

External service to allow access to the Isilon management console

1. Download the EMC Isilon OneFS Simulator

Download the EMC Isilon OneFS Simulator from EMC’s trialware site.

2. Deploy the Isilon VM using VMware vCenter Converter Standalone

Use the Converter Standalone version that matches your vSphere environment. Follow the step-by-step instructions from the “Running virtual nodes on ESX” section of the Virtual Isilon Install Guide PDF available in the Simulator download from EMC’s trialware site. More information about VMware vCenter Converter can be found on VMware’s Pubs site.

Ravello-specific configuration: before you save the VM as a template or power it on, set the typematic rate to 2000000:

keyboard.typematicMinDelay = 2000000

3. Save the VM as a vCenter template and/or to the vSphere Library (vCenter 6.0 only)

4. Configure the first node in the cluster

Follow the step-by-step instructions from the “Install the virtual Isilon cluster” section of the Virtual Isilon Install Guide PDF available in the Simulator download from EMC’s trialware site.

Ravello-specific configuration: deploy the Isilon template to the local storage of the first ESXi node.

5. Configure subsequent nodes in the cluster

Follow the step-by-step instructions from the “Add the rest of the nodes to the cluster” section of the “Virtual Isilon Install Guide.pdf” available in the Simulator download from EMC’s trialware site.

Ravello-specific configuration: deploy the Isilon template to the local storage of the remaining ESXi nodes.

Accessing the management console

To access the management console, open a browser to https://<Ext Network IP of node 1> and verify that the nodes are connected.

Cluster Overview

Special notes

Make sure that for the Isilon guest VMs running on top of ESXi in Ravello, the total number of guest CPUs is less than or equal to the ESXi host CPUs, and the same for memory. We have not tested this setup for heavy-duty file scanning and other Isilon functional tests; it is meant to be used for user trials, demos, training, etc. A lab with more intensive EMC Isilon OneFS functional operations would need performance optimizations on the underlying ESXi hosts running in Ravello.
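The sizing rule in the special notes (total guest CPUs and memory must not exceed the ESXi host's) can be checked with a little arithmetic before deploying. A sketch with hypothetical per-node numbers; substitute your own ESXi node size and guest shapes.

```shell
# Hypothetical sizing: an ESXi node with 4 CPUs / 16 GB, and three Isilon
# guests at 1 vCPU / 4 GB each on that node. Adjust to your lab.
HOST_CPUS=4; HOST_MEM_GB=16
GUESTS=3; GUEST_VCPUS=1; GUEST_MEM_GB=4

TOTAL_VCPUS=$((GUESTS * GUEST_VCPUS))
TOTAL_MEM=$((GUESTS * GUEST_MEM_GB))

if [ "$TOTAL_VCPUS" -le "$HOST_CPUS" ] && [ "$TOTAL_MEM" -le "$HOST_MEM_GB" ]; then
  echo "OK: $TOTAL_VCPUS vCPUs / ${TOTAL_MEM} GB fit within the ESXi host"
else
  echo "Overcommitted: reduce guests or grow the ESXi node"
fi
```

If the check fails, either spread the Isilon nodes across more ESXi hosts or give the Ravello ESXi VMs more CPUs and memory.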


How to setup vCenter 6.0 on Windows in nested ESXi on Ravello

Background This guide will show you how to install the vCenter 6.0 Windows version in your nested ESXi lab on Ravello. This does not cover VCSA 6.0 - that’s coming next. If you haven’t yet installed vSphere we suggest you start with: How to create vSphere 6.0 image on Ravello As most of you probably know besides implementing a hypervisor merely capable of running regular VMs, we’ve also implemented CPU virtualization extensions called VT-I for Intel or SVM for AMD cpus. These extensions, allow running other hypervisors such as KVM or VMware’s ESXi within the Ravello Cloud environment. f his blog will focus on installing and configuring VMware vCenter server 6.0 in a Ravello’s public cloud which can be extremely useful for testing and/or running ESXi enabled virtual labs. I will show here how to create your own VMware vCenter 6.0 image in the library for easily importing vCenter into any Ravello application. VMware vCenter has 2 different installation options, Deploying the Linux based appliance or installing on Windows Ravello’s recommendation is to install vCenter 6.0 Windows version. Installing vCenter server 6.0 Login to your Ravello account and create an new Application (do not use a pre-existing blueprint) and give it a name. From the library, Add the Windows VM previously uploaded/created. You can upload an existing windows VM image into your library from DC or upload windows ISO, take an empty Vm from Ravello library and install it Please note the following requirements: You must have at least 8 GB RAM. You must have at least 18 GB free disk space. It is highly recommended to use 4 vCPUs to increase performance of this VM. Enable and configure RDP as a published service.within the Application: Select RDP under services tab for this VM in Ravello Check it as external Make sure RDP service is running in Windows Publish the Application and wait (~5 minutes) until the VM is available. RDP to the Wndows VM. 
Navigate to the VMware downloads website and download the vCenter 6.0 for Windows ISO to the Windows VM. You will need an active myvmware account to login with and download. Accept the EULA when prompted; this way a matching valid trial license is provided to you. Download the ISO file. As the file is ~3 GB, it takes a few minutes to download. Mount the ISO file, run the installation EXE and follow the steps. The installation takes ~15 minutes to complete.

Verify the installation

RDP to the Windows VM. Open a web browser and browse to http://localhost. Login to vCenter using the SSO username and password used during the installation (usually the username is administrator@vsphere.local). If you managed to login successfully - congratulations, the installation was successful!

Saving vCenter Server to the library

Once the vCenter server is properly installed and configured, you can save it to Ravello’s library for future usage. To save the vCenter server VM to Ravello’s library, read here.

Note: If your application includes multiple ESXi servers, please make sure you start them separately, one by one, and not all at the same time.
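Before saving the image, it can help to script a quick check that the vCenter web endpoint answers. The sketch below is a generic reachability probe, not part of the original guide; the host and port are placeholders (vCenter’s Web Client is typically served over HTTPS on port 443, with http://localhost redirecting to it):

```shell
#!/bin/sh
# Minimal reachability probe for the vCenter web endpoint.
# HOST and PORT are assumptions -- replace with your Windows VM's address.
HOST="${1:-localhost}"
PORT="${2:-443}"
if curl -k -s -o /dev/null --max-time 5 "https://${HOST}:${PORT}/"; then
  echo "vCenter web endpoint reachable at ${HOST}:${PORT}"
else
  echo "vCenter web endpoint NOT reachable at ${HOST}:${PORT}"
fi
```

Note that this only confirms the web server answers; it does not verify that SSO login works.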


How to setup DHCP for 2nd level guests running on ESXi in Ravello

As most of you probably know, besides implementing a hypervisor capable of running regular VMs, we’ve also implemented CPU virtualization extensions - VT-x for Intel or SVM for AMD CPUs. These extensions, in essence, allow running other hypervisors such as KVM or VMware’s ESXi on top of Ravello. In this blog I’m going to focus on using DHCP for the 2nd level guests running on ESXi. This blog is optional: you only need it if you do not want to rely solely on static IPs for 2nd level guests.

Overview

This article describes how to use DHCP for the 2nd level guests running on ESXi in Ravello. In Ravello, DHCP is not available for 2nd level guests by default. The reason is that the Ravello system is totally unaware of those 2nd level guests, and those guests cannot reach Ravello’s built-in DHCP server by broadcasting DHCP DISCOVER packets. DHCP requests from 2nd level guests will not be answered, and therefore the guest OS will not receive a DHCP address. In order to use DHCP in the 2nd level guests, you will need to:

Define the networking in your Ravello Application and vSphere environment to support another DHCP server.
Install and configure your own DHCP server VM as a 2nd level guest to service the other 2nd level VMs.

There are a few ways to do this; this blog describes the easiest way to do both.

Defining the networking in Ravello to support another DHCP server

There are two important factors to keep in mind when defining the networking:

The new DHCP server should respond only to the 2nd level guests.
The 2nd level guests should get responses only from the new DHCP server.

Here is an example of how to do so. For each ESXi node in your Ravello Application, add (at least) one NIC dedicated to a separate network that will be used only by those 2nd level guests. Set a reserved DHCP IP or a static IP for this NIC on each ESXi host running guests. Do this for all ESXi hosts running guests, such that all the IPs are in the same network.
In my example, I have 2 ESXi machines with 2 NICs each. The first NIC uses Ravello’s default network (10.0.x.x). The second NIC uses a new network (20.0.x.x), because I have set the reserved DHCP IPs of the 2nd NICs to 20.0.0.3 and 20.0.0.4. You can look at the settings of the NICs here. For each ESXi host, add another VM network for this NIC, and set the guest networking to use only this NIC.

Installing your own DHCP server

You can use any DHCP server you prefer. In this blog I will describe how to install the ISC DHCP server on a vanilla Ubuntu machine. In addition, due to the network topology I selected in this example, the new DHCP server will be another VM in the Ravello application, connected only to the 20.0.x.x network.

Deploy a new Ubuntu machine in your Ravello Application and give it a static IP/reserved DHCP IP in the 20.0.x.x network. In my example I have used reserved DHCP IP 20.0.100.100.

Make sure your repositories are updated:
sudo apt-get update

Install the ISC DHCP server:
sudo apt-get install isc-dhcp-server

Enable packet forwarding - sudo vi /etc/sysctl.conf and remove the comment from net.ipv4.ip_forward=1

Reboot the DHCP server machine to enable packet forwarding:
sudo reboot

Make your DHCP server act as a DNS server as well:
sudo apt-get install bind9

Edit the DHCP settings - sudo vi /etc/dhcp/dhcpd.conf - and add the following (note that in my example the IP of the DHCP server is 20.0.100.100):

subnet 20.0.0.0 netmask 255.255.0.0 {
  range 20.0.1.10 20.0.1.100; # you can set any range in the network as you prefer
  option routers 20.0.100.100;
}
option domain-name-servers 20.0.100.100;

Restart the DHCP service:
sudo /etc/init.d/isc-dhcp-server restart

The overall networking of your Ravello application should look something like this:
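If you prefer to script the DHCP configuration instead of editing it by hand, here is a minimal sketch, assuming the example addressing above (server at 20.0.100.100 on the 20.0.0.0/16 guest network); it writes the same stanza to a staging file in /tmp for review before copying it to /etc/dhcp/dhcpd.conf:

```shell
#!/bin/sh
# Sketch: generate the dhcpd.conf stanza used above and review it before
# copying to /etc/dhcp/dhcpd.conf. Addresses match the example topology
# (DHCP/DNS server at 20.0.100.100 on the 20.0.0.0/16 guest network).
cat > /tmp/dhcpd.conf <<'EOF'
subnet 20.0.0.0 netmask 255.255.0.0 {
  range 20.0.1.10 20.0.1.100;   # any free range in the network will do
  option routers 20.0.100.100;  # the Ubuntu VM is also the gateway
}
option domain-name-servers 20.0.100.100;  # bind9 runs on the same VM
EOF
# If isc-dhcp-server is installed, syntax-check before restarting:
#   sudo dhcpd -t -cf /tmp/dhcpd.conf
cat /tmp/dhcpd.conf
```

The commented-out `dhcpd -t` line is the standard ISC syntax check; running it before restarting the service catches typos without disrupting existing leases.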


Migrating dev & test workloads from VMware to AWS

I was on the phone with Chris Porter today (side note - I was quickly impressed with his knowledge) and he made an interesting point about running dev/test workloads in the cloud. “If my production is on AWS, I’d certainly want to run my dev & test there,” he said. “But if my production is on premises on VMware, I’d like dev/test environments in the cloud that can be turned on and off on demand. The problem is they need to be very, very similar to my on-prem production deployment.” It sparked an interesting conversation on migrating from VMware to AWS, on cloud migration tools, cloud networking constraints, when to re-architect an application for a full-fledged production application migration, and how to approach the problem when it’s for dev/test only. I’m a huge advocate of the no-migration approach to moving dev & test workloads to the cloud - in fact, I may be guilty as charged of coining the phrase “Migration is for the birds, my VMs are nested” - but the fact remains that when you need to move your dev & test workloads to the cloud, you do need to consider the various scenarios and migration options out there. Ravello’s nested virtualization approach puts us in an entirely different category - we are a cloud provider that provides VMware-like or KVM-like environments in AWS/Google Cloud. As a result, we haven’t had too many conversations with customers about some of the migration tools out there such as Racemi, CloudVelox or HotLink, but we do guide customers to use the AWS import utility to convert VMs, re-do their networking and re-think their application architecture if they are migrating their entire production application to AWS. But when they are looking at migrating dev & test workloads to the cloud, we (obviously) strongly recommend Ravello. To summarize, here is a quick cheat sheet I came up with: If you run your production on premises - say on VMware, for example - then you have ample reason to migrate your dev & test to the cloud.
The promise of just-in-time environments that can be created and destroyed on demand, coupled with never having capacity constraints (yes, no more QA environment bottlenecks as you get closer to release), is sufficient justification. However, the challenge has always been the issue of high-fidelity dev & test environments. How do you have confidence in your test results if the environment in the cloud looks very different from your on-premises production environment? This is why an increasing number of VMware customers are turning to other VMware-based clouds such as vCloud Air, or vCloud partner cloud providers, so that they can get similar environments on premises and in the cloud. Some would argue that it’s not the same as having the price, reliability and flexibility of some of the leading clouds in the world, such as AWS & Google Cloud. And I would argue that Ravello has stretched the “sameness” frontier by recreating VMware- and KVM-like environments in AWS & Google using nested virtualization. The majority of Ravello’s customers, such as 888 Holdings, are running their production on either VMware or KVM in their private data center, and instead of converting their VMs or modifying the networking in their dev/test environments to run them on AWS, they simply “nest” them as-is on Ravello. If your production and dev/test are already in the cloud - migration is a moot point, isn’t it? And if your production is on-premises and dev/test is already in the cloud, you seem to be on the right track, as long as you haven’t changed your dev & QA environments to “fit” them to the cloud of choice. And finally, if your production is in the cloud and for some strange reason your dev/test is on premises, you had better have a very good justification, because in terms of capacity utilization it’s not ideal to have your prod running 24x7 in the cloud while your dev & test workloads - which tend to be more bursty - are running in house.
In any case, I'm eager to hear your thoughts on running production on premises on VMware and your dev/test workloads on AWS. How did you approach the problem? And a shameless plug: you do get a 14-day free trial if you'd like to try Ravello for dev/test.


How to emulate DCs in public cloud and connect them using Cisco CSR 1000v?

Author: Mirko Grabel. Mirko Grabel is a ‘Kick-Ass’ Technical Marketing Engineer with Cisco, and an active CCIE-certified speaker at Cisco Live. He has global technical responsibility for the ISR, CSR and ASR product lines.

Cisco’s Cloud Services Router (CSR) running IOS-XE is a very popular network appliance used in a variety of scenarios – as a VPN gateway, as an MPLS termination, to connect DCs, to provide an internet split-out for branches, to connect branches to HQ. Coupled with LISP, the CSR can also be used to extend the DC with hybrid cloud infrastructure. There are numerous ‘how-to’ articles on the web that articulate how the CSR can be used to connect cloud infrastructure to a DC or HQ, or to secure hub-to-spoke or spoke-to-spoke traffic with DMVPN and IPSEC. To create topologies for these use-cases, however, one requires infrastructure to run the CSR routers and a networking interconnect to connect them. In most organizations, it takes weeks to months to procure and deploy new hardware – servers, racks, switches – and it is expensive. The CSR’s Amazon Machine Image (AMI) offers an alternative way to try out some CSR features without having to invest in physical hardware. However, due to networking limitations on AWS (e.g. broadcast and multicast packets are heavily filtered, and L2 is unavailable), many CSR features are unsupported without tunneling on the AMI (e.g. OSPF, IGMP, PIM, OTV, VxLAN, WCCPv2, MPLS, EoMPLS, VRF, VPLS, HSRP). Further, only one network interface can be configured with DHCP. This makes it difficult to create the full-featured CSR environments that I, as a CCIE, need to play with different features and mock up PoC environments. This is where Ravello helps. Ravello is a SaaS solution powered by nested virtualization and a software-defined networking overlay. Ravello enables networking professionals like me to create full-featured networking labs with a multitude of VMware or KVM networking & security appliances (including Cisco’s CSR 1000v!)
on top of public cloud (AWS or Google Cloud), and benefit from unlimited capacity and usage-based pricing. Ravello’s software-defined networking overlay exposes a clean L2 network – just like a DC environment – and offers built-in network services such as DNS, DHCP, firewall, virtual switch and virtual router – should I need them. Further, it allows me to bring my own router (the CSR 1000v in this case) if I want specialized network functionality as part of my environment.

Connecting DCs in the Cloud

A little skeptical of the tall claims made by Ravello, I decided to put Ravello’s Network Smart Lab to the test. (Data-center functionality – running VMware & KVM VMs with L2 access – at public cloud prices and flexibility just seemed too good to be true!) I embarked on creating a CSR deployment connecting two different data-centers through a VPN tunnel on Ravello. To emulate a DC, I added some LAMP servers and pointed them at the CSR as their gateway on Ravello. With a click, I made two copies of this setup and associated public IPs with the CSRs’ external network interfaces. Using Ravello’s ability to run VMs in multiple clouds, I published one copy of my setup to run on AWS and another on Google. Once both environments were up and running, I configured each CSR instance in AWS and Google Cloud to point to the other’s public IP, and voila – my two DCs running on AWS and Google were securely connected! The rest of this article details the configuration to get my multi-DC environment connected through the CSR 1000v on Ravello.

Environment Setup

Getting this environment set up on Ravello involved 4 simple steps:

1. Uploading my CSR 1000v and LAMP servers to Ravello
2. Configuring networking on the CSR VM
3. Publishing the environments on AWS and Google
4. Configuring the CSR

1. Uploading VMs

I used the Ravello Import Tool to upload all 3 of my VMs.
Ravello’s VM uploader gave me multiple options - ranging from directly uploading my multi-VM environment from vSphere/vCenter to uploading OVFs, VMDKs, QCOWs or ISOs individually. I uploaded my CSR 1000v as a QCOW image.

2. Configuring Networking

Upon uploading the CSR 1000v, Ravello asked me to confirm the resources (CPU, memory, storage) for my VM. Since Ravello had already identified the resources, it was more of a verification exercise. Clicking on the Network tab, I added two network interfaces to the CSR – one each for the internal and external networks. I configured static IPs on the interfaces and chose “VirtIO” as the device type. I also associated an ‘Elastic IP’ with the external interface so that I could access it from anywhere. To enable SSH access to my VMs, I opened port 22 in the “Services” tab. Next, I created an ‘Application’ on Ravello. An ‘Application’ in Ravello’s terms is essentially a multi-VM environment. To create my DC, I dragged and dropped my CSR and LAMP VMs onto the application canvas. Next, I saved my application as a ‘Blueprint’ – which is similar to taking a snapshot of the entire multi-VM setup. The blueprint enabled me to make additional copies of this ‘application’ environment.

3. Publishing the environment on Google & AWS

With the blueprint of this environment in hand, I was able to create two application copies. I modified the IPs on the second copy so that it didn’t conflict with the first copy. Next, I published an instance of each to run in Google Cloud and AWS respectively. Publishing the application was a piece of cake – a one-click action.

4. Configuring the CSR

With my DC environments set up in AWS and Google Cloud, the next step was to configure the CSR to connect the two environments through a VPN tunnel. Here is how I configured the CSR for this environment using the CLI.

First I defined the hostname:

hostname CSR1000V-AMAZON

For convenience’s sake, in case I mistype something, I disabled domain lookup:
no ip domain lookup

To get SSH access, I needed a domain name, so I set one up:

ip domain name ravellosystems.com

Next, I generated an SSH key pair:

crypto key generate rsa

Then I configured a username & password for the CSR:

username admin privilege 15 password 0 XXXXXX

Next I allowed SSH on the incoming lines:

line vty 0 4
 login local
 transport input ssh

To give out IPs to my client VMs, I also set up a local DHCP server:

ip dhcp excluded-address 10.0.2.0 10.0.2.9
ip dhcp pool DHCP
 network 10.0.2.0 255.255.255.0
 default-router 10.0.2.1
 dns-server 8.8.8.8

And here comes the tricky part – how to get the IPSEC tunnel to fly. Here is the full config required; details for each command can be found in the Cisco config guides:

crypto isakmp policy 1
 encr aes
 authentication pre-share
 group 2
crypto isakmp key PASSWORD address 0.0.0.0
crypto ipsec transform-set TRANSFORM_SET esp-aes 256 esp-md5-hmac
 mode tunnel
crypto ipsec profile 1
 set transform-set TRANSFORM_SET
interface Tunnel0
 ip address 192.168.255.2 255.255.255.252
 tunnel source GigabitEthernet1
 tunnel mode ipsec ipv4
 ! The tunnel destination is the elastic IP of the other side!
 tunnel destination 85.190.189.58
 tunnel protection ipsec profile 1

Here is the simple IP interface configuration, nothing fancy:

interface GigabitEthernet1
 description EXTERNAL
 ip address 192.168.0.3 255.255.255.0
 negotiation auto
interface GigabitEthernet2
 description LAN
 ip address 10.0.2.1 255.255.255.0
 negotiation auto

My default route points to the internet (required to find the other Elastic IP), and my inter-DC traffic (just traffic going to 10.0.1.0/24) points to the other side of the tunnel:

ip route 0.0.0.0 0.0.0.0 192.168.0.1
ip route 10.0.1.0 255.255.255.0 192.168.255.1

The tunnels are on-demand tunnels, so they only come up once traffic is present. Also, to see some counters increase, I created 2 SLAs – one that works just inside the tunnel and one that pings end-to-end from one LAN to the other.
ip sla 1
 icmp-echo 10.0.1.1 source-ip 10.0.2.1
 frequency 10
ip sla schedule 1 life forever start-time now
ip sla 2
 icmp-echo 192.168.255.1 source-ip 192.168.255.2
 frequency 10
ip sla schedule 2 life forever start-time now

To check whether my CSR sees the right interfaces, I have to type a tediously long command, so I made an alias for it:

alias exec sps show platform software vnic-if interface-mapping

Upon doing a similar configuration on the CSR running in Google Cloud (and pointing it at its peer in Amazon), the two DCs were securely connected through a VPN tunnel.

Verifying it works

To verify that this setup works, I typed in sh ip sla stat. The increase in the number of successes for the counters confirmed that my VPN was set up and active (hurrah!).

Conclusion

Ravello’s Network Smart Lab offers a unique and simple way for networking professionals to create full-featured CSR environments for PoCs and network design (without needing hardware investments) on AWS & Google. Drop me a line if you would like to play with my CSR Blueprint setup.
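As an addendum, the Google-side configuration implied above is essentially a mirror image of the Amazon side. A sketch follows; the tunnel destination shown is a hypothetical placeholder for the Amazon CSR’s Elastic IP (not given in the text), and the crypto isakmp/ipsec sections are identical to those shown earlier:

```
hostname CSR1000V-GOOGLE
! crypto isakmp / ipsec sections identical to the Amazon side
interface Tunnel0
 ip address 192.168.255.1 255.255.255.252
 tunnel source GigabitEthernet1
 tunnel mode ipsec ipv4
 ! placeholder: use the Elastic IP of the Amazon-side CSR here
 tunnel destination 198.51.100.10
 tunnel protection ipsec profile 1
interface GigabitEthernet2
 description LAN
 ip address 10.0.1.1 255.255.255.0
! inter-DC traffic to the Amazon LAN (10.0.2.0/24) goes through the tunnel
ip route 10.0.2.0 255.255.255.0 192.168.255.2
```

The mirrored addresses follow from the article itself: the Amazon side’s SLA pings 10.0.1.1 and the far end of the tunnel at 192.168.255.1, so those must be the Google side’s LAN gateway and Tunnel0 address.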


My Virtual Way: Lab to migrate a VM with vMotion (and still without hardware)

The last time I started learning what VMware was all about, I stopped at the high-level theoretical overview of the availability, scalability, management and optimization challenges that VMware technologies help organizations overcome. Having no physical servers at my disposal, the first time I went through the long list of VMware technologies - vMotion, High Availability, vFlash and all the others - I didn’t do anything. This time, however, I used my ESXi lab set up on Ravello to try to get something done. The result: I migrated a VM using vMotion from one ESXi host to another. I won’t bore you with my summary of the study guide I’m using to understand the differences between Fault Tolerance and High Availability, or my (hopefully effective) ways to remember what DPM or SIOC stand for (if you’re studying for the VCA-DCV, drop me a line if you care to share notes). Instead, I wanted to share what I needed to know to migrate a VM using vMotion, and how I did it. First - while I did learn what vMotion is supposed to do - I had no idea how it is actually done: what needs to be configured, where, and how. I started with knowing that I would need a setup consisting of (at least) two ESXi hosts, so that I could migrate a VM from one to another.

My ESXi lab

For this I used my basic lab, consisting of two ESXi nodes, vCenter, an NFS server and a Windows client running my vSphere client (I could have used the vSphere web client, but this was my basic setup, so I went with that). I previously created and saved this whole ESXi lab as a blueprint in my Ravello library, so I didn’t have to upload, install or configure anything. I used the blueprint and published an application from it - basically running a nested ESXi lab on Google Cloud. A few minutes after I hit publish, I could console into my Windows client and run vSphere, where I had my two ESXi hosts already configured, as well as several VMs that I created there in the past.
One of them - ubuntu cloned VM (not the best name, yes) - was already running.

Poetry in vMotion

Since I didn’t know anything about how to actually use vMotion, I searched for some resources and found a video from VMware that was fairly useful in pointing me to the “migrate” option on the VM. However, when I tried to do that, the vMotion option was greyed out, saying that the host the VM was running on wasn’t enabled for vMotion. With a little digging around and a little help from my friends, I realized that I needed to configure the switch on the hosts to enable vMotion. As you can see from the following set of screenshots, once I enabled vMotion on the host, I was able to migrate my running VM using vMotion. I celebrated with this song.

[Screenshots: VM on ESXi host 1 | Change host | New location | Migrating VM | VM running on ESXi host 2]

Next time - a deep dive into vSphere core components. If you are also working on your VCA or VCP certification and have cool tips, useful guides and - especially - if you have ideas for good hands-on exercises or are using Ravello for your VCA, VCP or VCDX study labs, let me know!


Skyhigh Networks - On-demand customer PoC environments on AWS & Google Cloud

Author: Dr. Nate Brady, Systems Engineering Manager at Skyhigh Networks. Nate is a Systems Engineering Manager at Skyhigh Networks managing a growing team of SEs based out of the US. Nate is an expert in the Networks, Security and Risk Management domains.

Skyhigh Networks

Skyhigh Networks is the leading Cloud Access Security Broker (CASB) facilitating secure adoption of cloud-based services. Our solution is twofold: Skyhigh for Shadow IT enables enterprises to detect over 13,000 cloud services that employees may be using, and complements usage statistics with a 50-point risk assessment for each service, as well as a full workup on usage, anomalous usage, and integration with existing perimeter devices to block access or warn users of impending danger. Skyhigh for Sanctioned IT facilitates cloud adoption by extending traditional controls, such as sharing policies, audit logging, and data loss prevention (DLP), to common cloud services such as Salesforce.com, Box, Office 365 and many others. Our frictionless deployment model has helped us gain a strong customer base comprising over 300 enterprises, many of which appear on the Fortune 500 list.

Ravello to the rescue: SE Training and Customer Mock-Ups

As an SE manager at a company growing as fast as Skyhigh, I needed a way to let new systems engineers immerse themselves both in Skyhigh’s own technologies as well as those commonly deployed at our customers. For example, Skyhigh for Shadow IT integrates with common proxy and firewall technologies to add intelligence about cloud service risk to existing policies. Additionally, Skyhigh for Sanctioned IT allows customers to extend the reach of common DLP systems, such as Symantec and McAfee, to the cloud. While these integrations make life easy for our customers, it means that our SEs need to have a working knowledge of many different technologies. This is where Ravello has been our savior.
Using Ravello, we can build complex environments either from templates or entirely from scratch. This means that new SEs can quickly come up to speed by deploying use-case-specific training templates, and seasoned SEs can create mock-ups of customer environments in almost no time. With Ravello, we are able to provide new SEs with a “lab in the cloud” on their very first day. All they need is a web browser and an Internet connection, and they can build anything from a simple proxy server to a complex security infrastructure including firewalls, proxies, DLP and SIEM, as well as server and desktop infrastructures. In some cases, customers ask to see our product deployed in very specific circumstances but struggle to provide a dedicated lab environment for us to conduct a proof of concept. Ravello allows us to create these environments quickly and securely on behalf of our customers, saving them time and resources while also helping to shorten the sales cycle significantly. While the environments vary from one enterprise to another, typical customer deployments feature at least a firewall, proxy, active directory, DLP, and some Windows servers and workstations – adding up to upwards of 14 CPUs and 32 GB RAM. Then it all has to be connected in the typical route/switch environment that is familiar in the datacenter but foreign to IaaS providers.

“With Ravello, our engineers can have their own lab in the cloud, which allows them to learn new technologies and mock up customer use cases without having to contend for resources - in a cost-effective manner. This is phenomenal!” – Dr. Nate Brady, Manager of Systems Engineering, Skyhigh Networks

Once again, Ravello to the rescue! Not only does Ravello make the creation of these environments a snap, it’s far more cost-effective. By running an extra layer of virtualization, Ravello can fit several small virtual machines onto a single large instance.
Skyhigh Networks’ Requirements

To provide a great toolset to our engineering community as well as make life easier for our customers, we had some very special infrastructure needs:

No CapEx investments – We experience variable workloads depending on the SE and customer. As a SaaS company, we wanted to avoid maintaining a fixed-capacity datacenter and to cost-effectively leverage the utility of the cloud.
Scale on-demand – To accommodate spikes in demand, we wanted a platform that could scale without impacting the customer’s PoC experience.
Zero-change deployment – A large number of our technology partners provide VMware appliances. It was extremely important to be able to deploy these systems in the public cloud without any changes.
Infrastructure templatization – To facilitate quick development of training and mock-up environments, we needed a way to templatize common environments to be used as a starting point for customization.
Ease of collaboration – We wanted a platform that made it easy to collaborate with prospects so that they could verify that the mock-up was configured to their specifications.
Usage-based costs – To reduce the overall cost of sales, the Skyhigh team was looking for a strictly usage-based pricing model.

We considered several options that partially satisfied some of these requirements. We looked into developing ‘PoC hardware kits’, but quickly moved away from this idea due to the logistical challenges associated with shipping hardware to every SE in the company (and potentially some prospects, too!). Next, we evaluated using private cloud providers to host these environments but could not justify the high fixed costs involved. Public clouds were also considered, but the inability to run native virtual appliances, limited access to virtual machine consoles, and a lack of L2 networking features made them unfavorable.
“Ravello is really flexible - we can take any of our VMs and run them on public cloud and access them just like we would in our own labs - down to the console access. These capabilities allow us to use our existing knowledge about virtualization platforms like VMware and apply it to cloud instances rather than relearning a new technology – drastically reducing learning curves for our engineers.” – Dr. Nate Brady, Manager of Systems Engineering, Skyhigh Networks

Ravello - a perfect match for Skyhigh’s needs

We tried Ravello for one of the PoC setups, and were very happy that Ravello delivered on all our needs. Since Ravello runs on AWS & Google Cloud, we were able to create PoC environments without investing in hardware or building our own datacenter. Ravello runs on Tier 1 clouds where there are no capacity, quota or overage concerns – we were able to scale our PoC environments on-demand, spinning up as many environments as needed. Further, Ravello’s HVX (a high-performance nested hypervisor) and networking overlay allowed us to run our existing VMware appliances and VMs without making any modifications – this really helped in reducing the learning curve. Quick deployment of training and PoC environments is crucial to our sales cycle. Thanks to Ravello’s blueprint feature, we were able to ‘templatize’ several environments and use them as a starting point for PoC customization – saving us days of work. With Ravello’s network overlay, we were able to recreate the same network topology as our prospect’s production environment, down to the very subnets and IP addressing. Further, with Ravello’s user-permissions feature, we were able to selectively share applications with our prospects and collaborate on developing PoC environments quickly – speeding up the sales process. Finally, with Ravello’s usage-based pricing, we were paying only for the duration when our PoC & training environments were up – which significantly helped in containing costs.
Over time, we have standardized on using Ravello for PoC environments for the vast majority of deployments.

Benefits realized with Ravello

I’d like to highlight some of the benefits realized through Skyhigh’s adoption of Ravello as the platform of choice for our customer PoCs:

Shortened sales cycle - With on-demand access to Ravello’s environment with rich networking capabilities, the Skyhigh team does not have to rely on the prospect’s environment to showcase its service in action. Eliminating this dependency has helped compress our sales cycle.
Reduced effort for lab environments - With re-usable topologies (blueprints) used as a starting point for lab environments, we have been able to eliminate many repetitive tasks, improving efficiency in setting up new labs and customer mock-ups.
Effective sales engineer training & on-boarding - Encouraged by the success in using Ravello for training our SEs, we have begun using Ravello to create mock-up environments for customers who request them. Easy access to ‘disposable training labs without penalty’ has enabled our SEs to hone their skills, and has also helped in onboarding new SEs quickly.


Splunk demo, PoC, training environments with L2 networking on Google & AWS

Splunk is a SIEM market leader with an active ecosystem of resellers, application developers, partners and customers – all of which need a Splunk lab for sales demos, customer PoCs, training and development testing. Ravello's Network & Security SmartLab presents an option to set up Splunk labs on public cloud - Google & AWS - at costs starting at $0.14/hour.

Splunk – a SIEM market leader

Gartner has identified Splunk as a market leader in the Security Information & Event Management (SIEM) segment, which has seen phenomenal growth (16% Y/Y) in recent years. The key reasons for this growth: an increase in disparate machine data present in enterprises, and recent cyber attacks & data breaches. Enterprises are struggling to piece together machine data, logs and events from multiple sources to gain meaningful insights into the state of their systems, network and security. This is where Splunk, integrating with multiple third-party technologies, shines. By analyzing everything from customer clickstreams and transactions to security events and network activity from a wide variety of nodes and network & security appliances, Splunk paints a holistic picture for IT Ops.

Everyone needs Splunk environments

Splunk has a large ecosystem of loyal customers, resellers, partners and application developers. Many network, security, and information system ISVs integrate with Splunk’s solution. Splunkbase – Splunk’s app repository – reveals 762 specialized applications that cover a wide range of functions, ranging from Application Management, IT Ops, Security & Compliance and Business Analytics to the Internet of Things. Each of these application developers/ISVs requires a fully functional Splunk environment comprising multiple appliances, LAN hosts and network nodes (log/event sources), complete with Splunk Enterprise & data collection machines, for their sales demo and development test environments.
Splunk resellers and partners also need environments to deploy multi-tier, multi-node hosts to showcase the power of Splunk in a 'real-world-like' setting. Customers looking to purchase SIEM tools also need PoC environments where they can evaluate the capabilities of Splunk in a production-like environment before making a buying decision.

Where can I set up my Splunk lab for demos, PoCs and training?

ISVs, resellers and enterprises have explored provisioning their data-centers to run these transient workloads for sales demo, PoC, training, upgrade and development test environments, and have experienced sticker shock – it is expensive! In addition, it takes weeks to months to procure and provision the hardware and get the environment running, and then there are opportunity costs when the environment is not being used. Public clouds such as AWS and Google are ideal for such transient workloads – providing the flexibility of usage-based pricing to avoid these opportunity costs. Splunk has an AMI (Amazon Machine Image) that allows it to run on AWS. And while that is an excellent choice for 'cloud native' deployments, it doesn't lend itself very well to mocking up a production data-center environment – a requirement for Splunk demos, customer PoCs, and application development & testing use-cases. AWS networking limitations (e.g. lack of support for Layer 2 networking, multicast, broadcast etc.) make it impossible to mirror data-center environments on public cloud natively. A nested virtualization platform with a software-defined networking overlay – such as Ravello – brings together the financial benefits of moving to cloud while avoiding these technological limitations. Running workloads on Ravello Network & Security SmartLab brings all the benefits of running in the data-center – one can use the same VMware and KVM VMs with the same networking interconnect.
And, since Ravello is an overlay cloud running on top of AWS & Google, one also reaps the economic and elastic benefits of a Tier 1 public cloud. In essence, using Ravello, Splunk and its ecosystem of application developers/ISVs and customers can run sales demos, PoCs and training environments in data-center-like environments without investing in hardware resources.

Steps to create a Splunk environment on Ravello

I used 3 VMs to create a representative Splunk environment on Ravello – a first Windows 2012 VM to install Splunk Enterprise (indexer), a second Windows 2012 VM to install the Splunk forwarder for data collection, and a third VM as the VMware Data Collection Node (DCN). Uploading the 3 VMs to my Ravello Library using the Ravello Import Tool was simple. The Ravello VM uploader gave me multiple options – ranging from directly uploading my multi-VM environment from vSphere/vCenter to uploading OVFs, VMDKs, QCOWs or ISOs individually. I chose to upload my Windows VMs and DCN as OVFs.

Verifying settings

1. Verification started by asking for a VM name for the Windows VMs.
2. Clicking 'Next', I validated the amount of resources (vCPUs and memory) that I wanted my VM to run with.
3. Clicking 'Next', I was taken to the Disk tab. It was already pre-populated with the right disk size and controller.
4. Next I verified the network interface for the Windows 2012 Server. I chose it to have a DHCP address. Ravello's networking overlay provides an inbuilt DHCP server.
5. Clicking 'Next', I was taken to the Services tab. Ravello's network overlay comes with a built-in firewall that fences the application running inside. Creating "Services" opens ports for external access. Here, I created "Services" on ports 3389 and 8000 to allow RDP and web access to the Splunk web interface.
6. I went through steps 1-5 for the other VMs.

Publishing the environment to AWS

1. With my application canvas complete, I clicked 'Publish' to run it on AWS.
I was presented with a choice of AWS regions to publish in, and I chose AWS Virginia. My environment took roughly 5 minutes to come alive.
2. Once my VMs were published, I installed Splunk Enterprise on the first Windows 2012 VM and the Splunk Forwarder on the second. After installation, I could log in to the Splunk interface to configure my data sources and get the Splunk forwarder to send data to the Splunk indexer.
3. Once Splunk had finished indexing, I was able to see dashboards and execute searches.

Conclusion

Ravello's Network & Security SmartLab offers a unique and simple way to create data-center-representative Splunk environments (without hardware investments) on AWS & Google. Just sign up for a free Ravello trial, and drop us a line. Since we have gone through the setup recently, we will be glad to help you create your own Splunk lab on Ravello.


Continuous integration testing with ESXi labs on AWS and Google cloud for StacksWare VDI software asset management product

Author: Vivek Nair Interests include geeking out about virtualization & building complex distributed systems. Previously worked at Asana and on the Local Law Enforcement team at Palantir Technologies. Currently a lecturer in the CS department at Stanford University

This post summarizes how StacksWare, an agentless software asset management product for VDI, has implemented CI testing of their product on ESXi infrastructure test beds. The StacksWare development team has modeled complex production ESXi infrastructures in the Ravello service, without the overhead of buying their own rack servers or paying for expensive managed data centers.

StacksWare Introduction

StacksWare is an agentless software asset management solution for virtual desktop infrastructure (VDI). Our product allows organizations to track their application usage and determine whether they are in compliance with their license agreements. Our software passively gathers application usage via VMware's ESXi hypervisor and Horizon View, eliminating the need to install an intrusive agent on every guest OS. Why should you care if your organization's virtual infrastructure is out of compliance? Audits by software vendors have been steadily increasing. According to recent research, over half of all enterprise organizations have been audited in the past two years. If found out of compliance, organizations can be subjected to large penalty fines or even heavy litigation. With StacksWare, organizations can easily monitor and ensure their compliance. Existing license management solutions require organizations to install a monitoring agent on each and every guest OS to collect application data. This introduces data privacy and infrastructure concerns. StacksWare only requires organizations to install a virtual appliance, the StacksWare Internal Monitor (SIM), into their VDI environment. SIM allows organizations to track application data within minutes.
Our software is currently the only agentless license management solution on the market.

The search for a scalable, on-demand VMware ESXi lab/test environment

Enterprises of varying scale rely on StacksWare to track their application usage. For example, some of our customers provide several thousand desktops to their employees. To support both SMBs and large enterprises, we needed a flexible solution for benchmarking StacksWare's performance. Before Ravello, we rolled our own commodity rack servers to run VMware ESXi. Aside from the headache of setup and maintenance, we couldn't easily automate and schedule our own hardware. Providing a flexible test infrastructure to benchmark against SMB infrastructures (~20 ESXi hosts) and then against large infrastructures (>75 ESXi hosts) was totally infeasible due to memory and compute constraints. After that painful experience, we decided to try out Rackspace's managed colocation service for ESXi. Though their SLA provided for quick hardware scalability, the price tag was prohibitively expensive. We also found that managed colocation solutions were often hosted in multi-tenant environments that degraded performance and stability.

Running VMware ESXi hosts on AWS and Google Cloud with Ravello Systems

I heard from fellow entrepreneurs that a Sequoia-backed company, Ravello Systems, provided nested virtualization with a flexible pay-as-you-go model for VMware ESXi labs on public clouds. At first I was nervous that the ESXi beta version would not be full-featured enough to model mature production ecosystems like the VDI environments of major universities with thousands of nodes. I had plenty of questions about Ravello's infrastructure. What if our customers are using hosts with VMXNET3 network drivers? Does the product have the tooling to support multiple drivers? How can we model a tight firewall network in their cluster? How difficult is it to configure shared storage systems like NFS, iSCSI or vSAN across the cluster?
Within minutes of playing around with its functionality, I learned that Ravello provides a rich ecosystem for modeling sophisticated production environments in minutes. This was definitely the solution that we were looking for.

Our Current Continuous Integration Testing Process

Our team went to work building a development pipeline in Ravello that made sense. With their developer API (https://www.ravellosystems.com/developers/rest-api), we developed an efficient process to test any additional features for our virtual appliance with just a few Python scripts:

1. When we create a commit in our GitHub repository, our continuous integration service creates a job to spin up a Ravello blueprint, using their Python SDK.
2. Once the ESXi hosts in Ravello have fully booted up from the blueprint, we use the vSphere APIs to programmatically deploy a new StacksWare Internal Monitor through the vCenter that's managing these ESXi hosts.
3. Once the build finishes, we propagate any errors to our continuous integration service and spin down the blueprint. This ensures that we don't use any more resources than we need.
4. If there are no errors in the build process, we repeat steps 2 and 3 with the "next tier" blueprint. More information about blueprint tiering is in the next section.

Blueprint Tiering for multiple test scenarios

We created different tiers of Ravello blueprints to test basic functionality and performance during this build process. The first blueprint sanity-checks basic functionality across three different ESXi hosts; we catch most bugs without expending time and money with this configuration. The second blueprint models a standard SMB infrastructure with 20 ESXi hosts. The third blueprint models a larger enterprise infrastructure with 50 ESXi hosts. If the build system successfully builds the latest StacksWare commit against these three blueprints, the code is merged to our master GitHub branch and the virtual appliance is ready for deployment!
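As a rough illustration of the tiered flow described above, here is a minimal sketch of such a pipeline driver. The Ravello and vSphere interactions are reduced to hypothetical stub functions – spin_up_blueprint, deploy_and_test and spin_down are names invented for this sketch; in practice they would wrap the Ravello Python SDK and the vSphere APIs:

```python
# Blueprint tiers, run cheapest-first so most bugs are caught early.
BLUEPRINT_TIERS = [
    "sanity-3-esxi-hosts",       # basic sanity across 3 ESXi hosts
    "smb-20-esxi-hosts",         # standard SMB model
    "enterprise-50-esxi-hosts",  # larger enterprise model
]

def spin_up_blueprint(name):
    """Hypothetical: create and publish an application from a blueprint."""
    return {"name": name}

def deploy_and_test(app):
    """Hypothetical: deploy the SIM appliance via the vSphere APIs and run
    the test suite. Returns a list of errors (empty on success)."""
    return []

def spin_down(app):
    """Hypothetical: delete the application so no resources keep billing."""
    pass

def run_pipeline(tiers=BLUEPRINT_TIERS, deploy=deploy_and_test):
    """Run each blueprint tier in order; stop at the first failure.
    The environment is always torn down, even when tests raise or fail."""
    for tier in tiers:
        app = spin_up_blueprint(tier)
        try:
            errors = deploy(app)
        finally:
            spin_down(app)
        if errors:
            return False, tier, errors
    return True, None, []
```

The spin-down in a finally block mirrors the "don't use any more resources than we need" point above: the lab is disposed of whether or not the build succeeds.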
Summary

StacksWare is an agentless software asset management tool for virtual desktop infrastructures built on VMware ESXi. We use the Ravello Systems service to model complex production ESXi infrastructures without the overhead of buying our own rack servers or paying for expensive managed data centers. With Ravello's full-featured developer API, we're able to automate and schedule our build process to test for functionality and performance across infrastructures of varying scale, from SMB ESXi clusters to sophisticated enterprise data centers. Set up your own ESXi lab on AWS by signing up for the beta and you will be able to run your own test labs right away.


Software training portal: creating courses, classes and student labs using Ravello

To simplify the set-up, administration and delivery of classroom training, instructor-led training and self-paced training sessions, Ravello created an easily configurable training portal. The training portal is the hub for setting up and administering as many courses, and specific instances of these courses, as required – by accessing pre-created application blueprints and providing each student an isolated environment of the relevant applications. In this post I will quickly go through the steps for creating and delivering training using Ravello.

Accessing the training portal

The training portal is a VM running in Ravello. Currently, users who want to use it can let Ravello's support know that they require it via a support ticket, and the VM is then copied to the user's library. Very soon the training portal VM will be made available publicly without needing a support ticket. To set up your own portal, drag and drop the training portal VM onto the canvas and publish it. You will need this VM to run for the duration of any class that is running using the portal. Once the portal VM (application) is published, you can browse to it by using the URL that is provided for it in the summary tab of the VM.

Giving a trainer access

The next step is to create a trainer entity (you can have several of those), using the admin credentials that are defined on the training portal VM (and are communicated in the description of the VM in the UI). For the trainer's Ravello credentials you will need to use an existing Ravello user account.

Creating a course

The next step is creating a course. Using the trainer's credentials, you will set up the flow of a training – the set of blueprints that will be available to students. It is this set of blueprints that is used as the basis for creating the different labs in the course. For example, you can use several pre-configured blueprints of the environment in different stages to create the sections of the full flow of a course.
Creating classes (instances of the course)

Finally it's time to create a class. A class is the set of students who will go through the course. That means that if we have a course, say NetScaler VPX 101, we can create as many classes around it as we'd like. For example: NetScaler Channel Partners June 2015, VPX End Users July 2015, and so on. Each class is defined by the students that are a part of it. When you create the class with the student entities in it, you can also configure the permissions each student will have for each of the blueprints included. When it's time for the training session itself, you will need to start the relevant student applications.

Running student labs

Now that a class is defined with all relevant students, the students can log in and spin up the first section of the class using the relevant pre-configured blueprint that is available to them through the training portal.

Use cases

Ravello users utilize the training portal for two main use cases. The first is the classic (virtual) training class, or instructor-led training (ILT). Another important use case is user conferences and channel partner trainings, where software and virtual appliance vendors run hands-on sessions for their channel partners, end users, sales engineer trainings and more. If you need help creating your first course and class, drop us a line and we'll get you started.


Ravello VM Import Tool - Best Practices and Tips

Ravello's VM Import Tool

Ravello offers an import tool that enables organizations to easily upload their VMware and KVM VMs to Ravello's platform in a variety of ways – directly from vCenter or vSphere, as an OVF, or by uploading disk files and images (ISOs, VMDKs, QCOWs).

Factors affecting upload time

As one may expect, the time taken by the import tool to upload a VM depends on:

Size of VM – larger VMs take longer
Bandwidth available – more bandwidth means faster upload
Link characteristics – lossy links take longer

We recently ran some tests using Ravello's VM Import Tool to characterize the upload time based on the type of link. We emulated the WAN link using the popular tool WANem, and uploaded a 78 MB ISO file for our tests. As expected, the upload time was small for links with higher bandwidths (e.g. OC-9, FDDI etc.) and large for links with lower bandwidths (e.g. T-1, ADSL etc.). The following chart should give one a rough estimate of the time it would take to upload a similarly sized VM based on the link available (note the logarithmic scale for the bandwidth axis). Next, we looked into the impact of link characteristics on the upload time. Keeping the bandwidth constant at 10 Mbps (CAT-3), we uploaded the 78 MB ISO file at different levels of packet loss. As expected, the upload time increases with the increase in packet loss. At greater than 20% packet loss, the upload time increases exponentially – thanks to multiple re-transmissions. It is interesting to note that with packet losses under 15%, the VM Import Tool is able to gracefully recover, keeping the upload time fairly constant.

Having difficulty uploading?

If you are facing challenges with the Ravello Import Tool that cannot be explained by the bandwidth and link characteristics mentioned above, here are some things to look into:

Is the system clock of the machine running the Ravello VM Import Tool in sync with NTP?
Amazon S3 – home to the VMs in your VM library – is sensitive to machine clock timestamps. If the system clock of the machine that is using the upload tool is not set accurately, the upload will fail at 1% progress. To get around this issue, sync the system clock to pool.ntp.org.

Do you need a proxy to connect to the internet?

If your environment requires a proxy to access the internet, you will need to configure the proxy's IP in Ravello's VM Import Tool before you can upload. Instructions on how to set up a proxy are available in Ravello's knowledge base.

Are you unable to log in to the VM Import Tool?

Ungraceful shutdowns or hibernation of the machine running the VM Import Tool in the middle of an upload can corrupt the tool's database. This can leave the user unable to log in to the tool. To work around this issue, reset the tool by:

1. Stopping the Ravello VM Import Tool service. On Windows, run services.msc and stop the Ravello VM Import Tool service. On Mac, close ravello-vm-import-server.
2. Browsing to the following folder location and deleting all files with the json extension (*.json): Windows – C:\Windows\Temp\.ravello ; Mac – /Users/<name>/.ravello/
3. Restarting the Ravello VM Import Tool service.

Is the progress on the VM Import Tool stuck at 1%?

The VM Import Tool uploads in chunks. Until the first chunk is fully uploaded, the progress bar shows 1%; as more chunks are uploaded, the progress bar gets updated. On slow links, the progress bar shows 1% for a long time until the first chunk is uploaded – and then suddenly jumps to a higher percentage. If you are concerned that the VM Import Tool is not uploading properly, take a peek into the logs to confirm that the upload is in progress.

VM Import Tool Logs

Feeling adventurous, and want to explore what is going on behind the scenes? Read on.

1. With your Ravello VM Import Tool running, type http://127.0.0.1:8881/hello in your web browser.
2. Your browser should display the location where the log file for the VM Import Tool is being stored.
3. On Windows the default location is C:\Windows\Temp\.ravello\store.log and on Mac OS X the default location is /Users/<name>/.ravello/store.log.
4. Open store.log in Notepad (Windows) or Console (Mac). The following snippet indicates that an upload has started.
5. The following snippet indicates that an upload has completed.

If your upload issues persist, please reach out to the Ravello Support team with a copy of your store.log, and we will be happy to help.
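The bandwidth and packet-loss relationships discussed above can be turned into a rough back-of-envelope estimate. The model below is a simplified first-order approximation for illustration only – it treats loss as a linear reduction in effective bandwidth, so it gives a lower bound and deliberately understates the exponential blow-up the tests showed above 20% loss:

```python
def upload_time_seconds(size_mb: float, bandwidth_mbps: float, loss: float = 0.0) -> float:
    """Rough lower-bound estimate of upload time.

    size_mb        - payload size in megabytes
    bandwidth_mbps - link bandwidth in megabits per second
    loss           - packet loss fraction, 0 <= loss < 1
    """
    if not 0 <= loss < 1:
        raise ValueError("loss must be in [0, 1)")
    size_megabits = size_mb * 8
    effective_mbps = bandwidth_mbps * (1 - loss)
    return size_megabits / effective_mbps

# The 78 MB ISO from the tests above, over a 10 Mbps (CAT-3) link:
print(round(upload_time_seconds(78, 10), 1), "s with no loss")
print(round(upload_time_seconds(78, 10, 0.10), 1), "s with 10% loss")
```

Real TCP behavior under heavy loss is far worse than this linear model suggests, which is consistent with the exponential increase observed past 20% loss.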


VirtualBox to Vagrant to Ravello Smart Labs - Software Development Infrastructure Evolution

VirtualBox and Vagrant are popular tools in the development & test community because of their ease of use and simplicity – but developers want more. This post discusses when to use VirtualBox, how Vagrant fits into the picture and how Smart Labs is the next step in that evolution.

VirtualBox

VirtualBox is popular as a means to create a standard development environment. Using VirtualBox, one can package a virtual machine complete with OS, tools, compilers, environment settings etc. that developers can download onto their laptops and start building applications with. Thanks to a standardized development environment, software developers don't run into scenarios that merit saying "...not sure what's wrong here, but it was working fine on my laptop when I wrote that code!". While VirtualBox is great for simple applications that can be deployed on a single machine, it falls short when the application is spread across multiple VMs. VirtualBox alone doesn't have the capability to define the configuration and networking interconnect for a multi-virtual-machine environment. This limitation also hampers standardization of test environments by simply deploying VirtualBox. Most test environments need at least two machines – a DUT (device under test) and a test generator.

Vagrant – builds on VirtualBox

Vagrant complements VirtualBox in such multi-VM scenarios. Vagrant is a wrapper around virtualization technologies such as VirtualBox (and more recently VMware) that helps configure virtual development environments that are spread across multiple VMs. Vagrant has many features that make it a popular tool in the development community. It:

Is simple – to start a multi-VM environment, just type 'vagrant up'
Automates setting up Layer 3 networking between VMs
Creates 'disposable' development/test environments on servers with ease
Centrally controls the configuration for all VMs
Enables source control of environment configuration – Vagrant settings are in a text file
Integrates with Chef & Puppet for VM provisioning

Vagrant is great, but developers need more

I get to engage with developers and testers on a daily basis. The overwhelming feedback from the developer community is that Vagrant is a great tool and mitigates many a developer pain point. However, many Vagrant aficionados also highlight some shortcomings. They say Vagrant has the potential to be a truly 'stellar' tool if it could also do the following:

Snapshot the entire environment along with the VMs, their configuration and the corresponding network configuration
Source-control the entire environment (not only the configuration) – without having to use external tools
Create exact replicas of the production environment for development & testing – no more nasty surprises when code is deployed in production
Spin up as many copies of this environment as they want – with just a click – making it very easy to onboard a new dev or test resource
Create 'disposable copies' of this environment on public cloud and pay only for what they use – to contain costs
Attach a copy of a 'disposable' production environment to each bug fix, and share the entire environment along with the fix with the testers – no more 'the-fix-was-working-in-my-dev-environment' conversations
Have access to clean Layer 2 networking in addition to Layer 3 – to build cool application features (e.g. auto-discovery & high-availability protocols) that rely on it

Ravello – a fresh technology perspective on enabling the software development lifecycle

Ravello's Smart Labs platform – powered by nested virtualization and a software-defined networking overlay – does all this and much more.
In fact, it has the potential to accelerate the entire software development lifecycle. As an example, early in the product development lifecycle one is typically busy prototyping a concept and needs a flexible environment for the 'skunkworks' project. It is difficult to get resources for prototyping at most companies – provisioned resources are already spoken for by the approved projects. In this phase, Ravello's Smart Labs can help get your prototype lab running quickly on the public cloud without investing in hardware resources – limiting the cost risks associated with the prototyping phase. Once the concept prototype has matured into a planned project, Ravello's Smart Labs can help the development and test teams stay in sync and collaborate productively. A Smart Lab serves as an excellent environment for design, development and QA – the development team can build incrementally, and pass the developed software, complete with the corresponding environment, to the QA teams for testing. Smart Labs make it possible to attach an instance of the lab environment to each bug/feature. Also, thanks to the elasticity of the public cloud, teams can easily accommodate the burstiness in demand for new dev/test environments closer to the release date. Before release, one also needs to validate the functionality of the application in a variety of deployment scenarios. Ravello's Smart Lab simplifies creating deployment scenarios using 'disposable labs'. If something goes wrong during upgrade or deployment testing, one can 'destroy' the botched lab and create a new one with a click. It is easy to resume from the last good state by using a snapshot 'blueprint' to spin up the new lab. Once the software is released, the sales team needs environments for sales demos and customer PoCs. Using Smart Labs, it is easy to create repeatable demos in a cost-effective manner.
Simply set up the demo on a Smart Lab, take a snapshot 'blueprint' and spin up copies of this environment on AWS or Google every time a sales engineer needs to do a demo. As the opportunity matures and one needs a PoC environment, the same demo environment can be shared with the customer and customized for the planned deployment with ease. To train customers, partners and resellers on the product functionality, copies of training environments can be spun up in AWS & Google cloud at the locations closest to the trainees, providing a local user experience. Also, classroom training leads to burstiness in resource demand that can be easily accommodated using the elasticity of the public cloud. Ravello's Smart Lab also proves to be a great tool for collaborating with customers when troubleshooting issues. Using Ravello, customers can create high-fidelity copies of their production environment that can be used for debugging issues without having to provide access to the real production environment – mitigating the associated risks.

Conclusion

Ravello's Smart Lab offers the benefits associated with VirtualBox & Vagrant – and more – by taking a fresh technology perspective that enriches every stage of the software development lifecycle. Interested in trying Ravello Smart Labs? Just sign up for a free Ravello trial, and/or drop us a line and we'll help you get started.

Note – running VirtualBox on Ravello is not supported. Ravello Smart Labs offer many enhanced capabilities that provide additional benefits compared to VirtualBox.


My Virtual Way: VCA and VCP certification exam prep: Lab setup - Getting started

There are many great guides out there to study for the VCA and VCP exams, but many of us don't have access to a proper lab setup to train on. Especially now with VCP6, it's often tough to meet the vSphere 6 requirements in a home lab. I'm not that far along in the certification process just yet, but I already have my own ESXi lab. Instead of purchasing hardware or using a hosting provider, I set up my ESXi lab on AWS. I put together this quick outline hoping to help fellow exam takers. If you've read my previous posts in this series, first of all thanks! Second of all, as many of you are going through the VCP certification process, you're sending some great questions about running ESXi on AWS using Ravello. To answer some of these questions, I wanted to share some of the Ravello ESXi Smart Lab basics, so that you can have your lab setup up and running and be on your way to the VCP certification faster. We recently held a webinar discussing how to build ESXi labs on AWS/Google Cloud. Enjoy the webcast and slides... [video url="https://www.youtube.com/watch?v=h9byjFw5omQ"] [slideshare id=48986275&doc=20150602esxiwebinar-150604114506-lva1-app6891]

So, to the basics:

You will be running ESXi in the public cloud, but you will install, configure and use it exactly as if you were running on hardware servers in your home. For all intents and purposes, it's like you're running on infinite capacity (vCPU and GB RAM) if you need to (I'm pretty sure you don't, though).
You need to bring your own license to build your lab setup.
Pricing is usage based. That means that you will be paying for hours of vCPU and GB RAM consumed. This is perfect for your lab, since you don't need to buy hardware, and you only pay for what you use. You can see a detailed example of pricing for a home lab or go to our calculator to figure out the cost of your setup.
You don't need cloud credentials to set this up – your dealings are only with Ravello.

I think this almost covers the getting-started part. Now for the outline to get your ESXi lab into shape (the links include easy-to-follow step-by-step guides, with screenshots and all):

How to install ESXi on AWS and Google
How to set up vCenter on AWS and Google
How to create an ESXi cluster on AWS and Google

If there are some other basics I haven't covered, please feel free to add your comment to the post and we'll keep the conversation going. Check it out, and don't hesitate to comment here or in our support forum, or even send us a guest post describing your experience studying for the VCP exam using Ravello. Good luck to all of us!


Announcing Networking & Security Smart Labs on AWS and Google Cloud

We at Ravello are proud to announce the launch of networking and security Smart Labs on AWS and Google Cloud. Our goal is to enable the entire ecosystem of networking and security technologies with real-world labs that run in the cloud, achieving a level of scale and accuracy not possible with traditional network and security simulation approaches. Using Ravello's technology, we enable true "data center like" labs in the cloud – without any restrictions on Layer 2 networking and security testing. We envisioned a world where network and security teams are not constrained by hardware capacity each time they need a lab for design, modelling, proof of concept or even upgrade testing – and we are excited to formally announce the solution today. Ravello's Smart Labs are powered by a virtual overlay cloud with software-defined networking, with support for running existing VMware and KVM virtual appliances, in a fully fenced environment, with complete Layer 2 access in the public cloud. In addition, entire hypervisor labs with ESXi or KVM can be run on Ravello, creating a unique enabler for malware testing and security sandboxing. Virtual networks created in Ravello Smart Labs can be used for training, network modeling, planning for new security services, or examining "what-if" scenarios for the installed network.

Use Cases

Using Ravello, appliance vendors and their partners are able to rapidly provision demos, proof-of-concept labs and test environments for their virtual appliances. They can also accelerate training for their entire ecosystem ahead of a new software release. Enterprises can quickly run upgrade tests against their own production deployments when new versions of networking and security virtual appliances are released, by replicating their data center environments in the public cloud and creating their own Smart Labs on Ravello.
Guides to Setting Up Sample Networking & Security Smart Labs on Ravello

Besides access to full Layer 2 networking in the cloud, Ravello enables most existing VMware and KVM virtual appliances to run unmodified on AWS and Google Cloud. Users are already running Smart Labs for popular virtual appliances. Below is a list of sample labs with detailed how-to guides on setting them up:

- Setting up a Palo Alto Networks lab on Ravello
- Setting up an F5 Networks lab on Ravello
- Setting up a Citrix NetScaler lab on Ravello
- Setting up a Barracuda lab on Ravello
- Setting up a Fortinet lab on Ravello

Pricing

Pricing starts at $0.14/hr for 2 CPU/4 GB RAM chunks. See our pricing calculator for more detail.

How to get started

To use Ravello’s Smart Labs, you can start with a free trial by signing up here. After activating your account, here are some how-to guides that illustrate how to set up various aspects of your networking and security labs in the cloud:

- How to keep your static IP intact when you move to AWS
- Adding unlimited NICs per VM in AWS or Google
- Changing the MAC addresses on your virtual machines and virtual appliances
- Promiscuous mode on AWS - using port mirroring for packet capture in the cloud
- How to set up VLANs on AWS or Google Cloud
- Using an external router in your Ravello application environment

Please reach out to us at support@ravellosystems.com if you need any help.
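Based on the pricing quoted above ($0.14/hr per 2 CPU/4 GB RAM chunk), a back-of-the-envelope lab cost estimate is easy to sketch. Note the `hourly_cost` helper and its round-up-to-whole-chunks rule are illustrative assumptions on our part; use the pricing calculator for actual numbers.

```python
import math

RATE_PER_CHUNK_HOUR = 0.14   # USD/hr, per the pricing above
CHUNK_CPUS, CHUNK_RAM_GB = 2, 4

def hourly_cost(cpus, ram_gb):
    """Estimate hourly cost, assuming a VM is billed as whole 2 CPU / 4 GB chunks."""
    chunks = max(math.ceil(cpus / CHUNK_CPUS), math.ceil(ram_gb / CHUNK_RAM_GB))
    return chunks * RATE_PER_CHUNK_HOUR

# e.g. a 4 CPU / 8 GB lab VM rounds up to 2 chunks
print(round(hourly_cost(4, 8), 2))  # 0.28
```

Summing `hourly_cost` over the VMs in a lab, times the hours it runs, gives a quick budget for a training or POC session.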


How to set up VLANs on AWS using Ravello?

Ravello’s software-defined networking overlay on AWS & Google Cloud exposes a clean Layer 2 interface to the guest VMs running on top. The networking overlay enables all the capabilities one has access to in a datacenter environment, including VLANs, which AWS & Google Cloud cannot natively support. VLANs are extremely useful for network segregation in a variety of scenarios:

- Move from a flat network to a segregated network without changing IPs
- Use the same firewall interface for segregated networks
- In multi-tenancy operations such as hosting, share the same network between multiple customers without risking a data breach
- Segregate traffic for each host in VDI deployments

VLAN tags can be applied in:

- Access mode: End hosts are typically connected to access switch ports. Although an access port is a member of a VLAN, traffic on it never carries the VLAN tag, because the end host’s LAN card does not understand the tag. To facilitate access mode, Ravello’s SDN strips off the VLAN tag before passing the packet to the guest VM on top.
- Trunk mode: Trunk switch ports multiplex traffic for multiple VLANs over the same physical link. Each device at the end of a trunk port must be capable of adding and removing the VLAN tags. To facilitate trunk mode, Ravello’s SDN lets the VLAN tag pass through to the guest VM on top without modification.

This video walks through setting up VLAN tags on Ravello.
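To make the access vs. trunk distinction concrete, here is a small Python sketch of the frame format (an illustration only, not Ravello code): an 802.1Q tag is a 4-byte field after the source MAC, announced by EtherType 0x8100, with the VLAN ID in the low 12 bits of the TCI.

```python
import struct

TPID_8021Q = 0x8100  # EtherType value that marks an 802.1Q-tagged frame

def vlan_of(frame: bytes):
    """Return the 802.1Q VLAN ID of an Ethernet frame, or None if untagged."""
    (ethertype,) = struct.unpack_from("!H", frame, 12)  # field right after dst+src MACs
    if ethertype != TPID_8021Q:
        return None
    (tci,) = struct.unpack_from("!H", frame, 14)
    return tci & 0x0FFF  # VLAN ID lives in the low 12 bits of the TCI

def access_port_rx(frame: bytes) -> bytes:
    """What an access port does: strip the tag before handing the frame to the host."""
    return frame[:12] + frame[16:] if vlan_of(frame) is not None else frame

# A minimal tagged frame: zeroed dst/src MACs, 802.1Q tag for VLAN 100, IPv4 payload
tagged = bytes(12) + struct.pack("!HH", TPID_8021Q, 100) + b"\x08\x00payload"
print(vlan_of(tagged))                  # 100
print(vlan_of(access_port_rx(tagged)))  # None (tag stripped, as in access mode)
```

In trunk mode the frame would be delivered unmodified, so the guest sees `vlan_of(frame) == 100` and must handle the tag itself.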


How to keep Static IPs on AWS & Google Cloud?

Application owners feel challenged when they move their multi-VM, multi-tier environments from the data center to AWS & Google Cloud. The networking changes: you cannot keep the same IP addresses, netmask and networking interconnect on the public cloud that you have in your data center. Ravello’s software-defined networking (SDN) overlay gives you the ability to mirror your data center network on AWS & Google Cloud without making changes. This article goes over how to mirror the static IPs that you have in your data center on top of Ravello’s platform, so you can run your application on AWS & Google Cloud.

Here’s a quick video showing you how to set up static IPs for interfaces on Ravello. [video url="https://www.youtube.com/watch?v=_ofUqNlDGjo"]

As an example, I will walk you through how to mirror the static IPs that I have in my data center onto Ravello’s platform.

Data Center Setup

I have the following load-balancer network setup in my data center that I want to mirror on Ravello.

Ravello Setup

Ravello allows you to set up the network properties for each of the VMs, including static IPs. Navigate to the VM → Network tab to change the network settings. Scroll to the network interface that you want to associate a static IP with, and click on the ‘Static’ radio button for IP configuration. Next, you will be presented with fields to enter the static IP, netmask, gateway and DNS server for this interface.

Ravello’s SDN automatically creates a virtual switch based on the IP address and netmask of the interfaces, and adds interfaces that are part of the same subnet to the same virtual switch. As an example, look at the following: webserver-1 is configured with an IP address of 11.168.0.2/24 and webserver-2 with 10.168.0.3/24. This prompts Ravello’s SDN to create two virtual switches, 11.168.0.0/24 and 10.168.0.0/24 (highlighted in red above).
To put both webserver-1 and webserver-2 onto the same switch, all you need to do is give them IP addresses that are part of the same subnet with the same netmask, and Ravello’s SDN will put both network interfaces onto the same virtual switch. For example, if we change the IP address of webserver-2 to 11.168.0.3/24, Ravello will put both of them on the same virtual switch, 11.168.0.0/24 (highlighted in red below).

Next, if we want to put both web servers onto the same virtual switch as the NetScaler load balancer, we assign webserver-1, webserver-2 and NetScaler’s internal interface IP addresses and netmasks on the same subnet. Changing the IP for webserver-1 to 192.168.0.2/24, webserver-2 to 192.168.0.3/24 and NetScaler’s internal interface to 192.168.0.1/24 leads to a new virtual switch, 192.168.0.0/24, on Ravello that joins these three interfaces (see below). With these changes, the network on Ravello mirrors my network in the data center.

Ravello’s SDN - behind the scenes

Here is how Ravello’s SDN behaves behind the scenes:

- Once a static IP and netmask are defined, Ravello’s SDN connects the VM’s network interface to the virtual switch to which the rest of the VMs on the same subnet are connected.
- If a default gateway is defined on Ravello’s user interface:
  - And the guest VM has a gateway IP defined at the application level (i.e. you have configured the VM to serve as a router), Ravello’s SDN will adhere to the VM’s gateway setting.
  - If the guest VM doesn’t have a gateway IP defined at the application level, Ravello’s SDN will add a virtual router with the defined gateway IP, and add it to the virtual switch.
- If a default gateway is not defined on Ravello’s user interface, Ravello’s SDN will simply do nothing.
- If a DNS IP is defined on Ravello’s user interface, Ravello’s SDN will add a leg to a DNS server with this IP, and connect it to the same virtual switch as the rest of the VMs on this subnet.

Using the constructs listed above, I can configure static IPs for my application and run it on AWS or Google Cloud using Ravello.
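The subnet-based grouping described above can be sketched in a few lines of Python using the standard `ipaddress` module (an illustration of the behavior, not Ravello’s actual implementation):

```python
from collections import defaultdict
from ipaddress import ip_interface

def virtual_switches(interfaces):
    """Group NIC names into 'virtual switches' keyed by their subnet,
    mimicking the SDN behavior described above."""
    switches = defaultdict(list)
    for name, cidr in interfaces:
        switches[ip_interface(cidr).network].append(name)
    return {str(net): sorted(names) for net, names in switches.items()}

# Different subnets -> separate switches, as in the first example above
print(virtual_switches([("webserver-1", "11.168.0.2/24"),
                        ("webserver-2", "10.168.0.3/24")]))
# After moving everything onto 192.168.0.0/24, one switch joins all three NICs
print(virtual_switches([("webserver-1", "192.168.0.2/24"),
                        ("webserver-2", "192.168.0.3/24"),
                        ("netscaler-int", "192.168.0.1/24")]))
```

The key point the sketch captures: switch membership is derived purely from IP address plus netmask, which is why changing an interface’s address is enough to rewire the topology.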


Virtual Routers on Ravello

Ravello’s SDN enables access on the public cloud to all the networking constructs available in data center environments. This allows one to mirror a data center setup on AWS or Google Cloud. One of these networking constructs is a virtual router.

Ravello creates a virtual router in an ‘application’ when a gateway is defined for one of the network interfaces associated with a VM in the application. As an example, the NetScaler VM’s management interface has a gateway (10.1.10.1) defined. Due to the presence of the gateway in the configuration, Ravello’s SDN creates a virtual router with a routable path, and connects it to the virtual switch (10.1.10.0/24) that NetScaler’s management interface is plugged into. If I want the external interface on my NetScaler to be routable as well, all I need to do is define a gateway IP for that network interface, and Ravello will hook the virtual switch corresponding to this interface to the virtual router as well (see below).

If you are interested in bringing your own Juniper, Cisco or other external virtual router for access to an advanced feature set, or to mirror your data center deployment, that is certainly possible as well. Ravello’s platform allows one to run VMware and KVM appliances ‘as-is’. One can easily import a Cisco, Juniper or any other external virtual router appliance into an application, define the IP address of the external virtual router appliance to be the default gateway for the other VMs in the application, and assign a default gateway to the external virtual router appliance so that it is connected to Ravello’s built-in virtual router. The built-in router in such scenarios simply acts as a forwarding engine.

Interested in deploying an external virtual routing appliance? We have published several how-to guides, including Juniper, Cisco, Arista Networks, Citrix NetScaler, F5 BIG-IP, Palo Alto Networks, Fortinet FortiGate, Barracuda and pfSense.
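The gateway-driven rule above can be modeled in a short sketch: a virtual-router leg is attached to a subnet's switch exactly when a NIC on that subnet declares a gateway. This is an illustrative model only; the NIC names and the `10.1.10.5` management address are hypothetical (the post only specifies the 10.1.10.1 gateway and 10.1.10.0/24 switch).

```python
from ipaddress import ip_address, ip_interface

def router_legs(nics):
    """For each NIC that defines a gateway, attach a virtual-router leg
    (subnet, gateway IP) to that NIC's switch; NICs without a gateway
    leave their switch unrouted."""
    legs = []
    for name, cidr, gateway in nics:
        if gateway is None:
            continue  # no gateway defined -> no router leg for this switch
        net = ip_interface(cidr).network
        assert ip_address(gateway) in net, f"{name}: gateway must be on the NIC's subnet"
        legs.append((str(net), gateway))
    return legs

# NetScaler example: management NIC has a gateway, external NIC does not (yet)
print(router_legs([("ns-mgmt", "10.1.10.5/24", "10.1.10.1"),
                   ("ns-ext",  "10.2.10.5/24", None)]))
# [('10.1.10.0/24', '10.1.10.1')]
```

Adding a gateway to `ns-ext` would produce a second leg, mirroring the "hook the virtual switch to the virtual router as well" behavior described above.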


Adding NICs and changing MAC addresses in AWS EC2

For networking and security enthusiasts, below is a short comparison of running your VMs in AWS/Google vs. the data center vs. Ravello. One of the main limitations of AWS/Google is that they don’t allow advanced L2 networking configurations, including VLANs, dynamic routing, IPv6 and multicast/broadcast traffic. With Ravello, your entire application environment is encapsulated and run in AWS/Google, which removes the L2 and L3 networking limitations that exist in AWS/Google.

|                                  | Ravello on top of AWS/Google | Data center | AWS/Google |
|----------------------------------|------------------------------|-------------|------------|
| Layer 2 networking support       | ✓                            | ✓           | ✕          |
| IPv6 support                     | ✓                            | ✓           | ✕          |
| High fidelity environment copies | ✓                            | ✓           | ✕          |
| Usage based pricing              | ✓                            | ✕           | ✓          |
| No migration required            | ✓                            | ✕           | ✓          |
| 1-click environment replication  | ✓                            | ✕           | ✕          |
| Share blueprints/snapshots       | ✓                            | ✕           | ✕          |
| Worldwide deployment             | ✓                            | ✕           | ✓          |

An example of Ravello’s networking flexibility is adding NICs to a VM and updating the MAC addresses. These capabilities are important for cases where MAC addresses need to remain the same once the VM is uploaded to the public cloud, while providing a very simple way to add as many NICs per VM as needed.

How to add multiple NICs to a VM

Simply navigate to the Network tab of your VM, and click the “Add +” link to add a new network interface.

How to change a NIC’s MAC address

Once you’re on the Network tab of a VM, uncheck the “Auto MAC” option, and then you can edit the MAC address as desired.
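When you do edit MAC addresses by hand (rather than preserving the original ones), it is good general practice to pick addresses from the locally-administered range so they cannot collide with vendor-assigned hardware MACs. This is generic networking advice, not a Ravello requirement; a small helper sketch:

```python
import random

def random_local_mac():
    """Generate a locally-administered, unicast MAC address.

    In the first octet: bit 1 set -> locally administered;
    bit 0 clear -> unicast (not multicast).
    """
    octets = [random.randint(0, 255) for _ in range(6)]
    octets[0] = (octets[0] | 0x02) & 0xFE  # set local bit, clear multicast bit
    return ":".join(f"{o:02x}" for o in octets)

def is_local_unicast(mac):
    """Check a MAC string is locally administered and unicast."""
    first = int(mac.split(":")[0], 16)
    return bool(first & 0x02) and not (first & 0x01)

mac = random_local_mac()
print(mac, is_local_unicast(mac))
```

Pasting such an address into the “Auto MAC”-unchecked field keeps the lab free of clashes with real hardware addresses.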


How to configure multiple NICs and multiple IPs per VM on AWS EC2

It is often required to configure more than a single network interface on your VMs, and sometimes one also needs to configure additional IPs on a given NIC. However, this is not as straightforward a task as it may sound, not even on the public cloud. On AWS, for example, you cannot “natively” define as many interfaces as you’d like, nor as many additional (secondary) IP addresses on each NIC, and in order to define a large number of NICs, you’ll need to lease the larger, more expensive hosts (see the list of maximum allowed ENIs and IP addresses on AWS). Ravello, however, uses nested virtualization to let you easily set multiple NICs and multiple IPs and run your application on AWS or Google Cloud. In this post I’ll illustrate the few simple steps required to achieve this.

Assigning Multiple NICs to a Single VM

Select the VM you want to edit, and go to the Network tab in the right-hand side VM properties pane. In the image you can see the list of already defined network interfaces, each with its own configuration of DHCP or static IP and external connectivity. To add a new interface, simply click the “Add +” link, and a new interface definition section will appear. Here you can define the NIC’s name, MAC address, device type, and IP address configuration. You can also define what type of external connectivity this NIC will have (public IP, Elastic IP or port forwarding). For more information about how you can configure your application’s network, please refer to our post about software-defined networking.

Assigning Multiple Secondary IPs to a NIC

You can add multiple secondary IP addresses to each of the NICs on your VMs. Adding a secondary IP is done in the following way:

1. Select the VM you want to edit, and go to the Network tab in the right-hand side VM properties pane.
2. Select the NIC you want to edit, and press “Advanced”. At the bottom of the NIC’s section you’ll notice the “Additional IP addresses” section.
3. To add a new secondary IP address, simply click the “Add +” link, and a new definition section will appear. Now you can define the additional IP address, netmask and gateway parameters. Note that a secondary IP address cannot be configured by a DHCP server, and must always be static.
4. You can also define what type of external connectivity this NIC will have (public IP, Elastic IP or port forwarding).

Summary

In this short post we illustrated how you can use Ravello to create multiple NICs and secondary IP addresses on AWS or Google Cloud, without being concerned with each cloud’s constraints and limitations. Using Ravello, configuring NICs and IPs is easy and straightforward, and allows you to focus solely on building your application and not on any infrastructure specifics. Feel free to sign up for a free trial and see how this facilitates your labs for development, testing, training, demos and more.
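The one constraint worth automating a check for is the rule above: secondary IPs must always be static, never DHCP. The dictionary schema below is hypothetical (invented for this sketch, not Ravello's API format), but the validation rule it enforces is the one stated in the post.

```python
def check_nic(nic):
    """Validate a NIC definition per the rule above: the primary address may be
    DHCP or static, but every secondary address must be static.
    `nic` uses a hypothetical dict schema: {"primary": {...}, "secondary": [...]}."""
    errors = []
    for ip in nic.get("secondary", []):
        if ip.get("mode") != "static":
            errors.append(
                f"secondary {ip.get('address')} must be static, not {ip.get('mode')}")
    return errors

nic = {"primary": {"mode": "dhcp"},
       "secondary": [{"mode": "static", "address": "10.0.0.10"},
                     {"mode": "dhcp",   "address": "10.0.0.11"}]}
print(check_nic(nic))  # flags only the DHCP secondary
```

A check like this is handy when generating many NIC definitions programmatically for large labs.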


Ansible SmartLab on Ravello

In some of our previous blog entries we’ve discussed Puppet for config management. Today we are going to talk about another, easier-to-set-up config management service: Ansible. Ansible is written in Python and uses YAML configuration playbooks to push out configurations. Ansible, like Puppet, is used to control what the final state of things should look like. Since we’ve discussed Puppet, it is only fair to stack the two up and see what is different.

| Puppet | Ansible |
|--------|---------|
| Master/agent model, with an agent installed on each client. | No particular master node. Any node with Ansible, a list of hosts, a playbook, and SSH keys will serve as master. |
| Pull model, where agents must check in to receive changes. | Push model, where the master uses SSH to kick off playbooks on clients immediately. |
| Puppet Domain Specific Language (DSL) used for manifests and classes. Can be extended with Ruby. | YAML playbooks, which can be extended with the Ansible Python API. |
| Can purchase Puppet Enterprise, which comes with support and an advanced UI to simplify management. | Can purchase Ansible Tower, which comes with support and an advanced UI to simplify management. |
| Puppet Labs website | Ansible website |

So, which is better? The answer to that is completely subjective. You have to ask which language better suits you, which model better suits your use case, and which method suits your workflow. There is no right answer to these questions. Puppet’s model is incredibly powerful and highly extensible, but it requires more setup on the front end, deploying and configuring agents. Ansible’s model is incredibly flexible, requiring only SSH keys and a list of hosts. Both have been adopted by large corporate enterprises. Some big names for Puppet are Red Hat, Cisco, ADP, and a long list of other major players. Ansible in a press release claims it added 300 major companies to its ranks in 2014 alone, including the likes of Twitter, Apple, Juniper, Grainger, WeightWatchers, SaveMart, and NASA.
Both projects are darlings of the open source community while also having business ventures to support them monetarily.

How do I deploy Ansible?

Well, in the Red Hat/CentOS world the answer is simple. Add the EPEL repository to your machine, run yum -y install ansible and watch the magic happen. You only need to install it on the host(s) you wish to use as master(s). Then you set up a list of the hosts you wish to manage, write a playbook that defines what the final state of those hosts should look like, and fire off your playbook.

That sounds easy, but I still want to practice.

Good for you. Hopefully my blogs are rubbing off on you and you have already started your Ravello trial, or better, have already become a full-fledged customer and want to kick the tires to plan and test your own Ansible implementation in a lab with your own system images on AWS and Google Cloud. Good news! I’ve created a blueprint called 11-Ansible-bp in Ravello to get you started. You can ask Ravello support to move this blueprint to your account.

11-Ansible-bp has Ansible set up on the host server.example.com, with an Ansible hosts file that includes client.example.com in a server group, and a sample playbook that deploys httpd with a templated config file and sets up a user. All root passwords are set to ravellosystems. You will need to use a key pair with this application, and the SSH user for that key pair is centos.

Get your Ansible playbook lab on AWS/Google Cloud

Let’s take a look at how I created this blueprint. I took the CentOS 7 generic cloud image found on CentOS.org, and added SSH keys and some repositories I use for this and other demonstrations. The hosts can SSH to and from each other as root with no worries. To use the lab, connect as the user centos with your key pair to server.example.com. Then run “sudo su -” to become root, and look at root’s shell history. Let’s look at the pertinent bits.
# First we install ansible
[root@server ~]# yum -y install ansible

# Then we create a hosts file setting up the group [webservers]
[root@server ~]# vim /etc/ansible/hosts
#---- Contents of hosts:
[webservers]
client.example.com

# Then we create the YAML playbook that deploys our webserver
[root@server ~]# vim /etc/ansible/web.yml
#---- Contents of web.yml (note, indents are important in YAML)
#---- Also, YAML files must begin with ‘---’.
---
- name: Deploy Apache to webservers group
  hosts: webservers
  vars:
    http_port: 80
  remote_user: root
  tasks:
  - name: ensure apache is at the latest version
    yum: pkg=httpd state=latest
  - name: write the apache config file
    template: src=/etc/ansible/httpd.conf.j2 dest=/etc/httpd/conf/httpd.conf
    notify:
    - restart apache
  - name: write the index.html for the website
    template: src=/etc/ansible/index.html.j2 dest=/var/www/html/index.html
    notify:
    - restart apache
  - name: ensure apache is running (and enable it at boot)
    service: name=httpd state=started enabled=yes
  handlers:
  - name: restart apache
    service: name=httpd state=restarted

# Here I’m installing apache to get a copy of httpd.conf
# to template
[root@server ~]# yum -q -y install httpd
[root@server ~]# cp /etc/httpd/conf/httpd.conf /etc/ansible/httpd.conf.j2

# Here I replace ‘Listen 80’ with the Jinja2 template variable
# http_port, thus ‘Listen {{ http_port }}’ in the template.
[root@server ~]# vim /etc/ansible/httpd.conf.j2

# Here I create an index.html to have it deploy.
[root@server ~]# vim /etc/ansible/index.html.j2

# Here I run the playbook and it deploys to the client.
[root@server ~]# ansible-playbook /etc/ansible/web.yml

So, let’s take a moment and break that down. In the hosts file we use a simple ini format. The ini sections indicate host groups. Any host you don’t want in a group goes before you declare your first group.
A more complex example might look like this:

lonewolf.example.com

[db]
bd[1:50].example.com

[web]
www[1:100].example.com

[foo]
foo.example.com
bar.example.com

In the above, we have 50 servers in the [db] group, 100 in the [web] group, one in the [foo] group and one not in any group. The only things needed on the client are Python 2.5 or greater and SSH keys set up from the Ansible host to each client, either as root or as an account with full sudo rights. Next we look at the playbook, web.yml.

---
- name: Deploy Apache to webservers group
  hosts: webservers
  vars:
    http_port: 80
  remote_user: root
  tasks:
  - name: ensure apache is at the latest version
    yum: pkg=httpd state=latest
  - name: write the apache config file
    template: src=/etc/ansible/httpd.conf.j2 dest=/etc/httpd/conf/httpd.conf
    notify:
    - restart apache
  - name: write the index.html for the website
    template: src=/etc/ansible/index.html.j2 dest=/var/www/html/index.html
    notify:
    - restart apache
  - name: ensure apache is running (and enable it at boot)
    service: name=httpd state=started enabled=yes
  handlers:
  - name: restart apache
    service: name=httpd state=restarted

In YAML we start the file with three dashes as a start-of-file indicator. Think of it like the HTML DOCTYPE declaration, only more strictly enforced. Indents and spacing matter, as Python is picky about such things. Don’t use tabs. I use two spaces per level of indent. You could use a different number, but it must be consistent. The first - name declares the name for the play. Make it descriptive and unique to that play. Future you will thank you: good play names are like love notes to your future self when you are trying to edit the play. We then declare the hosts, or in this case host group, to which this play applies. We then set a value for the template variable we use in httpd.conf.j2 so that when Ansible places the final file it has a legitimate value there. We then declare that this is to be done as root.
We could set a non-root user with sudo rights equally well, but my keys are set up as root. Then we start the tasks section, which is the real payload of the play. We name each task descriptively and uniquely, then tell it how to do the task. The first task ensures Apache is installed and updated. The second deploys our template, or checks for differences and deploys the updates if changed; if there are changes, it notifies the handler that then restarts the Apache service. The third deploys our actual web page. The fourth verifies the service is started and enabled. Finally we declare the handler the notify operations called.

Once the playbook is written and the template files are in place, it is time to deploy. Playbooks can be run as many times as you like, as often as you like. You could even set up a cron job to enforce the state you declare in the playbook at regular intervals. Playbooks can even call other snippets, called roles, to create modular playbooks. Templates can be declared to set values on the fly using Jinja2 markup. The options are endless.

To test the playbook, pull up your browser and browse to the public IP of your client server. It should have your handy dandy webpage ready to go. I encourage you to look at the files I have in /etc/ansible on server.example.com as a starting point. Then think about what you want to do. The Ansible documentation is a great place to get started. There are also great tips and tricks in the Ansible Cookbook. Ansible is yet another powerful tool in our DevOps arsenal. With simple setup and incredibly low overhead it is definitely worth consideration and testing. Enjoy testing Ansible with the Ravello blueprint and, as always, happy computing.
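The bracketed host ranges in the inventory example (e.g. www[1:100].example.com) expand to one host per number. Ansible's own inventory parser is more general (it also handles alphabetic and zero-padded ranges); this small sketch only covers the plain numeric [start:end] form, to show how 50 and 100 hosts come out of two lines:

```python
import re

def expand(pattern):
    """Expand an inventory host pattern like 'db[1:50].example.com'
    into the individual host names (numeric [start:end] ranges only)."""
    m = re.search(r"\[(\d+):(\d+)\]", pattern)
    if not m:
        return [pattern]  # plain hostname, nothing to expand
    start, end = int(m.group(1)), int(m.group(2))
    return [pattern[:m.start()] + str(i) + pattern[m.end():]
            for i in range(start, end + 1)]

print(len(expand("www[1:100].example.com")))  # 100
print(expand("bd[1:3].example.com"))
# ['bd1.example.com', 'bd2.example.com', 'bd3.example.com']
```

So the [db] and [web] groups above really do contain 50 and 100 hosts respectively, even though the inventory file stays two lines long.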


How to use promiscuous mode on AWS: using port mirroring for packet capture in the cloud

It is a well-known fact that one cannot use promiscuous mode (port mirroring) on AWS or Google Cloud. Yet packet capture on AWS is often the easiest, and sometimes the only, solution for testing use cases where traffic monitoring and deep packet analysis are required, such as IDS or networking application development, testing or training. Using Ravello, you can set up this kind of advanced network configuration for your application while running on the public cloud, AWS or Google Cloud. In this post, we’ll demonstrate how port mirroring can be configured for your application in a few simple steps.

For our demo we’ll set up a simple environment consisting of three VMs, each with two NICs. Two of the VMs will communicate with each other (using a simple PING command), and the third VM will listen to all traffic between them, by setting its NIC to “port mirror” mode (from within the Ravello VM properties). Since we’ll only use tcpdump for sniffing the network, we will not need to configure the VM’s NIC to promiscuous mode, but in other cases this may be a required configuration.

First, let’s see the configuration of our two communicating VMs:

VM #1: access10
- NIC #1: configured via DHCP with reserved IP address 10.0.0.3/255.255.0.0
- NIC #2: configured via DHCP with reserved IP address 30.0.0.3/255.255.255.0
- It is possible to set any NIC to communicate on a separate VLAN

VM #2: access20
- NIC #1: configured via DHCP with reserved IP address 10.0.0.5/255.255.0.0
- NIC #2: configured via DHCP with reserved IP address 30.0.0.5/255.255.255.0
- It is possible to set any NIC to communicate on a separate VLAN

Please note that if we had put the NICs in VM #1 and VM #2 on different VLANs, we would have needed another VM as a trunk between the two VLANs. Defining such a VM is also possible using Ravello; see our previous post about advanced networking on AWS EC2 for additional information.
The third VM is the one we’ll use for monitoring the traffic between the other VMs:

VM #3: promisc
- NIC #1: configured via DHCP with reserved IP address 10.0.0.7/255.255.0.0
- NIC #2: configured via DHCP with reserved IP address 30.0.0.7/255.255.255.0

Note that for NIC #2 in this VM, we have checked the option for port mirroring in the VM properties (and this is all we had to do!). Now, let’s perform our simple test: VM #1 and VM #2 will send each other ping requests (over ICMP), and VM #3 will monitor this traffic using tcpdump.

Summary

In this post we showed how easy it is to set up a promiscuous-mode NIC and port mirroring for our internal application network over AWS or Google Cloud. Thanks to our HVX technology and the resulting overlay network, Ravello is able to provide a fully functional Layer 2 and Layer 3 network which allows an accurate replica of your data center applications on the public cloud. This functionality is currently in beta; please contact our support team to enable it for your organization.
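Conceptually, promiscuous mode just disables a NIC's destination-MAC receive filter, while port mirroring additionally copies other ports' traffic onto the monitoring NIC. A toy model of the receive filter (illustrative only; the MAC addresses below are placeholders, not from the lab above):

```python
BROADCAST = "ff:ff:ff:ff:ff:ff"

def nic_accepts(frame_dst, nic_mac, promiscuous=False):
    """Would the NIC pass this frame up the stack?

    Normal mode: only frames addressed to this NIC (or broadcast).
    Promiscuous mode: everything, e.g. for tcpdump on a mirror port.
    """
    if promiscuous:
        return True
    return frame_dst in (nic_mac, BROADCAST)

# A frame between two other hosts is invisible normally, visible in promiscuous mode
print(nic_accepts("52:54:00:aa:bb:01", "52:54:00:aa:bb:07"))        # False
print(nic_accepts("52:54:00:aa:bb:01", "52:54:00:aa:bb:07", True))  # True
```

This is why tcpdump on the mirror NIC sees the ICMP exchange between VM #1 and VM #2, even though none of those frames are addressed to VM #3.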


Running Barracuda Firewalls in AWS or Google for Testing, Training, and POCs

Training. POCs. Testing. These are use cases where the ability to create multiple copies of the same environment quickly is key to success. Barracuda has recently announced plans to expand its partner channel, and has historically provided virtual appliances which can be run in AWS or Google Cloud. However, these providers don’t support IPv6 or Layer 2 networking functionality such as multicast, VMACs, gratuitous ARP or VLAN tagging. Ravello’s nested virtualization technology solves these problems by offering a fast way to replicate environments while maintaining full Layer 2 networking in AWS/Google.

Running Barracuda Firewalls in Ravello vs. Data Center vs. AWS/Google

|                                  | Ravello on top of AWS/Google | Data center | AWS/Google     |
|----------------------------------|------------------------------|-------------|----------------|
| Layer 2 networking support       | ✓                            | ✓           | ✕              |
| SMTP port 25 enabled             | ✓                            | ✓           | Manual process |
| IPv6 support                     | ✓                            | ✓           | ✕              |
| High fidelity environment copies | ✓                            | ✓           | ✕              |
| Usage based pricing              | ✓                            | ✕           | ✓              |
| No migration required            | ✓                            | ✕           | ✓              |
| 1-click environment replication  | ✓                            | ✕           | ✕              |
| Share blueprints/snapshots       | ✓                            | ✕           | ✕              |
| Worldwide deployment             | ✓                            | ✕           | ✓              |

The purpose of this article is to showcase how to take an existing environment running in VMware ESXi™ and run it in Ravello.

Sample Deployment

The deployment we used for this example is a typical environment consisting of the NG Firewall, the Spam Firewall, and an Exchange environment.

Deployment characteristics:
- VMware ESXi 5.5.0
- Barracuda Spam Firewall Virtual Appliance
- Barracuda NG Firewall Virtual Appliance
- Microsoft Exchange Server 2013 environment:
  - Windows Server 2012 Domain Controller
  - Windows Server 2012 Client Access Server
  - Windows Server 2012 Database
- Windows 7 client

Network setup

The network configuration is comprised of two components:
- Internal network → 192.168.68.0. This includes the Barracuda Spam Firewall, as well as the Microsoft Exchange environment.
- External network → 192.168.168.0. This includes the Barracuda NG Firewall, and the Windows 7 VM.
All internet traffic will pass through this firewall, while the Windows 7 VM is used to connect to the NG Firewall dashboard and test connectivity.

Setup steps

1. Import your VMs into Ravello
2. Create your Ravello application using the imported VMs
3. Set up networking in Ravello to match your existing environment
4. Test your setup
5. (Optional) Create a blueprint to easily replicate the environment as many times as needed

Step 1: Import your VMs into Ravello

At this step, you can either import your VMs directly from VMware vSphere™, or get the fresh virtual appliances from the Barracuda website, which you can also import into Ravello. For this example, we created an environment in VMware, which we then imported into Ravello. To import the VMs from vSphere, simply log into the Ravello product, navigate to Library > VMs, and then click on Import VM. This will open the Import Tool. Once the Import Tool loads in your web browser, choose the option “Extract and upload directly from VMware vCenter™ or vSphere”. This will prompt you to connect to your vSphere environment, where you can select the VMs to be imported.

Step 2: Create your Ravello application using the imported VMs

Once you log into Ravello, click on “Applications” and then select “Create Application”. Import your VMs by clicking on the “+” sign, after which you can filter the list by VM name. Simply drag and drop the VMs onto the canvas to add them to your application. There should be 6 VMs: Barracuda NG Firewall, Barracuda Spam Firewall, Windows 7, and 3 Windows Server 2012 machines for the Exchange environment.

Step 3: Set up networking in Ravello to match your existing environment

For each of the VMs, you’ll need to match the network configuration in Ravello to the one you had in your data center. To do so, simply click on each of the VMs, then navigate to the Network tab.
The IP address configuration & services are as follows:

Barracuda NG Firewall
- Elastic IP used to connect to the Internet: 85.190.178.53
- External IP used by the Win 7 VM to connect to the firewall: 192.168.168.50
- Internal IP used by the Spam Firewall & Exchange environment: 192.168.68.1. The Barracuda NG Firewall will be the gateway for the Spam Firewall.
- The NG Firewall services will also need to be updated to allow inbound email traffic and access to Exchange OWA from the Internet. To do so, click on the Services tab, and then add Supplied Services for 192.168.168.50: one for TCP port 25, and one for HTTPS port 443.

Windows 7
- IP: 192.168.168.70. Gateway: 192.168.168.1

Barracuda Spam Firewall
- IP: 192.168.68.55. Gateway: 192.168.68.1

Windows Server 2012 - Domain Controller
- IP: 192.168.68.52. Gateway: 192.168.68.1

Windows Server 2012 - Client Access Server
- IP: 192.168.68.53. Gateway: 192.168.68.1

Windows Server 2012 - Mail Database
- IP: 192.168.68.54. Gateway: 192.168.68.1

Step 4: Test your setup

To test the setup, log into Exchange OWA as one of the users, and try sending an outbound email. After receiving the email, try replying back to it, in order to test the inbound traffic.

Testing outbound email traffic

Testing inbound email traffic - reply to the email and verify that you received it in the Ravello Exchange environment

Step 5 (Optional): Create a blueprint to easily duplicate the environment as many times as needed

The purpose of this step is to highlight how easy it is to replicate the environment in Ravello once it has been created. This process makes it incredibly easy to create environments needed for training, POCs, and upgrade testing. The flow below describes how to do this in the Ravello UI, but the same functionality is also available through APIs, in order to do this programmatically, for hundreds and even thousands of environments. In your Ravello application, click on “Save as Blueprint”.
This will create a snapshot/copy of the environment, which can then be used to create other identical environments. Now, create another application based on the blueprint: navigate to Library > Blueprints, select the blueprint, and then click on Actions > Create Application. At this point, you have cloned the existing Ravello capsule/application. This can be done repeatedly through the UI, or programmatically through the APIs. More information on the API functionality can be found here.

Conclusion
The goal of this tutorial was to show how simple it is to replicate environments in Ravello for training, POCs and testing. What used to take weeks, even months, can now be done within minutes using Ravello’s first-of-its-kind nested virtualization technology. Simply run your entire environment in a Ravello capsule, without having to worry about provisioning, complex network configuration, or expensive licenses. For more information, we've put together a presentation highlighting the main use cases and benefits of using Ravello with Barracuda appliances here. Or if you'd prefer something more engaging, here you can find a recording of our webinar on how to use Barracuda with Ravello. You can learn more about Ravello's overlay network capabilities and set up your free trial to create your own Barracuda environment in the cloud.
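As a rough illustration of the programmatic path, the blueprint-to-application flow can be driven with the Ravello Python SDK. This is a sketch, not an official script: the client calls mirror the SDK usage shown in other posts on this blog, while the blueprint ID, credentials and naming prefix are placeholders you would substitute with your own.

```python
# Sketch: replicate a blueprint N times via the Ravello Python SDK.
# Assumptions: the ravello-sdk package is installed and you have valid
# credentials; the blueprint ID and names below are placeholders.
import copy

def blueprint_to_app(bp, name):
    """Turn a blueprint design into a new application definition."""
    app_def = copy.deepcopy(bp)
    app_def.pop('id', None)   # a new application must not reuse the blueprint id
    app_def['name'] = name
    return app_def

def clone_blueprint(client, blueprint_id, count, prefix='training-env'):
    """Create and publish `count` identical applications from one blueprint."""
    bp = client.get_blueprint(blueprint_id)
    for i in range(1, count + 1):
        app = client.create_application(blueprint_to_app(bp, '%s-%d' % (prefix, i)))
        client.publish_application(app)

# Demonstration of the pure part with a stand-in blueprint dict:
print(blueprint_to_app({'id': 7, 'name': 'bp'}, 'copy-1'))   # {'name': 'copy-1'}

# Usage against a live account (requires credentials):
#   from ravello_sdk import RavelloClient
#   client = RavelloClient(); client.connect()
#   client.login('user', 'password')
#   clone_blueprint(client, 52396048, 10)
#   client.logout(); client.close()
```

Keeping the id-stripping and naming logic in its own function makes it easy to test without a live Ravello connection.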


How to prepare an NFS shared storage server for a VMware ESXi lab

As most of you probably know, besides implementing a hypervisor capable of running regular VMs, we’ve also implemented CPU virtualization extensions in software - VT-x for Intel or SVM for AMD CPUs. These extensions, in essence, allow running other hypervisors such as KVM or VMware ESXi™ in the public cloud on top of Ravello. In this blog I’m going to focus on installing and configuring an NFS image template that can later be used as a shared datastore in a VMware datacenter. We recently held a webinar discussing how to build ESXi labs on AWS/Google Cloud. Enjoy the webcast...

Webcast: https://www.youtube.com/watch?v=h9byjFw5omQ

Prerequisites
If you do not already have an account in Ravello, please first open a trial account.

Overview
This article describes how to build simple shared storage using an NFS server in 10 minutes on top of Ravello. If you want a fancier NFS server with multiple users, roles, LDAP integration etc., check out the internet for your favorite OS flavour (for example, see here).

Creating your virtual machine
To simplify the process we’ll use one of the preconfigured Ubuntu virtual machines delivered by Ravello.
Create a new application in Ravello (you can use an existing one if you prefer, it makes no difference here).
Drag a new Ubuntu vanilla VM from the library (at the time this blog was written the latest version is 14.04.1) onto the canvas.
Select a key pair, or create a new one (in the General tab on the VM properties pane).
Optional: Set the Name and Hostnames in the Ravello UI (General tab) to NFS or any hostname you would like to use in your application.
Change the number of CPUs and memory size to your needs.
Change the disk size or add disks if needed (note: this tutorial assumes one hard disk; you can create multiple disks at this stage, just make sure you adjust the steps below accordingly).
Publish the application to your favorite cloud. The new VM will be fully started within about 5 minutes.
Install and configure a simple NFS server
SSH into the newly created VM using the DNS name (as it appears on the VM summary tab) and your private key:

ssh -i <key> ubuntu@<DNS name>

Once in, do the following:

sudo apt-get update
sudo apt-get install nfs-kernel-server -y
sudo mkdir /nfs
sudo chmod 777 /nfs

Add the following line to /etc/exports:

/nfs *(rw,async,no_subtree_check,no_root_squash)

Then restart the service:

sudo service nfs-kernel-server restart

Done - your new NFS server is ready. The VM is listening on the hostname you defined, and the exported directory is /nfs.

Saving NFS to the library
Once the NFS server is properly installed and configured, save it to Ravello’s library for future use. Read here how to save the NFS VM to Ravello’s library.

VMware product names, logos, brands, and other trademarks featured or referred to in the ravellosystems domain are the property of VMware. VMware is not affiliated with Ravello Systems or any of Ravello System's employees or representatives. VMware does not sponsor or endorse the contents, materials, or processes discussed on the site.


To vCloud Air or AWS? The million dollar dilemma

Enterprises are trying to decide whether to embrace AWS or vCloud Air as part of their cloud strategy. This post looks into the benefits of each, and introduces Ravello as an option that brings together the best of both worlds. Interested in seeing the benefits for yourself? Sign up for a free Ravello trial.

With IT budgets shrinking, enterprises across the globe are faced with a million dollar question - can I stretch my budget by moving away from capex and embracing opex? Many companies are looking to reduce their data center footprint (capex) and move their workloads to the cloud (opex) where possible. In this endeavour, IT departments are trying to figure out - should I go with AWS (the market leader in the public cloud), or choose VMware vCloud™ Air (the public cloud from VMware - the market leader in data center virtualization)?

Criteria | AWS | vCloud Air
Ability to run VMware VMs without migration | Needs AMI migration | No migration needed, as vCloud Air runs VMware vSphere™
Flexibility of VM CPU & memory combination | Provides multiple instance sizes, but doesn’t provide all combinations | Allows one to create any combination of CPU and memory for the VM
On-demand addition of vCPU, memory and disk | Unsupported | Supported
Services | Extremely rich set of IaaS and PaaS | Extremely light compared to AWS
Layer 2 networking | Unsupported | Supported
Scalability | Very elastic due to much larger infrastructure | Much lighter compared to AWS
Geographical spread | 8 geographical regions (multiple locations in some regions) | 5 geographical regions (multiple locations in some regions)
OS support | Wider choice of supported OS | Light compared to AWS
High availability | Possible through deployment in multiple regions | Hot redundant capacity and instant restart of affected VMs in case of failure
Marketplace for virtual appliances | Rich collection of virtual appliances and services | Lighter compared to AWS Marketplace
Management tools | AWS has its own management interface and supports REST APIs for integration | vCloud Air can be managed using VMware tools (e.g. vSphere client, vCAC) in addition to REST APIs for integration
Infrastructure maturity | More mature | Less mature compared to AWS
Pricing (4 vCPU, 15GB memory, 80GB SSD storage deployed in US) | $0.28/hour | $0.39/hour

So, which is the preferred option - AWS or vCloud Air? Well, it depends on what matters to you! If Layer 2 networking, integration with existing VMware tools, and running VMware VMs without migration are important, vCloud Air is a great option. If you are looking for a mature & scalable cloud with a rich set of services and a vast geographical reach, clearly AWS is the way to go. Ravello, a nested virtualization platform, enables running VMware VMs with Layer 2 networking on public clouds such as AWS and Google, bringing together the best of both worlds. Using Ravello, one can create an exact replica of a VMware based data-center environment on the public cloud (see below). This unique capability enables many use-cases. Interested? Just sign up for a free Ravello trial, and drop us a line - we can get your VMware & KVM VMs running ‘as-is’ on AWS/Google using Ravello in no time.
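To put the hourly prices from the comparison in perspective, here is a quick back-of-the-envelope calculation. The 730-hour month is our assumption for continuous 24/7 running; the rates are the ones quoted above.

```python
# Rough monthly cost comparison for the quoted configuration
# (4 vCPU, 15GB memory, 80GB SSD in the US), assuming 24/7 usage.
aws_hourly, vcloud_hourly = 0.28, 0.39
hours_per_month = 730   # ~365 * 24 / 12

aws_monthly = round(aws_hourly * hours_per_month, 2)        # 204.4
vcloud_monthly = round(vcloud_hourly * hours_per_month, 2)  # 284.7
print(aws_monthly, vcloud_monthly, round(vcloud_monthly - aws_monthly, 2))
```

Of course, with Ravello-style usage-based billing, VMs that only run during working hours or test runs cost a fraction of these always-on figures.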


Step-by-step guide: Setting up CI/CD with Ravello, Jenkins and Chef on Amazon EC2

I recently presented at an Amazon (AWS) EC2 user group meet-up in Toronto, Canada. The idea was to demo some popular tools being used today rather than only provide a Ravello product demo. It turned out to be a very technical, dynamic group with lots of interactive discussion and technical questions. As promised, I put together a detailed blog that describes the talk and provides some technical details for anyone who wants to try it out. In the end, I hope this blog is detailed enough for anyone interested in learning the various technologies to integrate some cool tools like Jenkins and Chef with Ravello Systems. Here is a link to the presentation; most of the really good content is in the blog. First off: "Why should you be looking at Continuous Deployment?" and "How does Ravello's solution get this concept off the ground so quickly?". With Continuous Deployment, you need to move from big, slow, and expensive processes to small, frequent and continuous ones. You need to know that this will happen in an automated fashion and be able to receive quick feedback on the test results. This gives everyone involved in the pipeline quick feedback and, in most cases, accurate results. Generally people move in this direction to be "Agile" or drive "Agility" in their project. It's also a requirement if you ever want to move even closer to Continuous Deployment, where every change goes through the pipeline and automatically gets put into production. In most cases you need a dedicated virtual environment for this testing, plus lots of custom scripts and tools just to build the underlying framework or infrastructure to run the initial tests (compute, network, software stack and so on). Ideally this is all kicked off with a single call, via an API. You will see in the blog below that Ravello's solution makes this super easy and also leverages the agility and endless capacity of the public cloud.
You're able to pay for the capacity you need, and also move to a cost model where you pay for the running time of your test(s). You will see below that the concept of a known "blueprint" is the perfect starting point to drive "Continuous Deployment". Feel free to check out more about "Continuous Deployment / Integration" concepts. Here is some information on how to get it all working and learn how you can completely automate application deployment via Jenkins and Chef. This can be a first step in getting to Continuous Deployment and Continuous Integration with an application of your choosing. Let's start by creating a new application inside your Ravello Organization. First sign up for a free trial. Here is a link to our developer centre and API. Drag your virtual machine into your new application canvas (I am using a base Ubuntu 12.04 I uploaded based on a .qcow2 file). We provide a base image in your organization that will get you started. Go ahead and configure your Ubuntu server to your liking... you can see the VM Editor on the right after you click on the VM. You will also want to create a key pair so you can ssh and log in to this VM when it is booted up. This is the base image you can use from your new ORG. You can create the key pair here. Go ahead and rename your VM to jenkins-01. My base Ubuntu VM does not use a key pair, for example (FYI - we also support installing from an .ISO). I am going to use a 20GB file system to get started. I will keep the networking simple and use the DHCP service provided by the Ravello Service. Let's make sure we open up some service ports so we can access the virtual machines from our local client. Jenkins will come up on port 8080 so we need to open that up, as well as port 22 so we can ssh to the virtual machine. Alright, we are ready to publish our (soon to be) Jenkins virtual machine. In this example I want to make sure I deploy to Amazon - Virginia; I choose performance optimized and will run for 6 hours.
* Note - When Ravello auto-stops your application, we take a full snapshot/backup so you can simply come back and start it up at any time you like. You will see your new virtual machine start to deploy - you can get the connection information from the bottom left corner. When you deploy to the public cloud you can choose to have a public IP address; you can also choose to have an Elastic IP address. You can make these selections under the VM Editor I described above, in the External Access section. You will see your machine boot up in a few minutes - click on the console option in the bottom left corner. If you are quick you can watch the machine boot. You can connect to the console using the button in the bottom right corner.
*If you configured your VM with a key pair you will have to ssh to the public IP using your ssh key, for example: ssh -i <key> ravello@85.190.182.100
Now it is time to make sure we can ssh to our new virtual machine. You can get the connection information from the bottom right corner after you click on the VM. ssh to your virtual machine (note I don't have a key pair configured):

ssh ubuntu@85.190.182.100

I like to set my host name right away (if we had cloud-init installed in this virtual machine, Ravello would look after that for us).

ubuntu@ubuntu:/$ sudo su
root@ubuntu:/# nano /etc/hostname
root@ubuntu:/# reboot

Here is what my filesystem looks like:

ubuntu@jenkins-01:/$ sudo su
root@jenkins-01:/# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        20G  786M   18G   5% /
udev            998M  8.0K  998M   1% /dev
tmpfs           401M  204K  401M   1% /run
none            5.0M     0  5.0M   0% /run/lock
none           1002M     0 1002M   1% /run/shm
root@jenkins-01:/#

Time to install Jenkins
Before we can install Jenkins, we have to add the key and source list to the VM. This is done in 2 steps; first we'll add the key:

root@jenkins-01:/# wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | apt-key add -
OK

Next run this:
echo deb http://pkg.jenkins-ci.org/debian binary/ > /etc/apt/sources.list.d/jenkins.list

I like to make sure I update my Ubuntu server before I move on...

apt-get update

We can proceed with installing Jenkins. Note that Jenkins has a whole bunch of dependencies, so it might take a few moments to install them all.

apt-get install jenkins

You will see lots going on here...

Adding debian:Go_Daddy_Class_2_CA.pem
done.
Processing triggers for libc-bin ...
ldconfig deferred processing now taking place
root@jenkins-01:/#

Now it's time to install the Ravello SDK. There are a few prerequisites, so let's look after those first. Install the following packages:

sudo apt-get install python-pip python-dev build-essential
sudo pip install --upgrade pip

Install pip (if needed):

sudo easy_install pip

Then install the Ravello SDK:

sudo pip install ravello-sdk

The Ravello SDK gets installed in the following location:

root@jenkins-01:/usr/local/bin# ls
pip  pip2  pip2.7  python-scripts  ravello-create-nodes  ravello-set-svm  ravello-set-uuid
root@jenkins-01:/usr/local/bin#

Let's first test all this out manually before we move on to the Jenkins piece. I like to create a directory in here for my scripts - I called it "python-scripts". Jenkins is installed in /var/lib/jenkins. Create a file and drop in the script below:

sudo nano python_manual.py

For our test we need a blueprint - let's just make a blueprint out of the existing application we are using. From the Ravello UI, click "Save as Blueprint" and give it a name. Optional - if you want to test via the python script before you do the Jenkins integration, here is the python script... We have some sections commented out as we are just testing to see if this all works. If you are wondering where you get the blueprint ID - choose a blueprint from inside the Ravello UI and look at the URL showing - the ID is in that URL. Go under Library - then Blueprints - and click on your new blueprint.
Example - 52396048
https://cloud.ravellosystems.com/#!blueprint;appid=52396048;pkey=p3

# here is our manual script - this will deploy to a cost optimized cloud
# manual python script
import os
from ravello_sdk import *
import time

def createClient():
    client = RavelloClient()
    client.connect()
    client.login('YOUR_USERID', 'YOUR_PASSWORD')
    return client

client = createClient()

# gets the blueprint design, sets the name to be that of the application we
# intend to publish, and creates+publishes the application
bp = client.get_blueprint(int('YOUR_BLUEPRINT_ID'))
bp['name'] = 'YOUR_APPLICATION_NAME'
del bp['id']
app = client.create_application(bp)
client.publish_application(app)

# gets the application and waits until all vms are published
app_id = app['id']
app = client.get_application(app_id)
numOfVms = len(app['deployment']['vms'])
started = [False for i in xrange(numOfVms)]
allStarted = False
while not allStarted:
    app = client.get_application(app_id)
    for it in xrange(numOfVms):
        started[it] = (app['deployment']['vms'][it]['state'] == "STARTED")
    # note that this assumes all vms start and there is no error
    allStarted = reduce(lambda x, y: x and y, started)
    time.sleep(30)

# logs out of the ravello client
client.logout()
client.close()

# we will use this section later on
# writes the names and dns addresses; the format is arbitrary for readability
# and can be changed as you see fit
#filename = "{0}{1}-{2}.params".format(os.getenv('JOB_NAME'), os.getenv('BUILD_NUMBER'), 'dns')
#with open(filename, 'w+') as parFile:
#    parFile.write("Name - DNS pairs of the machines in the application {0}\n".format(app['name']))
#    for vm in app["deployment"]["vms"]:
#        parFile.write("{0}={1}\n".format(vm['name'], str(vm["networkConnections"][0]["ipConfig"]["fqdn"])))
#        for service in vm["suppliedServices"]:  # writes the external port of any external service (for the case of port forwarding)
#            if service["external"]:
#                parFile.write("  {0}={1}\n".format(service['name'], service["externalPort"]))

Save that file and
let's give it a try...

root@jenkins-01:/usr/local/bin/python-scripts# ls
python_manual.py

Run this after you have saved your changes:

root@jenkins-01:/usr/local/bin/python-scripts# python python_manual.py

If you flip back to your Ravello UI, you should see your application get created and then published to the public cloud. If yes, you are good and we can move on to the Jenkins integration... Let's open a web browser and go to the Jenkins URL - again, you get this from the bottom right section of the UI after you click on the VM - for example http://85.190.182.100:8080 - you can also just click the open link. In my case: http://85.190.182.100:8080/ You should see Jenkins ready to go... We don't have any security set up for this example. Let's first install the "python" plugin... Make sure you click on the "Available" tab and search for python. *If nothing comes up here, click on "Advanced" and update at the bottom of that screen, then "download and install after restart". Now let's create a new item or project. Choose a freestyle project. You can follow along with the sections I make... Since we want to automate this and not have to edit the script each time, we want this to be a "parameterized" project. I included the parameters below for your reference. Here are the "parameters". Below, we want to select "Execute Python Script" - this comes from our new plugin... Here is the script we want to run... This time you don't need to edit the script - we will run it with "parameters".
# this will be the script we use for this one - note we deploy to Amazon Virginia in this example
# jenkins python script
import os
from ravello_sdk import *
import time

def createClient():
    client = RavelloClient()
    client.connect()
    client.login(os.getenv('ravello-username'), os.getenv('ravello-password'))
    return client

client = createClient()

# gets the blueprint design, sets the name to be that of the application we
# intend to publish, and creates+publishes the application
bp = client.get_blueprint(int(os.getenv('blueprint_id')))
bp['name'] = os.getenv('application_name')
del bp['id']

app = client.create_application(bp)
req = {"preferredCloud": "AMAZON",
       "preferredRegion": "Virginia",
       "optimizationLevel": "PERFORMANCE_OPTIMIZED",
       "startAllVms": "true"}
client.publish_application(app, req)

# gets the application and waits until all vms are published
app_id = app['id']
app = client.get_application(app_id)
numOfVms = len(app['deployment']['vms'])
started = [False for i in xrange(numOfVms)]
allStarted = False
while not allStarted:
    app = client.get_application(app_id)
    for it in xrange(numOfVms):
        started[it] = (app['deployment']['vms'][it]['state'] == "STARTED")
    # note that this assumes all vms start and there is no error
    allStarted = reduce(lambda x, y: x and y, started)
    time.sleep(30)

# logs out of the ravello client
client.logout()
client.close()

# writes the names and dns addresses; the format is arbitrary for readability
# and can be changed as you see fit
filename = "{0}{1}-{2}.params".format(os.getenv('JOB_NAME'), os.getenv('BUILD_NUMBER'), 'dns')
with open(filename, 'w+') as parFile:
    parFile.write("Name - DNS pairs of the machines in the application {0}\n".format(app['name']))
    for vm in app["deployment"]["vms"]:
        parFile.write("{0}={1}\n".format(vm['name'], str(vm["networkConnections"][0]["ipConfig"]["fqdn"])))
        for service in vm["suppliedServices"]:  # writes the external port of any external service (for the case of port forwarding)
            if service["external"]:
                parFile.write("  {0}={1}\n".format(service['name'], service["externalPort"]))

Now let's run our new Jenkins job! Go back to the main Jenkins screen and find your new job - click on it and then select "Build with Parameters". You will be asked to input your information - these are the items we edited manually earlier. After you hit build, click on the running job on the left and then look at the Console Output. Here is the output. You will notice your application has been deployed to the cloud... Look inside and you will see all the VMs being published... In my example I had a Windows VM deployed in my blueprint also. Have a look at the Jenkins console - this is what success looks like... After a couple of minutes you will see all the VMs are started and accessible. Now we can move on to doing some additional activities with our new application environment. But we need to know all the information about the virtual machines. Our script used the API to obtain all the information after the application was deployed and started. We wrote that information here on the Jenkins virtual machine:

/var/lib/jenkins/jobs/ravello-pythonsdk-demo/workspace

If you look inside the job file, all your VM information is there...
root@jenkins-01:/var/lib/jenkins/jobs/ravello-pythonsdk-demo/workspace# cat ravello-pythonsdk-demo3-dns.params
Name - DNS pairs of the machines in the application awsdemoapp
jenkins-01=jenkins01-awsdemoapp-3cuoxeci.srv.ravcloud.com
 web=8080
 ssh=22
Windows2k8r2-01=windows2k8r201-awsdemoapp-0ur6h6st.srv.ravcloud.com
 rdp=3389

In conclusion, this was a quick example of how to get started with Ravello Systems and integrate Jenkins to start driving some nice automation, and to start embracing the inner devops in all of us...

[video url="https://www.youtube.com/watch?v=-P-bZo1HKq0"]

In my next blog I will show you how to use this same process but integrate your virtual machines with Chef. This will allow you to dynamically build your application components each and every time after a blueprint is deployed. Good luck, and let us know how you get along...
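As an aside, the publish-and-wait loop in the scripts above is Python 2 (xrange, bare reduce). Here is a sketch of the same polling logic in Python 3, factored so the pure check can be exercised without a live Ravello connection; the client object and field names follow the scripts above.

```python
# Sketch: Python 3 version of the "wait until all VMs are STARTED" loop.
# Assumes a client with get_application(), as in the scripts above.
import time

def all_vms_started(app):
    """True when every VM in the application description reports STARTED.
    Note: like the original, this assumes no VM ends up in an error state."""
    return all(vm['state'] == 'STARTED' for vm in app['deployment']['vms'])

def wait_until_started(client, app_id, poll_seconds=30):
    """Poll the application until every VM reports STARTED."""
    while True:
        app = client.get_application(app_id)
        if all_vms_started(app):
            return app
        time.sleep(poll_seconds)

# The check itself can be tested with a stand-in application description:
print(all_vms_started({'deployment': {'vms': [{'state': 'STARTED'},
                                              {'state': 'STARTED'}]}}))   # True
```

Separating the state check from the polling loop also makes it easy to add a timeout or error-state handling later.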


No Hardware Required - Testing VMware Integration and Management Tools on Ravello

As Solution Architects for Scalar Decisions, we are often tasked with obtaining a thorough understanding of the integration process between solution components that will operate within a customer environment. We are also often asked to provide our personal (and professional) opinion on different products that compete with each other. In order to provide experience-based knowledge and opinions to our customers, we rely heavily on lab environments to test and evaluate potential solutions. Scalar Decisions does own and operate hardware-based lab resources to assist with these requirements; however, these systems are often:

Difficult to reconfigure
Difficult to maintain consistency
Difficult to gain access to
Difficult to book or audit activities

We are also undergoing an office move which will make it difficult to maintain lab availability for a few months while the new facilities are built out. The overall theme here is that owning our own lab with a high rate of change can be very difficult. To avoid sliding back into the “lab on a laptop” environment that many of our team members were used to, we began to adopt Ravello. Ravello allowed us to share common and standardized environments (via blueprints) between team members, provided segregation and flexibility between environments, and dramatically reduced the time it took us to build or expand a lab environment. Unfortunately there were many environments [that relied on VMware integration] that we could not move to Ravello. Until today! With nearly 100% of our customer base running VMware somewhere in their environment, the majority of our environment integration testing requires VMware to be available. Now that we have the ability to operate virtualized VMware ESXi™ and VMware vCenter™ instances within Ravello, any non-hardware related lab or demo can be moved to the cloud. As of this writing, our first Ravello blueprint leveraging ESXi images was built for a RedHat CloudForms workshop/class.
This particular blueprint consists of:

1 Instructor RedHat CloudForms Instance
3 Student RedHat CloudForms Instances
2 VMware ESXi 5.5 Instances
1 VMware vCenter 5.5 Instance
1 MS Windows Server Instance
1 FreeNAS Instance
4 Separate Network Subnets

Sharing the above infrastructure with 30-40 technical resources (from our Eastern offices) can be a challenge when everyone has to share the same ESX hosts and vCenter servers. In Ravello, I can say that the initial setup of this lab took nearly 3-4 hours, and it can be copied and published to another cloud in 6.06 minutes! 6.06 MINUTES!! Even if we had unlimited hardware resources in the lab, we still couldn’t replicate that entire environment in 6 minutes (nor would I want to). The following screenshot outlines the resource requirements for this environment as well as the hourly cost. The following screenshots outline the basic server and network layout of the application. After 6.06 minutes, we are copied and running another version of CloudForms that is ready for testing. Leveraging Ravello for this type of workload provides a new opportunity for us to provide additional value to our customers. We have an opportunity to test and better acquaint ourselves with new software products (VMware vSphere™ 6 just came out!), deploy demo and lab environments in less time, and extend this feature to our customers so that they can use the lab environment at their leisure. And to deploy this lab, not one email was sent to request an IP address. We continue to look forward to using the Ravello platform for more advanced integration testing, prototyping, demonstration and learning activities.

*Note: CloudForms can also be extended to support Ravello as a Cloud Provider within its cloud management capabilities. Check out the blog on Ravello with CloudForms.

VMware product names, logos, brands, and other trademarks featured or referred to in the ravellosystems domain are the property of VMware.
VMware is not affiliated with Ravello Systems or any of Ravello System's employees or representatives. VMware does not sponsor or endorse the contents, materials, or processes discussed on the site.


How to build a 250 node VMware vSphere/ ESXi lab environment in AWS for testing

This blog was written with guidance from Scott Lowe around best practices of VMware data center design and automation. As most of you are aware, we recently announced the public beta of a new feature that allows users to run VMware ESXi™ on AWS or Google Cloud. Essentially, we have implemented Intel VT/AMD-V functionality in software in our hypervisor, HVX. That makes the underlying cloud look like real x86 hardware - complete with the silicon extensions required to run modern hypervisors like ESXi and KVM. In this blog, I am going to illustrate how to set up a large scale, 250-node VMware ESXi data center in AWS for less than $250/hr. We believe that this could be extremely useful for enterprises for upgrade testing of their VMware vSphere™ environment or for new product and feature testing. We recently held a webinar discussing how to build ESXi labs on AWS/Google Cloud. Enjoy the webcast and slides...

Prerequisites
If you do not already have an account in Ravello, please first open a trial account.
A VMware ESXi VM image in the Ravello Library (described in another document).
A VMware vCenter™ server VM image in the Ravello Library (described in another document).

The first ESXi host

Adding the first ESXi
Create an empty application (do not use any blueprint) in Ravello and give it a name.
Add the vCenter server VM from the library and publish the application (note - it takes a few minutes to publish, and a few more minutes for vCenter and all its services to come up). While this is being done, please continue to the next item (adding the ESXi machine).
Add one ESXi VM from the library, name it "firstesxi", and update the application (note - it takes a few minutes for this kind of update, and a few more minutes for the ESXi machine and all its services to come up). You will also need to change the hostname (only in the Ravello web UI - defined in the "General" tab -> "Hostnames") to "firstesxi". While this is being done, please continue to the next item (adding the NFS machine).
Use the VM named "NFS" from the Ravello library to add an NFS server to your application, and update your application in order to publish the NFS machine (you will need to select a key pair for that). This NFS machine contains some images that will later be used to create virtual machines in your cluster. Note - you can use your own private NFS image (there is another blog describing how to create such an NFS image). If you choose your own private NFS image, please adjust the later steps accordingly. After all machines are published and have finished booting, log into the vCenter server web interface (HTTPS on port 9443). The credentials are the same as when you saved your original vCenter into the Ravello library. Add a new datacenter. Add the ESXi host to the datacenter using its Ravello hostname "firstesxi". The credentials are the same as when you saved your original ESXi into the Ravello library. Note - you will see a yellow warning regarding your host because it does not have any datastore. When you add an NFS datastore later on in this document, this warning will disappear.

Configuring the ESXi host to use NFS
Select the ESXi machine.
Browse to the "Related Objects" tab.
Browse to the "Datastores" tab (see screenshot).
Click the "Create a new datastore" button.
When asked to select either the VMFS or NFS datastore type, select NFS and click Next.
In the "Server" edit-box, insert the hostname of the NFS server (should be "lio1").
In the "Folder" edit-box, insert "/nfs" (see screenshot) and click "Next".
Continue and finish the wizard. You can now see that the yellow warning about the missing datastore has disappeared.

Optional - Setting up a virtual distributed switch
In order to configure a virtual distributed switch, follow the instructions in this article. Note - each ESXi in your datacenter has 2 NICs: the first one is used for management, and the second is used for data. It is recommended to put the 2nd interface on a virtual distributed switch.
Note - you can use whatever networking settings you prefer, but please notice that some network configurations are not supported. If you prefer a networking configuration other than the one described above, Host Profiles - and specifically Host Profile Remediation - might fail or make your ESXi unreachable. Extract a host profile Right-click on the ESXi host. Select "All vCenter Actions" > "Host Profiles" > "Extract Host Profile...". Give the profile a name (for example "myprofile") and click "Finish". Creating a VM to run on your ESXi Deploy the VM Right-click on the ESXi host. Select "Deploy OVF Template...". Select your OVA file. Complete the wizard and wait a few minutes until the VM is deployed. Power on the VM. Using the vCenter web interface, open the VM’s console and log in with the user and password associated with the OVA template. Note - you might need to install a browser plugin for this. Pay attention to popups/blocked windows during the plugin installation. Configure the network of the VM Static IPs Configure the default gateway to be Ravello’s default gateway on the application's network (usually 10.0.0.2). Set the netmask/network so that it exactly matches Ravello’s network (for example 255.255.0.0). Set the static IP of the machine to a "high", unique IP that will not conflict with other Ravello VMs in the same application (for example 10.0.100.1). Set the DNS to a known public DNS server (like 8.8.8.8). To test the networking, open the console and ping some address on the internet, like www.google.com, from your VM. DHCP Not supported for now. Install VMware tools If VMware tools are not installed on the VM, it is important that you install them first. Create a cluster with many hosts and VMs This is the most important part of this document. We will describe here how to build a big datacenter using scripts, automation and multi-selection actions in the GUI. 
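The static-IP rules above (gateway, matching netmask, a "high" non-conflicting address) are easy to get wrong once you start deploying many guests. The helper below is a minimal sketch, not part of the Ravello tooling, that checks a proposed guest IP against the application network using Python's ipaddress module; the 10.0.0.0/16 network, the 10.0.0.2 gateway, and the size of the reserved low range are assumptions matching the examples above - adjust them for your application:

```python
import ipaddress

def check_guest_ip(ip, network="10.0.0.0/16", gateway="10.0.0.2",
                   reserved_hosts=200):
    """Validate a static IP for a nested guest VM.

    Assumptions (adjust for your application): the Ravello application
    network is 10.0.0.0/16, the default gateway is 10.0.0.2, and the
    first `reserved_hosts` addresses are left for Ravello-managed VMs,
    so guests should use "high" addresses such as 10.0.100.1.
    """
    net = ipaddress.ip_network(network)
    addr = ipaddress.ip_address(ip)
    if addr not in net:
        return False, "address is outside the application network"
    if addr == ipaddress.ip_address(gateway):
        return False, "address collides with the default gateway"
    # Treat low addresses as reserved for Ravello-managed VMs.
    if int(addr) - int(net.network_address) < reserved_hosts:
        return False, "address is too low; pick a high unique IP"
    return True, "ok"

print(check_guest_ip("10.0.100.1"))  # a high in-network address: accepted
print(check_guest_ip("10.0.0.2"))    # the gateway itself: rejected
```

Running a check like this before powering on each cloned guest saves a round of console debugging when a ping to www.google.com fails.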
You can repeat the following steps a few times to create several big clusters. Each cluster can contain tens of ESXi hosts and hundreds of VMs. Here are some nice screenshots of a datacenter with 4 clusters, 64 hosts per cluster and around 500 VMs in total. Create a new cluster Please give it a simple name, for example "ClusterA". Turn on the cluster’s DRS (by default - fully automated). Add several ESXi hosts Note: if your application includes multiple ESXi servers, please make sure you start them separately, one by one, and not all at the same time. First you need to know your application ID and your ESXi template VM ID. Your application ID can be found in the URL of the Canvas view of your Ravello application, for example: https://cloud.ravellosystems.com/#/apps/YOUR_APP_ID/canvas?isNew=true Your ESXi template ID can also be found in the URL when browsing to "Library" > "VMs": https://cloud.ravellosystems.com/#/library/vms/?vmIds=;YOUR_TEMPLATE_ID Then, download the publish_vms_from_template_into_existing_application.py script from Ravello’s GitHub (https://github.com/ravello/vmware-automation). Then, run the script to deploy several ESXi hosts. In this example I have created 16 ESXi machines named "esxiA1".."esxiA16": python publish_vms_from_template_into_existing_application.py -t 56232441 -u "ohad@ravellosystems.com" -a 56558267 -b esxiA -n 16 The script should print "Success" within a few seconds. It then takes ~5 minutes for the ESXi machines to be deployed. Add ESXi hosts to the cluster - using PowerCLI Install PowerCLI Installation instructions here. Connect to your vCenter Connect-VIServer -Server vcenter -Protocol https -User root -Password vmware (you can ignore any yellow warnings that appear) Where: vcenter indicates the hostname of your vCenter machine; root and vmware are the credentials for your vCenter machine. 
Add ESXi hosts to the cluster 1..16 | Foreach { Add-VMHost esxiA$_ -Location (Get-Cluster -Name "ClusterA")[0] -User root -Password esxpassword -Force -RunAsync } Where: 1..16 indicates that we want to add 16 hosts; esxiA$_ indicates that the host names to add are "esxiA1".."esxiA16"; "ClusterA" is the name of the cluster to which the hosts will be added; root and esxpassword are the credentials for your ESXi machines. Apply the host profile to the new cluster Right-click on the cluster and select "Attach Host Profile...". Select the "myprofile" profile and complete the wizard (the "Validation" step might take a while). Select all hosts in the cluster, right-click and choose "Enter Maintenance Mode" (confirm as needed). Again, select all hosts in the cluster, right-click and choose "All vCenter Actions" > "Host Profiles" > "Remediate...", then continue the wizard (the "Validation" step might take a while). Wait a few minutes until the host configuration tasks are completed. Again, select all hosts in the cluster, right-click and choose "Exit Maintenance Mode" (confirm as needed). Clone a VM several times - using PowerCLI Now that you have a nice cluster, it is time to deploy some VMs on it. In this document we will describe how to quickly clone a VM into the cluster several times (using a "linked clone"). Install PowerCLI Installation instructions here. Connect to your vCenter Connect-VIServer -Server vcenter -Protocol https -User root -Password vmware (you can ignore any yellow warnings that appear) Where: vcenter indicates the hostname of your vCenter machine; root and vmware are the credentials for your vCenter machine. 
Clone VMs using a linked clone
$sOriginVM="Ubuntu 14.04"
$sOriginVMSnapshotName="mastervm_linkedclone_snap"
$oVCenterFolder=(Get-VM $sOriginVM).Folder
$oSnapShot=New-Snapshot -VM $sOriginVM -Name $sOriginVMSnapshotName -Description "Snapshot for linked clones" -Memory -Quiesce
$oESXDatastore=Get-Datastore -Name "Datastore"
$oResourcePool=(Get-ResourcePool -Location (Get-Cluster "ClusterA"))
1..25 | Foreach {New-VM -Name UbuntuCloned$_ -VM $sOriginVM -Location $oVCenterFolder -Datastore $oESXDatastore -ResourcePool $oResourcePool -LinkedClone -ReferenceSnapshot $oSnapShot}
Where: 1..25 indicates that we want to clone 25 VMs; "Ubuntu 14.04" is the name of the original VM to clone; UbuntuCloned$_ indicates the names of the target cloned VMs, "UbuntuCloned1".."UbuntuCloned25"; "Datastore" is the name of the NFS datastore; "ClusterA" indicates that the target location is the cluster named "ClusterA". Then, using the vCenter web interface, select all the VMs and power them on. Set the VM order accordingly in the Ravello GUI The storage VM and the vCenter VM need to start first (before all ESXi machines) and shut down last (after all ESXi machines). Optional - Monitoring crucial VMs It is recommended that you install a monitoring tool to monitor the crucial machines in your datacenter, such as the NFS and vCenter servers. It helps a lot in understanding how the datacenter behaves and how far you can stretch its limits. I will demonstrate graphs with New Relic. Saving the application to a blueprint How to create a blueprint Known issues/limitations The ESXi network interfaces in Ravello have some problems working with VMXNet3. Please use only E1000, both for the VMs' network interfaces and for the ESXi network interfaces. VMware product names, logos, brands, and other trademarks featured or referred to in the ravellosystems domain are the property of VMware. VMware is not affiliated with Ravello Systems or any of Ravello Systems' employees or representatives. 
VMware does not sponsor or endorse the contents, materials, or processes discussed on the site.
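For readers wrapping the bulk-deployment steps above in their own automation, the sequential naming convention the scripts rely on ("esxiA1".."esxiA16", "UbuntuCloned1".."UbuntuCloned25") and the publish-script invocation can be generated programmatically. This is a hedged sketch; the helper function names are hypothetical, and the IDs and user mirror the example invocation above:

```python
import shlex

def sequential_names(prefix, count):
    """Return the names the scripts above expect: prefix1..prefixN."""
    return [f"{prefix}{i}" for i in range(1, count + 1)]

def publish_command(template_id, user, app_id, base_name, count):
    """Build the publish_vms_from_template_into_existing_application.py
    command line shown above (IDs and user are placeholders)."""
    return shlex.split(
        "python publish_vms_from_template_into_existing_application.py "
        f"-t {template_id} -u {user} -a {app_id} -b {base_name} -n {count}"
    )

print(sequential_names("esxiA", 3))   # ['esxiA1', 'esxiA2', 'esxiA3']
print(publish_command(56232441, "ohad@ravellosystems.com",
                      56558267, "esxiA", 16))
```

The same name generator can feed both the Ravello publish script (-b/-n flags) and the PowerCLI Add-VMHost loop, keeping the two sides of the automation consistent.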


Nested ESXi on AWS or Google Cloud with Ravello: Frequently Asked Questions

Q1: Why would I want to run VMware ESXi on AWS or Google cloud? A1: VMware technology partners, resellers and customers need infrastructure lab environments for development and testing, sales demos, PoCs, and training. The public cloud is ideal for these kinds of workloads. Now, with Ravello, the VMware ecosystem can run customized ESXi™ infrastructure labs in AWS or Google cloud on demand. Read more about specific nested ESXi use-cases here. Q2: Why can I not run ESXi on AWS or Google cloud natively? Why do I need Ravello? A2: VMware ESXi is designed to run on physical servers that have CPUs with Intel VT or AMD-V virtualization extensions. AWS and Google clouds offer VMs - not physical hardware - and these VMs do not have virtualization extensions. Hence it is impossible to run ESXi on AWS or Google cloud. Ravello’s HVX technology that runs on top of AWS and Google implements Intel VT/ AMD-V in software making the cloud look like real hardware capable of running ESXi. For more information, please read the technology overview about ESXi and nested virtualization. Q3: Can I run VMware VMs on AWS or Google with Ravello without needing ESXi? A3: Yes. Ravello HVX exposes VMware devices to the VMs running on top. Hence, enterprises can run their VMware VMs on AWS or Google cloud. For example, if an enterprise has a production SharePoint farm (application) running on VMware ESXi infrastructure in their data center, and they want to create a QA environment for the application in AWS, they can simply upload their VMs to Ravello and deploy their SharePoint farm on AWS for QA. In this case, they do not need ESXi. Now consider another example. An enterprise has VMware vSphere™ 5.5 infrastructure running in their data center, and they are preparing to upgrade to vSphere 6.0 for which they need ESXi infrastructure test environments to test their upgrade procedure. In this case, the enterprise can create a copy of their ESXi “infrastructure” in AWS using Ravello. 
Read more about specific nested ESXi use-cases here. Q4: What about ESXi licenses? A4: Ravello has a BYOL (Bring Your Own License) policy. Users are responsible for their own ESXi licenses. However, please note that if the use-case is around creating a copy of the application environment (the VMs themselves and the networking) in AWS or Google, then no ESXi is needed, and hence no ESXi licenses are needed. Contact us if you have any questions. Q5: What is the performance like if I run ESXi on Ravello? A5: When running a VM on ESXi on Ravello on AWS, this is the stack: VM - ESXi - Ravello HVX - Xen - x86 hardware. So the VM is sitting on 3 hypervisors. Most instructions are executed directly on the physical CPU, so CPU performance is good. However, IO does get impacted because of the multiple virtualization layers. Hence, while there is a performance overhead involved, it should more than suffice for most lab use-cases involving ESXi infrastructure (demos, training, PoCs, development and testing, etc.). Q6: How much does it cost to run ESXi on AWS with Ravello? A6: Ravello pricing is completely usage based. There are no up-front fees or per-user charges. Pricing starts at $0.14/hr per 2 vCPU/4 GB chunk. A typical lab setup involving 2 ESXi hosts (4 vCPUs/16 GB RAM each), 1 VMware vCenter™ appliance (2 vCPUs, 4 GB RAM) and 1 NFS or iSCSI shared storage appliance (2 vCPUs, 4 GB RAM) would cost $1.43/hr. Learn more on ESXi pricing. Q7: I need to run a large scale ESXi test lab with complex networking. Can I do that with Ravello? A7: Yes, definitely. Here is an example of a 250-node ESXi data center environment running on Ravello on AWS - with multiple subnets, VLANs etc. And here is another example of a 32-node ESXi VSAN cluster running on Ravello on AWS, including multicast and other advanced L2 networking features. Please contact us so that we can assist you in your deployment. Q8: I’m interested in trying this out. How do I get started? A8: It's easy to get started. 
Just sign up for a free trial here. You don’t need a credit card or existing AWS or Google cloud credentials. After activating your account, follow these steps to set up your environment: Step 1: How to install ESXi on AWS or Google cloud. Step 2: How to set up a vCenter appliance on AWS or Google cloud. Step 3: How to create an ESXi/vSphere cluster on AWS or Google cloud. If you have any questions, please don’t hesitate to reach us at support@ravellosystems.com.
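As a back-of-the-envelope check on Q6's pricing, a few lines of Python can estimate lab cost from VM sizes. This is a rough sketch assuming cost scales with the larger of ceil(vCPU/2) and ceil(GB/4) chunks at $0.14/hr each; the figure quoted in Q6 ($1.43/hr) differs slightly from what this linear model yields, so treat the helper as an approximation only:

```python
import math

CHUNK_PRICE = 0.14  # $/hr per 2 vCPU / 4 GB chunk (from Q6)

def chunks(vcpus, ram_gb):
    """Approximate billing chunks for one VM: the larger of the
    CPU-based and RAM-based chunk counts (assumed model)."""
    return max(math.ceil(vcpus / 2), math.ceil(ram_gb / 4))

def lab_cost_per_hour(vms):
    """vms: list of (vcpus, ram_gb) tuples."""
    return sum(chunks(c, r) for c, r in vms) * CHUNK_PRICE

# The lab from Q6: 2 ESXi hosts (4 vCPU / 16 GB each),
# 1 vCenter appliance (2 vCPU / 4 GB), 1 storage appliance (2 vCPU / 4 GB).
lab = [(4, 16), (4, 16), (2, 4), (2, 4)]
print(f"~${lab_cost_per_hour(lab):.2f}/hr")  # approximation; Q6 quotes $1.43/hr
```

The same function makes it easy to price out larger topologies, such as the 250-node datacenter from Q7, before publishing anything.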


Using Ravello Systems for Technical Training of Red Hat CloudForms

Author: Geert Jansen Geert is the product owner of CloudForms; his areas of expertise include tinkering with all new technologies and developing extensions and modules to contribute on GitHub. Red Hat CloudForms is Red Hat’s Cloud Management Platform (CMP) product. A CMP is a piece of software that provides high-level management capabilities on top of virtualization infrastructures, private clouds, and public clouds. Some examples of these management capabilities are self-service, chargeback/showback, orchestration, reporting and policy enforcement. CloudForms acts as a “manager of managers”. It connects to one or more supported virtualization or cloud management systems to provide management of those systems. These management connections are implemented by what we call “providers.” Our currently supported providers are VMware vSphere™, Red Hat Enterprise Virtualization, Red Hat Enterprise Linux OpenStack Platform, Amazon Web Services, and Microsoft System Center Virtual Machine Manager. This “manager of managers” nature of CloudForms makes it challenging for us to provide lab environments for technical training of our staff. I am a firm believer that for technical training there is no substitute for quality “keyboard time”, i.e. people getting their hands dirty with a working setup to educate themselves and try out new things. Installation of CloudForms itself is very easy (it’s a virtual appliance); however, in order to get a useful setup you need to connect it to at least one management system. And this is where it gets somewhat tricky. Setting up, for example, vSphere or OpenStack is not that difficult, but it takes time and, more importantly, it requires physical hardware. Both require low-level installs via an ISO image or PXE, layer-2 networking, and Intel/AMD hardware virtualization features. Normally these requirements are only available in a bare metal environment. Before Ravello we used centralized hardware labs for this, which was problematic. 
Due to the difficulties of automating and scheduling real hardware, and the limited amount of hardware we had, the lab was a shared lab. This means multiple technical people would access it at the same time. This led to performance issues and, worse, stability issues where one person would make a change that would impact others. Some people got frustrated by this and went with the “datacenter under your desk” approach. They would either scavenge old servers or expense new cheap ones, and put them under their desk. This, of course, is not very economical, both in terms of direct expenses and the time required for everybody to maintain their own lab. It also leads to excess noise and heat in our offices. You can imagine we were very happy when the Ravello team approached us and told us they were working on an implementation of Intel/AMD hardware virtualization that would support VMware ESXi™. Together with Ravello’s already available low-level installs, layer-2 networking, and nested virtualization support for KVM, this would potentially allow us to replace our hardware based lab with a cloud based one. The benefits are obvious: We can prepare a lab once (as a Blueprint) and then everybody else can just use it. No more wasted time with a lot of people maintaining their own labs. There are no more performance issues. Since Ravello is cloud based, there’s a (virtually) infinite supply of labs that we can start up, so that everybody can have their own. There are no more isolation issues. Each person gets their own lab, fully isolated from any other labs. The person has full rights and can make arbitrary changes. And if the lab breaks down, it can always be reset to a pristine state. The economics are much better. Labs are typically active only a fraction of the time, but there are correlated peak demands when e.g. new versions are released. This makes an on-demand solution much more efficient. 
For the last few months we beta tested the Ravello hardware virtualization feature. I’m glad to say that it has worked very well for us, and even performance is great. We did not do a formal performance test, but I cannot feel any difference between “simply nested” systems running as Ravello VMs and “doubly nested” VMs running on e.g. ESXi inside Ravello. Based on our successful testing, we now have an on-demand Ravello based lab with CloudForms, VMware vSphere and Red Hat Enterprise Virtualization/KVM that is available to all Red Hat staff. Access to the lab is provided by a permanently running CloudForms instance. We chose to go this route rather than giving direct access to the Ravello web UI, as this allows us more fine-grained control over usage and approvals. Using some simple integration code that calls out to the Ravello RESTful API, the lab can be ordered as a service catalog item. To access a lab, staff log on to this CloudForms instance with their Red Hat account. They then choose the “CloudForms 3.1 Functional Lab” service from the service catalog, select the expiration time (default: 3 months) and press “Submit”. This is shown in the screenshot below. After submission, the request typically gets auto-approved, and the lab is provisioned right away. Once the lab is ready, an email is sent with login details to the requesting user. The system will also create separate dynamic DNS entries for easy access. The default runtime of our lab is 12 hours. If the lab is not extended by the user before then, it will automatically stop (obviously all state is preserved and it can be started up again at any time). The limited runtime with a manual extension is a way to limit our cost. In addition to extending the runtime, full lab control is available to the user via a custom CloudForms menu. This is highlighted in the screenshot below, where you can see buttons for e.g. Start, Stop, Status and Delete. 
The screenshot below shows how the labs look in the Ravello web UI. As you can see, most of them are typically idle. Each lab contains: An infrastructure server that provides DHCP, identity management, iSCSI and NFS services for the other systems in the lab. A “bastion” host for external access. A CloudForms appliance. A RHEV system consisting of a manager (in self-hosted mode) and a second hypervisor. A vSphere system consisting of a VMware vCenter™ Server appliance and two ESXi nodes. This is shown in the screenshot below: Summary We’ve been using the hardware virtualization feature in Ravello for a few months now and are very happy with it. It solves a real problem for us, where we need to take care of technical training for a product with complex infrastructure requirements that would normally require physical hardware. With Ravello we can do this in the cloud instead. This gives us better performance, good isolation, and lower cost.
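The "simple integration code that calls out to the Ravello RESTful API" mentioned above is not shown in the post. As an illustration only, a lifecycle call from such integration code might look like the sketch below; the base URL, the /applications/{id}/{action} path, and the use of an Authorization header are assumptions modeled on common REST conventions, not documented Ravello API details, so check the actual API reference before relying on them:

```python
import urllib.request

BASE = "https://cloud.ravellosystems.com/api/v1"  # assumed base URL

def app_action_url(app_id, action):
    """Build the URL for a lifecycle action on an application.

    The /applications/{id}/{action} path is a guess modeled on common
    REST conventions, not taken from Ravello documentation.
    """
    if action not in ("start", "stop"):
        raise ValueError("unsupported action")
    return f"{BASE}/applications/{app_id}/{action}"

def start_request(app_id, auth_header):
    """Prepare (but do not send) a POST request for the start action."""
    req = urllib.request.Request(app_action_url(app_id, "start"),
                                 method="POST")
    req.add_header("Authorization", auth_header)
    return req

# Example: build the request for a hypothetical application ID;
# the credentials below are placeholders, not real ones.
req = start_request(56558267, "Basic dXNlcjpwYXNz")
print(req.full_url, req.get_method())
```

In a CloudForms automation method, a request like this would be sent when the user presses the custom Start/Stop buttons described above.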


How to create VMware ESXi 5.5 & 6.0 image on Ravello?

We at Ravello have been working on some really cool technology for the last couple of months. We have implemented the CPU virtualization extensions - VT-x for Intel and SVM for AMD - in our HVX hypervisor. These extensions allow running other hypervisors such as KVM or VMware ESXi™ on top of Ravello, in addition to running regular VMs. In this blog we are going to walk through installing and configuring ESXi on a public cloud - extremely useful for running ESXi enabled virtual labs. We will go over how to create your own VMware ESXi image and save it in the library, so you can easily add additional ESXi hosts to your Ravello application later. Broadly speaking, we will undertake the following steps: Download the ESXi ISO from VMware. Upload the ESXi ISO to Ravello. Install ESXi on Ravello. Configure ESXi to run on Ravello. Optional - DHCP special tweak. Optional - Save ESXi to the VM library. Prerequisite - a Ravello account. If you don’t already have one, you can open one here. We recently held a webinar discussing how to build ESXi labs on AWS/Google Cloud. Enjoy the webcast and slides... [video url="https://www.youtube.com/watch?v=h9byjFw5omQ"] [slideshare id=48986275&doc=20150602esxiwebinar-150604114506-lva1-app6891] 1. Download the ESXi ISO from VMware Download ESXi version 5.5 or 6.0. You will be required to login/register as a VMware user. The ISO file is ~300 MB and may take a few minutes to download. 2. Upload the ESXi ISO to Ravello Once the ESXi ISO is downloaded, the next step is to upload it to Ravello. Here are the instructions on how to upload an ISO image to Ravello. 3. Install ESXi on Ravello This step has two parts - a) Creating an empty ESXi application b) Installing ESXi on the empty ESXi application Creating an empty ESXi application 1. Create an application in Ravello and give it a name. Do not use a Blueprint. 2. In the application, add “Empty ESX” from the VM library. “Empty ESX” is a special machine that has its CPUID configured to enable nested virtualization in Ravello. 3. 
Change the image of the ‘cdrom’ to use the ESXi ISO uploaded earlier (Disks > Browse, select your ISO and save). 4. Publish the application and wait (~5 minutes) until the application is published. Installing ESXi on the empty ESXi application 1. Once the application is published, you will see a green play icon. 2. Next, click on ‘Console’ to get console access to the ESXi application. 3. Click “Next/Enter/Accept/Continue” to install ESXi. You will be prompted to set a password for root. The installation steps are captured here. 4. Once the installation completes, eject the ISO, then save and update the ESXi application. Please note it is extremely important to eject the ISO before the reboot. 4. Configuring ESXi to work on Ravello Note: if your application includes multiple ESXi servers, please make sure you start them separately, one by one, and not all at the same time. Enable SSH – From the Direct Console User Interface, press F2 to access the System Customization menu. Select Troubleshooting Options and press Enter. From the Troubleshooting Mode Options menu, select Enable SSH. Press Enter to enable the service. Next, go to Ravello’s UI and assign a “Public IP” to the network interface, check “Even without external services”, and create an SSH service on Ravello (screenshots below). Now click ‘Update’ in the Ravello UI to update the application. The next few steps are required to be able to add more than one ESXi host to the same cluster/data center in the VMware vCenter™ server later. Set unique MAC addresses – In the Ravello UI’s Network tab for the VM, ensure that “Auto MAC” is checked for both interfaces. On the ESXi host run ‘esxcli system settings advanced set -o /Net/FollowHardwareMac -i 1’. Enable nested on all ESX guests – This is important to be able to power on VMs running on ESXi, and avoids the need to configure each guest with the ‘vmx.allowNested’ flag. Run ‘vi /etc/vmware/config’, add the line ‘vmx.allowNested = "TRUE"’ and save. Ensure changes are saved – Run ‘/sbin/auto-backup.sh’. Ignore any warnings. Un-mount the local datastore – Log in over SSH using the username root and the password you chose when you installed ESXi. Run ‘esxcli storage filesystem unmount -l datastore1’. Get the local datastore device name and partition by running ‘esxcli storage vmfs extent list’. Use the local datastore device name (mpx.vmhba1:C0:T0:L0 in my case) and partition (3 in my case) from the previous step to run the following, after making the appropriate adjustments: ‘partedUtil delete "/vmfs/devices/disks/mpx.vmhba1:C0:T0:L0" 3’. Delete the ESX UUID – Run ‘vi /etc/vmware/esx.conf’. Go to the line in the file where “/system/uuid” is defined. Delete the line and save the file. Ensure changes are saved – Run ‘/sbin/auto-backup.sh’. Ignore any warnings. Disable SSH – From the Direct Console User Interface, press F2 to access the System Customization menu. Select Troubleshooting Options and press Enter. From the Troubleshooting Mode Options menu, select Disable SSH. Press Enter to disable the service. 5. Optional: DHCP special tweak If you will use your ESXi with a DHCP configuration and you will mount an NFS datastore for the ESXi via the datastore's hostname (rather than its IP), you must perform the following steps. When ESXi starts, it expects to find the IP previously leased by the DHCP client. Since the Ravello DHCP server is restarted every time the app is created, the restarted DHCP server doesn’t ‘remember’ the previously leased IP, and the ESXi client needs to ask for a new one. While ESXi asks for a new IP from the DHCP server, ESXi may start some services (like NFS) that don’t find an IP, and network activity at this time may fail. To work around this: Enable SSH if disabled (as written above). Run ‘vi /etc/rc.local.d/local.sh’, add the following lines and save the file – /bin/kill $(cat /var/run/crond.pid) /bin/echo '* * * * * rm -rf /etc/dhclient*.leases' >> /var/spool/cron/crontabs/root /bin/crond Then run ‘rm -rf /etc/dhclient*.leases’. Disable SSH (as written above). 6. Optional: Saving ESXi to the library Once ESXi is installed and configured, it is recommended to save it to Ravello’s library for future use. Instructions on saving the ESXi VM to Ravello’s library. Known Limitations There are known issues with VMXNet3 as the network interface for ESXi. Please use e1000. Disk controller support on ESXi is currently limited to “LSI Logic Parallel".
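The per-host SSH steps above (FollowHardwareMac, vmx.allowNested, auto-backup.sh) become tedious beyond a couple of hosts. As an illustration only, they could be batched over SSH; the sketch below assumes the third-party paramiko library, replaces the interactive vi edit with an equivalent echo append, and uses the placeholder credentials from this post - it is not an official Ravello or VMware tool:

```python
# Commands taken from the configuration steps above; the echo append
# stands in for the interactive 'vi /etc/vmware/config' edit.
PREP_COMMANDS = [
    "esxcli system settings advanced set -o /Net/FollowHardwareMac -i 1",
    "echo 'vmx.allowNested = \"TRUE\"' >> /etc/vmware/config",
    "/sbin/auto-backup.sh",
]

def run_on_hosts(hostnames, user="root", password="esxpassword"):
    """Run PREP_COMMANDS on each ESXi host over SSH.

    Requires the third-party paramiko library; the credentials here are
    the placeholders used elsewhere in these posts.
    """
    import paramiko  # pip install paramiko
    for host in hostnames:
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect(host, username=user, password=password)
        for cmd in PREP_COMMANDS:
            _, stdout, _ = client.exec_command(cmd)
            stdout.channel.recv_exit_status()  # wait for completion
        client.close()

# Same sequential naming convention as the 250-node cluster walkthrough.
hosts = [f"esxiA{i}" for i in range(1, 17)]
print(len(hosts), "hosts to prepare")
```

Running the unmount/UUID steps this way too is possible, but since those involve host-specific device names (mpx.vmhba1:C0:T0:L0 above), they are safer done per host.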


Setting up an Arista vEOS switch and VM Tracer lab with VMware ESXi hosts on AWS and Google Cloud for customer trials and sales demos

VM Tracer is a capability that is natively integrated into the Arista Extensible Operating System (EOS), works with the entire Arista 7000 Family of Data Center Switches, and links the Arista switches to VMware vCenter™. It provides unprecedented visibility into the virtualized environment, seamless integration into a familiar industry-standard CLI, and automatic configuration of tasks and policy by integrating natively with VMware vCenter. It is coupled with vEOS as a management plane, and enables visibility into the vSwitch and the VM farm, plus policy control that is natively integrated into VMware vCenter. VM Tracer works with VMware vSphere™ 4.0 and higher. It utilizes the published vCenter APIs and works across all editions of vSphere. In order to set up an environment where I can demo this capability, I need two VMware ESXi™ hosts with vCenter running on top of each ESXi host, along with a couple of guest VMs and our vEOS virtual-appliance-based switching backplane. This lab environment is very resource intensive. Up until now, I ran all of these components on my Mac so I could take it to customer sites for sales demos. The issues I ran into were the limited capacity available to allocate to each ESXi host and the lack of an easy way to leave the demo/lab environment with the prospect so they could play with it. I could put all the VM images on Dropbox or a USB stick, provide it to them, and have them reconfigure everything on their test ESXi hosts. But this is a big challenge, because it slows down the sales cycle. Demo lab environment architecture We were working with the team at Ravello Systems for our standard vEOS switch and server based lab environments, and I was approached to participate in the ESXi beta program for my VM Tracer lab environment. These are the steps I followed: I uploaded 2 pre-configured existing ESXi host VM images from my workstation to my account in Ravello. These images had all of my pre-configured guest VMs and the vCenter VM. 
I worked with the technical team at Ravello to update the appropriate settings on these ESXi host VMs to allow for nested virtualization. Then, I had to configure networking for each of the ESXi hosts in the Ravello interface. I also had to configure public access for the vCenter VM running on the ESXi host, so one could access the vCenter web app from outside and do all the settings. We ran into some issues with this, since this vCenter was a guest VM on top of an ESXi VM. After a few days, the Ravello team was able to figure out a solution to enable this. Finally, I added the vEOS leaf-spine switch network components, so we could demonstrate VM auto-discovery, adaptive segmentation and other cool capabilities of VM Tracer. One of my colleagues had already built an Arista leaf-spine topology with our vEOS virtual appliance on Ravello and saved it as a blueprint. He included that topology in this ESXi host environment. VM Tracer auto-discovery VM Tracer environment in Ravello Now, for my sales demo, I just have to start a new application from this blueprint, publish it on AWS or Google, and I can show the demo to my customers. A lot of my prospects want to continue to play with the lab after the meeting, so they can familiarize themselves with it and also show it to other folks within their organization. I plan to provide this blueprint to my customers for trial use. They will create an account with Ravello and can then start a lab environment from this blueprint. This eliminates the need to have them spend time searching for infrastructure resources to set up the lab and gets them up and running very quickly, so they can spend more time working with our products. I see this as a great enabler for customer trials with live lab environments during our sales process.


How to Create VMware Data Center on Ravello

We at Ravello have been working on some really cool technology for the last couple of months. We have implemented the CPU virtualization extensions - VT-x for Intel and SVM for AMD - in our HVX hypervisor. These extensions allow running other hypervisors such as KVM or VMware ESXi™ on top of Ravello, in addition to running regular VMs. In this blog we are going to walk through setting up a full VMware datacenter in a public cloud - extremely useful for running ESXi enabled virtual labs. We will go over: Creating a data center. Configuring ESXi hosts to use NFS. Setting up a virtual distributed switch (optional). Creating VMs to run on the VMware cluster. Setting the VMs' start and shutdown order. Saving an application blueprint. Prerequisites A Ravello account. If you don’t already have one, start your free trial. A VMware ESXi VM image in the Ravello Library. A VMware vCenter Server VM image in the Ravello Library. We recently held a webinar discussing how to build ESXi labs on AWS/Google Cloud. Enjoy the webcast and slides... [video url="https://www.youtube.com/watch?v=h9byjFw5omQ"] [slideshare id=48986275&doc=20150602esxiwebinar-150604114506-lva1-app6891] 1. Creating a data center 1. Create an empty application in Ravello, and give it a name. Do not use a blueprint. 2. Add the vCenter Server from the VM library saved earlier, and publish the application. Please note that it takes a few minutes to publish and a few more for vCenter to be fully operational. 3. Add one or more ESXi VMs from the VM library. Please note that each of the hostnames (defined in General > Hostnames) needs to be unique. Please change the hostnames accordingly if you add more than one ESXi machine. Once done, click ‘Update’ on the application canvas. Please note that it takes a few minutes to update and a few more for ESXi to be operational. 4. Check out how to create NFS shared storage for ESXi. Add the NFS VM created by following those instructions to the application, and click ‘Update’ on your application canvas to publish. 
Note that you need to select a key pair for the NFS VM. The NFS VM contains some images for creating virtual machines in your cluster.

5. Log in to the vCenter Server web interface on port 9443 (https://publicIP:9443).
6. Add a new data center.
7. Add a new cluster with default settings.
8. Add the ESXi hosts to the data center using the hostnames defined in Ravello (General > Hostnames in the Ravello UI).

2. Configuring ESXi hosts to use NFS

1. Select an ESXi machine and browse to the ‘Related Objects’ tab. Click on the ‘Datastores’ tab.
2. Click the icon on the upper left to create a new datastore. When asked to choose between a VMFS or NFS datastore, select NFS and click Next.
3. In the “Server” edit-box enter the hostname of the NFS server (should be “lio1”). In the “Folder” edit-box enter “/nfs” and click “Next”.
4. Continue and finish the wizard. You have added the datastore for the first ESXi host. Repeat the steps to add the datastore for the other ESXi hosts.

3. Optional step - setting up a virtual distributed switch

Please follow these instructions to configure the virtual distributed switch. Each ESXi host in the cluster has 2 NICs - the first is used for management, the second for data. It is recommended to put the second interface on the virtual distributed switch.

4. Creating VMs to run on ESXi

The most efficient way to deploy VMs in vCenter is to deploy an OVA template. You can download such templates of free OSes (Ubuntu, Fedora, etc.) from the internet. To deploy the OVA, right-click on the target ESXi host, select ‘Deploy OVF template’, and select the OVA downloaded to your local machine earlier. Go through the wizard and wait a few minutes until the VM is deployed, then power on the VM. Using vCenter’s web interface, open the VM’s console and log in with the user and password associated with the OVA template (the user/password can be found in the ‘Notes’ section of the ‘Summary’ tab). Note that you may need to install a browser plugin. 
Pay attention to popup windows during the plugin installation. The next step is to configure networking for the VM. Note that only static IPs are currently supported. You will need to:

Configure the default gateway to mirror the default gateway shown in the Ravello UI under the network tab for the application (usually 10.0.0.2).
Set the netmask/network to mirror Ravello’s network (e.g. 255.255.0.0).
Set the static IP of the machine to a “high” unique IP so that it does not conflict with other Ravello VMs in the same application (e.g. 10.0.100.1).
Set the DNS to a known public DNS server (like 8.8.8.8).

Test the networking by opening the console and pinging your favorite internet site (e.g. www.google.com). If VMware tools are not installed, please install VMware tools on your VM.

5. Setting the start and shutdown order for the VMs

Next, assign the right start and shutdown order for the VMs in the Ravello UI (Settings > Startup and Shutdown Order). The storage VM and the vCenter VM need to start first (before all ESXi machines) and shut down last (after all ESXi machines).

6. Saving the application blueprint

Follow the steps listed here to save the application blueprint.

Known limitations

There are known issues with VMXNet3 as the network interface for ESXi. Please use e1000.

VMware product names, logos, brands, and other trademarks featured or referred to in the ravellosystems domain are the property of VMware. VMware is not affiliated with Ravello Systems or any of Ravello Systems' employees or representatives. VMware does not sponsor or endorse the contents, materials, or processes discussed on the site.
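When picking the static IP and netmask for a nested VM as described above, it helps to confirm that the chosen address actually lands in the same subnet as the Ravello gateway. Here is a small sketch in plain POSIX shell; the addresses are the post's examples (gateway 10.0.0.2, netmask 255.255.0.0, candidate guest IP 10.0.100.1):

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

# True when both addresses share the same network under the given mask.
in_same_subnet() {
  local a b m
  a=$(ip_to_int "$1")
  b=$(ip_to_int "$2")
  m=$(ip_to_int "$3")
  [ $(( a & m )) -eq $(( b & m )) ]
}

# Candidate guest IP vs. Ravello gateway under the application's netmask.
in_same_subnet 10.0.100.1 10.0.0.2 255.255.0.0 && echo "same subnet" || echo "different subnet"
# → same subnet
```

If the check prints "different subnet", the guest would not be able to reach the 10.0.0.2 gateway with that address/mask combination.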


Instructions for VMware ESXi Guest VM external connectivity

Please follow these instructions to enable ingress/egress connectivity to VMs running on top of VMware ESXi™ in Ravello. The VMs must be defined with a static IP configuration, since the Ravello built-in DHCP server won’t work with nested VMs.

1. Add a second NIC to an ESXi host VM in the Ravello UI. The second NIC will be used as an uplink for a separate vSwitch on ESXi; nested VMs will be connected to this vSwitch.
2. In the Ravello UI, define a subnet for the nested VMs. For example, to use subnet 20.0.0.x/24, put the following in the newly created NIC form: IP address: 20.0.0.1, Subnet: 255.255.255.0, Gateway: 20.0.0.254. The IP address 20.0.0.1 can be used for the first nested guest VM running on the ESXi host.
3. For each additional VM, press Advanced/Add and enter a different IP address from the same subnet with the identical gateway address as in step 2.
4. If you wish to provide ingress connectivity to a specific TCP/UDP port, go to the Services tab and click Add Supplied Service. Then fill in a port number and select the designated address of the VM (20.0.0.1 for the example in step 2).
5. In ESXi, create a new vSwitch of type “Virtual machine port group” and bind it to the second physical NIC created in step 1. VMs that need external connectivity must be wired to this vSwitch.
6. In a running nested VM, turn off the DHCP client and use a static IP configuration instead, with the IP address and router from steps 2 and 3: ifconfig eth0 20.0.0.1 netmask 255.255.255.0, then route add default gw 20.0.0.254.
7. Make sure you have egress connectivity by pinging an internet address: ping 8.8.8.8. After that you can test ingress connectivity.

VMware product names, logos, brands, and other trademarks featured or referred to in the ravellosystems domain are the property of VMware. VMware is not affiliated with Ravello Systems or any of Ravello Systems' employees or representatives. VMware does not sponsor or endorse the contents, materials, or processes discussed on the site.
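On guests where the legacy net-tools commands (ifconfig/route) are unavailable, the same static configuration from step 6 can be done with iproute2. A sketch, assuming the guest interface is named eth0 and using the example addresses above:

```shell
# iproute2 equivalents of the ifconfig/route commands in step 6
# (eth0 and the 20.0.0.x addresses are the post's examples; run as root).
ip addr add 20.0.0.1/24 dev eth0
ip link set eth0 up
ip route add default via 20.0.0.254

# Optional: point name resolution at a public resolver.
echo "nameserver 8.8.8.8" > /etc/resolv.conf

# Verify egress connectivity as in step 7.
ping -c 3 8.8.8.8
```

These settings are not persistent; to survive reboots they must also be written to the guest OS's network configuration files.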


How to run Fortinet FortiGate in AWS or Google cloud with Layer 2 networking for demos, PoCs, training and testing

Fortinet has a growing list of technology alliances and a prolific ecosystem of resellers, technology partners and customers. This ecosystem needs complete, fully featured Fortinet FortiGate environments for demos, PoCs and testing. Public clouds - AWS or Google - are ideal for these transient workloads, but they don’t support Layer 2 networking - multicast/broadcast, VMACs, Gratuitous ARP and VLANs don’t work - making it difficult to create representative environments using the FortiGate AMI (Amazon Machine Image). Ravello's nested virtualization and overlay networking solves this problem by running the FortiGate KVM or VMware appliance with Layer 2 networking on AWS/Google.

Looking to demo/PoC FortiGate? Just use FortiGate VM instead!

Advanced capabilities such as unified threat management, NG firewall, IPS and NAT make FortiGate a popular security appliance. It is used by enterprises across the globe to secure branch offices, headquarters and data centers. However, before deploying, enterprises typically like to see a demo and run PoCs - needing a fully featured FortiGate environment. FortiGate VM - FortiGate’s virtualized version - is a great alternative to let customers ‘test-drive’ the FortiGate without shipping hardware. Fortinet sales engineers, resellers and technology partners can use FortiGate VM to demo the power of the platform and run PoCs with ease, and Fortinet trainers can use it to spin up training environments.

Transient workloads for demos, PoC, training

Companies have explored provisioning their data-centers to run these transient workloads for demo, PoC, training, upgrade and development test environments - and have experienced sticker shock - it is expensive! Further, it takes weeks to months to procure and provision the hardware and get the environment running, and there are opportunity costs when the environment is not being used. 
Public clouds such as AWS and Google provide the flexibility to move to a usage-based pricing model and avoid these opportunity costs. However, FortiGate’s AMI (Amazon Machine Image) that natively runs on Amazon is held back by public cloud limitations that don’t allow it to support IPv6 and Layer 2 networking (broadcast, multicast, VLANs, VMACs and Gratuitous ARP won’t work!), and prevent it from creating representative copies of production data center environments.

Low cost & technological flexibility - can I have it all?

A nested virtualization platform with a software defined networking overlay - such as Ravello - brings together the financial benefits of moving to the cloud while avoiding the technological limitations. This allows organizations to recreate an exact replica of their complex data center environment on Google cloud and AWS - running the same version and configuration of the Fortinet FortiGate VMware or KVM virtual appliance as they do in their data center. The platform gives the ability to snapshot or ‘blueprint’ a multi-VM application including FortiGate VM complete with complex networking, and spin up as many copies as needed at the click of a button or through a REST API call. 
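The blueprint-and-REST-API workflow mentioned above can be scripted. The sketch below is illustrative only: the endpoint paths, JSON field names, credentials and the blueprint ID are assumptions in the style of a v1 REST API, not taken verbatim from Ravello's API documentation, which should be consulted for the actual contract.

```shell
# Hypothetical sketch: create an application from a saved blueprint,
# then publish it to a cloud. All paths, fields and IDs are illustrative.
RAVELLO_API="https://cloud.ravellosystems.com/api/v1"   # assumed base URL

# Create an application from blueprint 12345 (illustrative ID).
curl -s -u 'user@example.com:password' \
  -H 'Content-Type: application/json' \
  -d '{"name": "fortigate-demo", "baseBlueprintId": 12345}' \
  "$RAVELLO_API/applications"

# Publish the new application (APP_ID from the previous response).
curl -s -u 'user@example.com:password' -X POST \
  -H 'Content-Type: application/json' \
  -d '{"preferredCloud": "AMAZON", "optimizationLevel": "COST_OPTIMIZED"}' \
  "$RAVELLO_API/applications/APP_ID/publish"
```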
Here is a comprehensive comparison of the benefits of running FortiGate VM on AWS/Google using Ravello, in the DC, and natively as an AMI on AWS/Google (columns in that order):

Usage based costs: ✓ | ✕ | ✓
Layer 2 networking support: ✓ | ✓ | ✕
IPv6 support: ✓ | ✓ | ✕
High fidelity copy of DC environment: ✓ | ✓ | ✕
Zero day deployment (no migration): ✓ | ✓ | ✕
Unlimited capacity: ✓ | ✕ | ✓
One click creation of replica environments from snapshots or blueprints: ✓ | ✕ | ✕
Automation through REST APIs: ✓ | ✕ | ✓
Version control application infrastructure: ✓ | ✕ | ✕
Share blueprints with others: ✓ | ✕ | ✕
Better end-user experience through global deployment: ✓ | ✕ | ✓
Lack of exposure to transient infrastructure issues: ✓ | ✕ | ✕

Data Center Setup

Eager to reap the benefits, I decided to move my data center FortiGate VM deployment to Ravello. The subsequent sections chronicle the steps I undertook. My data center Fortinet FortiGate contains two interfaces, one each connected to the external and internal networks. The external interface is connected to the internet. Three hosts sit behind the firewall on the internal network: two Ubuntu Linux machines and a Windows 2012 machine (see below). UTM, firewall, NAT and web-filtering are all enabled.

Recreating the high-fidelity environment in Ravello

Re-creating this environment in Ravello was a simple 3-step process:

1. Upload the 4 VMs (the FortiGate VM virtual appliance, 2 Ubuntu Linux VMs, and 1 Windows 2012 VM) to Ravello
2. Verify and adjust settings where needed
3. Publish the application on the cloud of my choice - Google or AWS

Uploading the VMs

I used the Ravello VM uploader to upload my multi-VM environment.

1. The Ravello VM uploader gave me multiple options, ranging from directly uploading my multi-VM environment from VMware vSphere™/VMware vCenter™ to uploading OVFs, VMDKs, QCOWs or ISOs individually. Siding with their recommendation, I chose to directly upload from vSphere.
2. 
After entering my vSphere credentials on the next screen, the upload process began. I was able to track the VM upload progress from Ravello’s user interface.

Verifying settings

1. Verification started by asking for a VM name for the Fortinet FortiGate.
2. Clicking ‘Next’, I could choose the resources (VCPUs and memory) that I wanted my FortiGate to run on. Since VM00 runs on 1 VCPU and 1GB RAM, I set it as such.
3. Clicking ‘Next’, I was taken to the Disk tab. Since FortiGate KVM requires 30GB of disk space and uses a VirtIO controller, I selected those on this screen.
4. Next I entered the static IPs and netmasks for each of the FortiGate interfaces (internal and external), mirroring what I had in my data center. Here I also assigned a public IP to my external interface to be able to access the management UI through the internet.
5. Clicking ‘Next’, I was taken to the Services tab. Since access to the FortiGate UI is through HTTP/HTTPS, I enabled these “Services” to open ports for external access.
6. I went through steps 1-5 for my other VMs (the Linux and Windows hosts), ending up with a total of 4 VMs on my application canvas.

Publishing the environment to Google Cloud

1. With my application canvas complete, I clicked ‘Publish’ to run it in cost-optimized mode. Had I chosen performance-optimized, I would have been presented with a choice of AWS or Google Cloud, and corresponding regions to publish on. My environment took roughly 5 minutes to come alive.
2. Clicking on the networking tab, one can see this closely mirrors my data center setup.
3. Once the FortiGate was up, I pointed my web browser at the public IP of the FortiGate and was presented with FortiGate’s management UI. 
Verifying that the Fortinet FortiGate VMware appliance works

To verify that the FortiGate was working as expected, just for kicks I created a web-filtering rule that blocked web browsing to news.google.com. As you can see from the screenshot below, when my Windows LAN host (192.168.0.4) tries to access news.google.com, it gets blocked by the FortiGate VM, verifying that FortiGate VM is working as expected.

Conclusion

Ravello’s nested virtualization and overlay networking provides a straightforward, easy way to run Fortinet FortiGate VMware or KVM appliance demos, PoCs, training and testing using AWS and Google cloud. Just sign up for a free Ravello trial, and drop us a line – we can share our FortiGate config, and also help you get your Fortinet FortiGate VM appliance running ‘as-is’ in Ravello in no time.
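A web-filtering rule like the one above can also be spot-checked from a LAN host's shell rather than a browser. A sketch, assuming the host sits behind the FortiGate as its default gateway (news.google.com is the filtered site from the post; the second URL is just any unfiltered site):

```shell
# Run from a host behind the FortiGate. The filtered site should fail at
# the HTTP level (block page or connection reset), while an unfiltered
# site should still respond normally.
curl -sI http://news.google.com
curl -sI http://www.example.com
```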


Putting Puppet to Use (3 of 3 in Puppet series)

Author: Michael J. Clarkson Jr. President at Flakjacket Inc., Michael is Red Hat Certified Architect Level II (E,I,X,VA,SA-OSP,DS,A), Cloudera Certified Administrator Apache Hadoop

The last couple of blogs introduced Puppet, including basic deployment and how the language works. We could spend months of blogs continuing that, but the team at Puppet Labs has done an excellent job of it already. For those who want to learn in a classroom, they even offer training. Today we put a bow on this three-part series and bring it all together. We answer the questions: Do I need Puppet? Which type of Puppet? How do I plan my deployment? Where do I test my plan? You can start your Ravello Puppet Smart Lab free trial by clicking the button below: Request Puppet lab blueprint trial

First, let’s review what Puppet does. Puppet manages the state of your systems. It can manage most POSIX-based operating systems, including all flavors of Linux, most types of UNIX, and most operating systems that support Ruby. Because it uses Ruby, it can also manage the state of things in Windows environments. By defining in Puppet manifests what we want the final state of things to be, we can control from one centralized Puppet master the state of any node running the Puppet agent and connected to the master. At regular intervals, 30 minutes by default on Linux, agents phone home to the master and compare their current state to the manifests assigned to that agent. If all is well, nothing changes. If there are differences, it fixes them. You can use this to ensure packages are installed, updated, and configured; users are created with the proper group memberships; and files exist and are in a particular state. When someone changes the state outside of Puppet, it is reverted to the approved status at the next check-in. This works across one node or thousands with relatively low overhead. 
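The check-in cycle described above can also be driven by hand while testing, rather than waiting for the next interval. A sketch, assuming a standard open source Puppet agent install:

```shell
# Trigger an immediate check-in against the master; --noop reports what
# would change without actually changing anything.
puppet agent --test --noop

# The default 30-minute interval lives in puppet.conf on the agent, e.g.:
#   [agent]
#   runinterval = 1800   # seconds
```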
Further, with the thriving community contributing to Puppet Forge, which includes contributors from most of the major projects, you won’t be left to create your own solutions from scratch. There are Puppet modules written for everything from basic file management to massive OpenStack deployment.

Do I need Puppet?

This question is an easy one. Do you manage more than one system? Do you want to ensure the state of things is properly maintained and vetted, even when you leave your users to their own devices over a long weekend? Do you want the support of a thriving community whose contributed modules keep you from having to reinvent the wheel? Congratulations. You need Puppet. Of course there are other answers to the problems above, like Chef, but Puppet currently has the biggest user base, and Red Hat has adopted it as a key component of Satellite Server 6, so it won’t be going anywhere for a long time.

Which type of Puppet?

There is the free and fully open source Puppet, which we talked about two weeks ago, and for many that is a great solution. But what if you need support? Is there an easier-to-deploy version of Puppet with industry-leading support and a powerful user interface? The answer to that question is Puppet Enterprise. There is a hot-off-the-press new version with details here. Available from the team at Puppet Labs, there is even a free trial good for up to ten nodes that you can use to learn it, test it, and decide if it is right for you. Puppet Enterprise isn’t free, but if you need the peace of mind a support contract gives you and an easier interface to Puppet, you can’t go wrong with Puppet Enterprise.

How do I plan my deployment?

This one takes thought. As the old adage says, “If you fail to plan, you plan to fail.” This is where you take into account your existing environment, making note of all of the applications, users, and files you wish to control. 
You can use the facter command to help describe those things in the Puppet DSL and make your manifest creation easier. From there you can decide whether to go with the full open source version and forge ahead with only community support, or whether investing in Puppet Enterprise provides a better ROI.

Where do I test my plan?

The next phase involves creating a test environment in microcosm which replicates at least a part of your production environment, sufficient to test the design you planned out. This is where Ravello Systems comes in. By giving you the technology of nested virtualization for AWS and Google cloud to replicate your existing DC environment in the public cloud, right down to the switches, routers, and existing server/system images (VMware or KVM), you can build a realistic test environment for proof of concept and testing without the capital expenditure of buying new hardware. You can select your environment for performance or cost, and Ravello selects between Amazon EC2 and Google Cloud based on your preference. Pay for what you use; take it down when you are done.

Bonus

When you sign up for a Ravello account, ask your support representative for a copy of my Puppet blueprint. In it I have a copy of the Puppet training VM, a sample Puppet deployment similar to the one from the blog entry two weeks ago, and a Puppet Enterprise deployment, all free for you to use to kick the tires. Coming soon will also be a new video entry in which I take you through the blueprint on my new YouTube channel. Please subscribe to stay up to date. Enjoy using Puppet and as always, happy computing! Join us next time when we delve into OpenStack deployment on Ravello. 
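Facter, mentioned above, is easy to explore from the command line before writing any manifests. A sketch using flat fact names from the Puppet 3/Facter 2 era this series covers:

```shell
# Each command prints one fact the agent would report to the master.
# The same names are available as variables in manifests
# (e.g. $operatingsystem, as used in the ntp class from part 2).
facter operatingsystem
facter osfamily
facter ipaddress
facter memorysize
```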
Other posts in the Linux Smart Lab in the Cloud series:
How to use AWS or Google cloud to learn Red Hat data center deployments
Building a cloud ready linux image locally using KVM
Ramping up for RHEL7/CentOS7
Linux security in the cloud
User Management with FreeIPA Server
Big Data Smart Labs – Hadoop Deployment lab for User Trial, POC on AWS or Google Cloud using Ravello
Puppet Smart Lab on AWS/Google Cloud – For testing and training (1 of 3 in Puppet series)
Puppet Domain Specific Language Overview (2 of 3 in Puppet series)
OpenStack Juno multi compute node Lab – Ready to use environment on AWS and Google Cloud


Puppet Domain Specific Language Overview (2 of 3 in Puppet series)

Author: Michael J. Clarkson Jr. President at Flakjacket Inc., Michael is Red Hat Certified Architect Level II (E,I,X,VA,SA-OSP,DS,A), Cloudera Certified Administrator Apache Hadoop

Last week we looked at how to deploy a Puppet Master/Agent lab environment. Before we use that environment, we have to learn to speak its language. This week we cover the basics of writing a simple manifest in the Puppet DSL. You can start your Ravello Puppet Smart Lab free trial by clicking the button below: Request Puppet lab blueprint trial

Intro to the Puppet Language

Puppet itself is written in Ruby and Puppet manifests can be extended with Ruby, so if you are familiar with Ruby or similar languages, the Puppet DSL won’t be a huge leap for you. At the heart of the Puppet language is the declaration of resources and groups of resources called classes. A resource could describe the final state of a particular file or package. A class, on the other hand, could describe the sum of all of the resources needed to configure an entire application, such as the final state of the daemons, config files, packages, and maintenance tasks. We can then combine smaller classes into larger ones.

Table 1.1
LAMP Class
  Linux Class: package resources, config resources, daemon resources
  Apache Class: package resources, config resources, daemon resources
  MySQL Class: package resources, config resources, daemon resources
  PHP Class: package resources, config resources, daemon resources

In table 1.1 we see the concept a bit more clearly. The LAMP class contains the Linux, Apache, MySQL, and PHP classes. Each of those classes contains resources which describe the final state of each service. By deploying the LAMP class, we configure all of the packages, configs, and daemons needed to run an entire LAMP stack in one shot. Because of Puppet’s declarative nature, we don’t need to step it through getting from the current state, whatever that is, to the final state. We declare how the final state of things should look and Puppet does whatever is needed to get from here to there. 
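This declarative, idempotent behavior can be seen directly with puppet apply: running the same one-line manifest twice leaves the system unchanged on the second run. A sketch, assuming a local Puppet install (the /tmp/demo path is just an illustration):

```shell
# First run creates /tmp/demo; the second run reports no changes,
# because the declared state already matches reality.
puppet apply -e "file { '/tmp/demo': ensure => file }"
puppet apply -e "file { '/tmp/demo': ensure => file }"
```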
From this design comes one of Puppet’s greatest strengths and potential weaknesses. Resources in a manifest are considered independent by default. If one resource relies on another, that must be explicitly declared. For instance, you wouldn’t want to start the Apache service until after the resource that installs it and the one that configures it complete. The exception to this unordered madness is variables. Those must always be assigned values before they are used.

Resources

Let’s begin our exploration of the Puppet language with its most basic component, the resource. A resource contains a type, a title, and a set of attributes. The syntax looks like this:

type { 'title':
  attribute => value,
}

For instance:

file { '/etc/issue':
  ensure  => file,
  owner   => 'root',
  group   => 'root',
  mode    => '0644',
  content => "This system for authorized use only\nDon't touch without permission\n",
}

In the above resource declaration we ensure the file /etc/issue exists, is owned by user and group root, has a 0644 permission set, and contains the following text:

This system for authorized use only
Don't touch without permission

We only declared the final state. As such, Puppet modules can be applied multiple times with the same result. Puppet checks the state of things and simply ensures that the state matches the manifest. Other examples of resources might be exec resources like this:

exec { 'foo.sh':
  path => '/usr/local/bin',
  cwd  => '/home/puppetuser',
}

In the above example we run the script foo.sh with a current working directory (cwd) of /home/puppetuser.

Classes

The ntp class below first determines the OS using case statements and then, based on the OS, sets the service name and config file to match that OS. It then ensures the ntp package is deployed, which is OS independent. Then it deploys a good copy of ntp.conf from a source directory, which could just as well be a website or FTP host. Then it sets up the ntp service to be running and enabled. Inside the class we can also see examples of dependency. 
Note the require and subscribe statements. The require statement in the file resource, for instance, requires that the ntp package resource be applied before the file resource. The subscribe statement in the service resource is similar, except it not only requires the file resource but also refreshes the service any time a change to the file resource is applied.

class ntp {
  case $operatingsystem {
    centos, redhat: {
      $service_name = 'ntpd'
      $conf_file    = 'ntp.conf.el'
    }
    debian, ubuntu: {
      $service_name = 'ntp'
      $conf_file    = 'ntp.conf.debian'
    }
  }
  package { 'ntp':
    ensure => installed,
  }
  file { 'ntp.conf':
    path    => '/etc/ntp.conf',
    ensure  => file,
    require => Package['ntp'],
    source  => "/root/examples/answers/${conf_file}",
  }
  service { 'ntp':
    name      => $service_name,
    ensure    => running,
    enable    => true,
    subscribe => File['ntp.conf'],
  }
}

The Puppet DSL is pretty straightforward when you glance at it, but it may seem a bit daunting when you realize the vast array of things it can do. Have no fear. Chances are, almost anything you try to do has been done by someone before, and a Puppet module that does that thing is available from Puppet Forge. Also available is Geppetto, an IDE for working with Puppet.

Modules

Puppet resources and classes you create can be exported into modules for reuse. Common tasks can be automated with variables in your resources and classes and then called when you need to perform tasks, with values for the variables handed to them. Modules are self-contained bundles of manifests made up of resources, classes, source files, and everything needed to rapidly do things. You can roll your own or find ones created by other Puppet users at Puppet Forge.

Where can I learn more?

The fine folks at Puppet Labs have spectacular tutorials, a free virtual machine image, and tons of excellent documentation. Here are some links to get you started: Reference Manual, Language Style Guide, Learning Puppet, Puppet Labs training.

Where does Ravello Systems fit in all of this? 
What does all of this have to do with Ravello? True mastery of Puppet requires practice, planning, and preparation. If you are going to implement Puppet in your environment, it is imperative that you test your implementation. Ravello offers a free trial to get you started. You can recreate your production environment, including networking, operating systems, load balancers, and everything that goes with it, in the cloud. You can create testing labs or proof of concept demonstrations in a snap. Ravello’s nested virtualization gives you the power to fully test and implement solutions that used to require physical resources, for a fraction of the cost and none of the capital expenditure. In the next entry in our three-part series on Puppet we bring it all together. Sign up for your free trial today and ask for the Puppet Smart Lab blueprint to continue your Puppet exploration. In it you’ll find a copy of the Puppet Training VM and a Master/Agent deployment to get you started.

Other posts in the Linux Smart Lab in the Cloud series:
How to use AWS or Google cloud to learn Red Hat data center deployments
Building a cloud ready linux image locally using KVM
Ramping up for RHEL7/CentOS7
Linux security in the cloud
User Management with FreeIPA Server
Big Data Smart Labs – Hadoop Deployment lab for User Trial, POC on AWS or Google Cloud using Ravello
Puppet Smart Lab on AWS/Google Cloud – For testing and training (part 1)
Putting Puppet to Use (3 of 3 in Puppet series)
OpenStack Juno multi compute node Lab – Ready to use environment on AWS and Google Cloud


How to run Juniper vSRX in AWS or Google cloud with Layer 2 networking for demos, PoCs, training and testing

Juniper Networks has a growing list of technology alliances and a prolific ecosystem of resellers, technology partners and customers. This ecosystem needs complete, fully featured Juniper vSRX environments for demos, PoCs and testing. Public clouds - AWS or Google - are ideal for these transient workloads, but cannot run vSRX. Even if they could, public clouds don’t support Layer 2 networking - multicast/broadcast, VMACs, Gratuitous ARP and VLANs don’t work - making it difficult to create representative environments on AWS or Google. Ravello's nested virtualization and overlay networking solves this problem.

Looking to demo SRX? Just use vSRX!

Advanced capabilities such as intrusion prevention, unified threat management, NAT, IPsec VPN, and advanced routing make the Juniper SRX a popular security appliance. It is used by enterprises across the globe to secure branch offices, headquarters and data centers. However, before deploying, enterprises typically like to see a demo and run PoCs - needing a fully featured SRX environment. vSRX - SRX’s virtualized version - is a great alternative for such use-cases. Juniper sales engineers, resellers and technology partners can use vSRX to demo the power of the platform and run PoCs with ease, and SRX trainers can use it to spin up training environments.

Transient workloads for demos, PoC, training

Companies have explored provisioning their data-centers to run these transient workloads for demo, PoC, training, upgrade and development test environments - and have experienced sticker shock - it is expensive! Further, it takes weeks to months to procure and provision the hardware and get the environment running, and there are opportunity costs when the environment is not being used. Public clouds such as AWS and Google provide the flexibility to move to a usage-based pricing model and avoid these opportunity costs. However, vSRX doesn’t have a version of the appliance that can run natively on either AWS or Google cloud. 
And even if it did, the public cloud limitations would not allow it to support IPv6 and Layer 2 networking (broadcast, multicast, VLANs, VMACs and Gratuitous ARP won’t work!).

Low cost & technological flexibility - can I have it all?

A nested virtualization platform with a software defined networking overlay - such as Ravello - brings together the financial benefits of moving to the cloud while avoiding the technological limitations. This allows organizations to recreate an exact replica of their complex data center environment on Google cloud and AWS - running the same version and configuration of the Juniper vSRX VMware or KVM virtual appliance as they do in their data center. The platform gives the ability to snapshot or ‘blueprint’ a multi-VM application including Juniper vSRX complete with complex networking, and spin up as many copies as needed at the click of a button or through a REST API call.

Here is a comprehensive comparison of the benefits of running on AWS/Google using Ravello, in the DC, and natively on AWS/Google (columns in that order). Note that vSRX doesn’t run on AWS or Google natively; even if it did, due to public cloud limitations it would not be able to support Layer 2 networking and IPv6.

Usage based costs: ✓ | ✕ | see note above
Layer 2 networking support: ✓ | ✓ | ✕
IPv6 support: ✓ | ✓ | ✕
High fidelity copy of DC environment: ✓ | ✓ | ✕
Zero day deployment (no migration): ✓ | ✓ | ✕
Unlimited capacity: ✓ | ✕ | ✕
One click creation of replica environments from snapshots or blueprints: ✓ | ✕ | ✕
Automation through REST APIs: ✓ | ✕ | ✕
Version control application infrastructure: ✓ | ✕ | ✕
Share blueprints with others: ✓ | ✕ | ✕
Better end-user experience through global deployment: ✓ | ✕ | ✕
Lack of exposure to transient infrastructure issues: ✓ | ✕ | ✕

Eager to reap the benefits, I decided to move my data center vSRX deployment to Ravello. The subsequent sections chronicle the steps I undertook. 
Data Center Setup

My data center Juniper vSRX has two interfaces, one connected to the external network and one to the internal network. The external interface is connected to the internet. Three hosts sit behind the firewall on the internal network: two Ubuntu Linux machines and a Windows 2012 machine (see below). UTM, firewall, NAT and web filtering are all enabled.

Recreating the high-fidelity environment in Ravello

Recreating this environment in Ravello was a simple three-step process:

1. Upload the 4 VMs (the vSRX virtual appliance, 2 Ubuntu Linux VMs and 1 Windows 2012 VM) to Ravello
2. Verify and adjust settings where needed
3. Publish the application on the cloud of my choice - Google or AWS

Uploading the VMs

I used the Ravello VM uploader to upload my multi-VM environment.

1. The Ravello VM uploader gave me multiple options, ranging from directly uploading my multi-VM environment from vSphere/vCenter to uploading OVFs, VMDKs, QCOWs or ISOs individually. Siding with their recommendation, I chose to upload directly from vSphere.
2. After entering my vSphere credentials on the next screen, the upload process began. I was able to track the VM upload progress from Ravello's user interface.

Verifying settings

1. Verification started by asking for a VM name for the Juniper vSRX.
2. Clicking 'Next', I validated the amount of resources (vCPUs and memory) that I wanted my vSRX to run on.
3. Clicking 'Next', I was taken to the Disk tab. It was already pre-populated with the right disk size and controller.
4. Next, I entered the static IPs and netmasks for each of the vSRX interfaces (internal and external), mirroring what I had in my data center.
5. Clicking 'Next', I was taken to the Services tab. Since access to J-Web was enabled only on the internal interface of my vSRX, I didn't need to create any "Services" to open ports for external access.
I went through the same steps for my other VMs, the Linux and Windows hosts, ending up with a total of 4 VMs on my application canvas.

Publishing the environment to AWS

1. With my application canvas complete, I clicked 'Publish' to run it on AWS. I was presented with a choice of AWS regions and chose AWS Virginia. My environment took roughly 5 minutes to come alive.
2. Clicking on the Networking tab, one can see that this closely mirrors my data center setup.
3. Once the vSRX was up, I pointed my web browser at the internal IP of the vSRX from an internal host and was presented with vSRX's management UI.

Verifying that the Juniper vSRX VMware appliance works

To verify that the vSRX was working as expected, I created a security rule that blocked pings to Google's DNS service at 8.8.4.4. As you can see from the screenshot below, when my Linux host (192.168.0.3) pings 8.8.4.4, the pings are blocked. Next, I disabled the blocking policy on the vSRX through J-Web. As expected, the pings to Google DNS began receiving ICMP responses, proving that the firewall was operational and working as expected.

Conclusion

Ravello's nested hypervisor and overlay networking provide a straightforward way to run Juniper vSRX VMware or KVM appliance demos, PoCs, training and testing on AWS and Google Cloud. Just sign up for a free Ravello trial and drop us a line - we can share our vSRX config and help you get your Juniper vSRX appliance running 'as-is' in Ravello in no time.
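As an aside, a ping-blocking policy like the one used in the verification above can be sketched in Junos CLI configuration roughly as follows. This is a hedged illustration, not the configuration from the post: the zone names (trust/untrust), address-book entry name and policy name are all assumptions.

```
set security address-book global address google-dns-2 8.8.4.4/32
set security policies from-zone trust to-zone untrust policy block-google-dns match source-address any
set security policies from-zone trust to-zone untrust policy block-google-dns match destination-address google-dns-2
set security policies from-zone trust to-zone untrust policy block-google-dns match application junos-icmp-all
set security policies from-zone trust to-zone untrust policy block-google-dns then deny
```

Deactivating or deleting this policy (and committing) should restore ICMP responses from 8.8.4.4, matching the behavior observed through J-Web.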


NetScaler Smart Labs - enabling demos, PoCs, training with L2 networking on Google cloud & AWS

NetScaler has a large ecosystem comprising developers, test engineers, trainers, sales engineers, channel partners, resellers and, of course, enterprise customers. All of these players and use cases have one important element in common: they all require fully featured, complete multi-VM environments that are truly representative of a data center NetScaler environment (be it for training, sales demos and PoCs, upgrade testing, etc.). This post is meant to help the ecosystem learn about the best answer to this need and run the VMware/KVM NetScaler VPX on AWS or Google Cloud.

The cloud is ideal for temporary environments but has limitations

The bursty nature of training, sales and testing workloads has already sent members of the ecosystem to the cloud via the AWS AMI NetScaler appliance. However, the limited features enabled by this AMI (due to L2 limitations such as missing multicast and broadcast) prevent NetScaler users, customers, resellers and partners from getting the complete, fully featured appliance environment that is available to them in the VMware version of the appliance.

Nested hypervisor & overlay networking overcome cloud limitations

At Ravello we realized that we could bridge this gap quite easily and enable the NetScaler ecosystem to benefit from both the completeness of the VMware/KVM appliance and the infinite capacity and pay-per-usage model of the public cloud. Our nested hypervisor and overlay networking features enable running the VMware/KVM appliance on AWS or Google Cloud. Ravello's nested virtualization engine lets NetScaler ecosystem members replicate complete multi-VM environments, spin them up in the cloud and tear them down, all with one click or API call.
Additional benefit of automation through "blueprinting"

Furthermore, sales engineers, resellers, channel partners and customers can save blueprints of the entire NetScaler appliance environment, and thus create repeatable deployments of the exact required setup without any additional provisioning work, reconfiguration, or changes to VMs or networking (same DNS names, subnets, VLANs, etc.).

AWS vs. Ravello feature summary table

Feature | NetScaler VPX AMIs | NetScaler VPX on AWS/Google using Ravello
L2 networking | Not supported | Supported
IPv6 | Not supported | Supported
Gratuitous ARP | Not supported | Supported
VLAN tags | Not supported | Supported
Dynamic routing | Not supported | Supported
Virtual MACs | Not supported | Supported

Read more here about the importance of this feature set to the richness of a NetScaler load balancer setup.

Step-by-step guide to your NetScaler VPX lab in the cloud

Using this step-by-step guide to set up your VMware/KVM NetScaler VPX on AWS or Google, you will be well on your way to having complete on-demand NetScaler environments for your sales demos, PoC, training and testing labs. You can start your free trial here, or drop us a note and we will be happy to share our NetScaler blueprint and setup with you.

