Tuesday Dec 17, 2013

Performance Analysis in a Multitenant Cloud Environment Using Hadoop Cluster and Oracle Solaris 11

Oracle Solaris 11 comes with a new set of commands that provide the ability to conduct
performance analysis in a virtualized multitenant cloud environment. Such analysis, with
different users running various workloads, can be a challenging task for the following reasons:

Virtualization software adds an abstraction layer to enable better manageability. Although this makes it much simpler to manage the virtualized resources, it is very difficult to find which physical system resources are overloaded.

Each Oracle Solaris Zone can have a different workload; it can be disk I/O, network I/O, CPU, memory, or a combination of these.

In addition, a single Oracle Solaris Zone can overload the entire system's resources. It is very difficult to observe the environment; you need to be able to monitor it from the top level, seeing all the virtual instances (non-global zones) in real time, with the ability to drill down to specific resources.


The benefits of using Oracle Solaris 11 for virtualized performance analysis are:

Observability. The Oracle Solaris global zone is a fully functioning operating system, not a proprietary hypervisor or a minimized operating system that lacks the ability to observe the entire environment, including the host and the VMs, at the same time. The global zone can see all the non-global zones' performance metrics.

Integration. All the subsystems are built inside the same operating system. For example, the ZFS file system and the Oracle Solaris Zones virtualization technology are integrated together. This is preferable to mixing many vendors’ technology, which causes a lack of integration between the different operating system (OS) subsystems and makes it very difficult to analyze all the different OS subsystems at the same time.

Virtualization awareness. The built-in Oracle Solaris commands are virtualization-aware, and they can provide performance statistics for the entire system (the Oracle Solaris global zone). In addition to providing the ability to drill down into every resource (Oracle Solaris non-global zones), these commands provide accurate results during the performance analysis process.

In this article, we are going to explore four examples that show how to monitor a virtualized environment with Oracle Solaris Zones using the built-in Oracle Solaris 11 tools. These tools provide the ability to drill down to specific resources, for example, CPU, memory, disk, and network. In addition, they can print statistics per Oracle Solaris Zone and provide information on the running applications.
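As a brief hedged illustration (not taken from the article itself), the zonestat(1) utility is one such built-in virtualization-aware tool; the invocations below follow its standard Oracle Solaris 11 usage:

```shell
# Report overall utilization for every zone, sampling every 5 seconds
# (press Ctrl-C to stop).
zonestat 5

# Drill down into a specific resource, for example physical memory per zone.
zonestat -r physical-memory 5
```

From the global zone, each report line breaks down usage per non-global zone, which is exactly the top-level-with-drill-down view described above.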


Read the article: Performance Analysis in a Multitenant Cloud Environment

Tuesday Oct 22, 2013

How to Set Up a Hadoop Cluster Using Oracle Solaris (Hands-On Lab)


Oracle Technology Network (OTN) published the "How to Set Up a Hadoop Cluster Using Oracle Solaris" OOW 2013 Hands-On Lab.
This hands-on lab presents exercises that demonstrate how to set up an Apache Hadoop cluster using Oracle Solaris
11 technologies such as Oracle Solaris Zones, ZFS, and network virtualization. Key topics include the Hadoop Distributed File System
(HDFS) and the Hadoop MapReduce programming model.
We will also cover the Hadoop installation process and the cluster building blocks:
NameNode, a secondary NameNode, and DataNodes. In addition, you will see how you can combine the Oracle Solaris 11 technologies for better
scalability and data security, and you will learn how to load data into the Hadoop cluster and run a MapReduce job.

Summary of Lab Exercises
This hands-on lab consists of 13 exercises covering various Oracle Solaris and Apache Hadoop technologies:
    Install Hadoop.
    Edit the Hadoop configuration files.
    Configure the Network Time Protocol.
    Create the virtual network interfaces (VNICs).
    Create the NameNode and the secondary NameNode zones.
    Set up the DataNode zones.
    Configure the NameNode.
    Set up SSH.
    Format HDFS from the NameNode.
    Start the Hadoop cluster.
    Run a MapReduce job.
    Secure data at rest using ZFS encryption.
    Use Oracle Solaris DTrace for performance monitoring.
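As a hedged sketch of the last two Hadoop steps mentioned above (loading data and running a MapReduce job), the commands below use the standard Hadoop CLI; the file names and HDFS paths are illustrative, not taken from the lab:

```shell
# Copy a local input file into HDFS (run as the hadoop user on the NameNode).
hadoop fs -mkdir /user/hadoop/input
hadoop fs -put /tmp/data.txt /user/hadoop/input

# Run the sample wordcount MapReduce job that ships with Hadoop.
hadoop jar $HADOOP_HOME/hadoop-examples-*.jar wordcount \
    /user/hadoop/input /user/hadoop/output

# Inspect the job output stored in HDFS.
hadoop fs -cat /user/hadoop/output/part-*
```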
 

Read it now

Monday May 20, 2013

How To Protect Public Cloud Using Solaris 11 Technologies

When we meet with our partners, we often ask them: "What are your main security challenges for public cloud infrastructure? What worries you in this regard?"
This is what we've gathered from our partners regarding the security challenges:

1.    Protect data at rest, in transit, and in use using encryption
2.    Prevent denial-of-service attacks against their infrastructure
3.    Segregate network traffic between different cloud users
4.    Disable hostile code (e.g., 'rootkit' attacks)
5.    Minimize the operating system attack surface
6.    Secure data deletion once a project is complete
7.    Enable strong authorization and authentication for non-secure protocols

Based on these guidelines, we began to design our Oracle Developer Cloud. Our vision was to leverage Solaris 11 technologies in order to meet those security requirements.


First - Our partners would like to encrypt everything, from the disk up through the layers to the application, without the performance overhead that is usually associated with this type of technology.
The SPARC T4 (and lately the SPARC T5) integrated cryptographic accelerator allows us to encrypt data using the ZFS encryption capability.
We can encrypt all the network traffic using SSL, from the client connection to the cloud's main portal using the Secure Global Desktop (SGD) technology, and also encrypt the network traffic between the application tier and the database tier. In addition, we can protect our database tables using Oracle Transparent Data Encryption (TDE).
During our performance tests we saw that the performance impact was very low (less than 5%) when we enabled those encryption technologies.
The following example shows how we created an encrypted file system:

# zfs create -o encryption=on rpool/zfs_file_system

Enter passphrase for 'rpool/zfs_file_system':
Enter again:

NOTE - In the above example we used a passphrase that is requested interactively, but the key can also be supplied as a raw key file, retrieved over SSL, or stored in a key repository.
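For instance, a raw key stored in a file can be supplied at creation time so that no interactive prompt is needed. This is a hedged sketch; the key file path is illustrative:

```shell
# Generate a 256-bit AES key into a file, then create the file system
# with that key as its wrapping-key source (no passphrase prompt).
pktool genkey keystore=file outkey=/media/stick/mykey keytype=aes keylen=256
zfs create -o encryption=aes-256-ccm \
    -o keysource=raw,file:///media/stick/mykey rpool/zfs_file_system
```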
Second - How can we mitigate denial-of-service attacks?
The new Solaris 11 network virtualization technology allows us to virtualize our network by splitting a physical network card into multiple virtual network 'cards'. In addition, it provides the capability to set up flows, a sophisticated quality-of-service mechanism.
Flows allow us to limit the network bandwidth for a specific network port on a specific network interface.

In the following example, we limit the SSL traffic to 100 Mbps on the vnic0 network interface:

# dladm create-vnic -l net0 vnic0
# flowadm add-flow -l vnic0 -a transport=TCP,local_port=443 https-flow
# flowadm set-flowprop -p maxbw=100M https-flow


During a denial-of-service (DoS) attack against this web server, we can minimize the impact on the rest of the infrastructure.
Third - How can we isolate network traffic between different tenants of the public cloud?
The new Solaris 11 network technology allows us to segregate the network traffic on multiple layers.

For example, we can separate the network traffic at Layer 2 using VLANs:

# dladm create-vnic -l net0  -v 2 vnic1

We can also implement firewall rules for Layer 3 separation using the Solaris 11 built-in firewall software.
In addition to the firewall software, Solaris 11 has built-in load balancer and routing software. In a cloud-based environment this means that new functionality can be added promptly, since we don't need extra hardware in order to implement those functions.
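As a hedged example of such a Layer 3 rule (the interface name and policy are illustrative), the built-in IP Filter firewall can be enabled and given a simple inbound policy:

```shell
# A minimal /etc/ipf/ipf.conf: allow inbound HTTPS on net0, block the rest.
cat > /etc/ipf/ipf.conf <<'EOF'
pass in quick on net0 proto tcp from any to any port = 443 keep state
block in on net0
EOF

# Enable the IP Filter service so the policy takes effect.
svcadm enable network/ipfilter
```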

Fourth - Rootkits have become a serious threat, allowing the insertion of hostile code using custom kernel modules.
The Solaris Zones technology prevents loading or unloading kernel modules (since local zones lack the sys_config privilege).
This way we can limit the attack surface and prevent this type of attack.

In the following example, we can see that even the root user is unable to load a custom kernel module inside a Solaris Zone:

# ppriv -De modload -p /tmp/systrace

modload[21174]: missing privilege "ALL" (euid = 0, syscall = 152) needed at modctl+0x52
Insufficient privileges to load a module

Fifth - The Solaris immutable zones technology allows us to minimize the operating system attack surface, for example, by disabling the ability to install new IPS packages and to modify file systems like /etc.
We can set up a Solaris immutable zone using the zonecfg command:

# zonecfg -z secure-zone
Use 'create' to begin configuring a new zone.
zonecfg:secure-zone> create
create: Using system default template 'SYSdefault'
zonecfg:secure-zone> set zonepath=/zones/secure-zone
zonecfg:secure-zone> set file-mac-profile=fixed-configuration
zonecfg:secure-zone> commit
zonecfg:secure-zone> exit

# zoneadm -z secure-zone install

We can also combine ZFS encryption with immutable zones.

Sixth - The main challenge of building a secure big data solution is the lack of built-in security mechanisms for authorization and authentication.
The integrated Solaris Kerberos allows us to enable strong authorization and authentication for distributed systems that are not secure by default, like Apache Hadoop.

The following example demonstrates how easy it is to install and set up a Kerberos infrastructure on Solaris 11:

# pkg install pkg://solaris/system/security/kerberos-5
# kdcmgr -a kws/admin -r EXAMPLE.COM create master


Finally - Our partners want to ensure that when a project is finished, all of its data is erased, without the possibility of recovering that data by reading the disk blocks directly and bypassing the file system layer.
The ZFS assured delete feature allows us to implement this kind of secure deletion.
The following example shows how we can change the ZFS wrapping key to random data (the output of /dev/random), then unmount the file system, and finally destroy it:

# zfs key -c -o  keysource=raw,file:///dev/random rpool/zfs_file_system
# zfs key -u rpool/zfs_file_system
# zfs destroy rpool/zfs_file_system


Conclusion
In this blog entry, I covered how we can leverage the SPARC T4/T5 and Solaris 11 features in order to build a secure cloud infrastructure. Those technologies allow us to build highly protected environments without the need to invest extra budget in special hardware. They also allow us to protect our data and network traffic from various threats.
If you would like to hear more about those technologies, please join us at the next IGT cloud meet-up.

Friday Jun 04, 2010

Increasing Application Availability Using Oracle VM Server for SPARC (LDoms): An Oracle Database Example

This white paper by Orgad Kimchi and Roman Ivanov of Oracle's ISV Engineering Team describes how to use the warm migration feature of Oracle VM Server for SPARC to move a running Oracle database application from one server to another without disruption. It includes explanations of the process and instructions for setting up the source and target servers.
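As a hedged sketch of the core step behind such a migration (the domain and host names here are illustrative; the white paper covers the full source and target setup), the ldm command on the source control domain drives the move:

```shell
# On the source machine's control domain: migrate the guest domain "oradb"
# to the target machine, authenticating against its control domain.
ldm migrate-domain oradb root@target-host
```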


For the full white paper see Increasing Application Availability Using Oracle VM Server for SPARC

About

This blog covers cloud computing, big data, and virtualization technologies.
