Monday May 20, 2013

How To Protect Public Cloud Using Solaris 11 Technologies

When we meet with our partners, we often ask them: "What are your main security challenges for public cloud infrastructure? What worries you in this regard?"
This is what we've gathered from our partners regarding the security challenges:

1.    Protect data at rest, in transit, and in use using encryption
2.    Prevent denial of service attacks against their infrastructure
3.    Segregate network traffic between different cloud users
4.    Disable hostile code (e.g. 'rootkit' attacks)
5.    Minimize the operating system attack surface
6.    Secure data deletion once we are done with our project
7.    Enable strong authorization and authentication for non-secure protocols

Based on these guidelines, we began to design our Oracle Developer Cloud. Our vision was to leverage Solaris 11 technologies in order to meet those security requirements.


First - Our partners would like to encrypt everything, from the disk up through the layers to the application, without the performance overhead usually associated with this type of technology.
The SPARC T4 (and more recently the SPARC T5) integrated cryptographic accelerator allows us to encrypt data using the ZFS encryption capability.
We can encrypt all the network traffic using SSL, from the client connection to the cloud's main portal using the Secure Global Desktop (SGD) technology, and also encrypt the network traffic between the application tier and the database tier. In addition, we can protect our database tables using Oracle Transparent Data Encryption (TDE).
During our performance tests we saw that the performance impact was very low (less than 5%) when we enabled these encryption technologies.
The following example shows how we created an encrypted file system:

# zfs create -o encryption=on rpool/zfs_file_system

Enter passphrase for 'rpool/zfs_file_system':
Enter again:

NOTE - In the above example, we used a passphrase that is requested interactively, but we could instead use SSL or a key repository.
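On the database tier, both the Oracle network encryption and the TDE wallet mentioned above are driven by sqlnet.ora. A minimal sketch, assuming an 11g-style file-based wallet (the wallet directory path is illustrative, not from our actual setup):

```
# sqlnet.ora - sketch; the wallet directory is illustrative
# Require encryption of client/server network traffic
SQLNET.ENCRYPTION_SERVER = required
SQLNET.ENCRYPTION_TYPES_SERVER = (AES256)

# Wallet holding the TDE master key
ENCRYPTION_WALLET_LOCATION =
  (SOURCE = (METHOD = FILE)
    (METHOD_DATA = (DIRECTORY = /etc/oracle/wallet)))
```

With this in place, the AES256 work on both the network and TDE paths is offloaded to the SPARC cryptographic accelerator.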
Second - How can we mitigate denial of service attacks?
The new Solaris 11 network virtualization technology allows us to virtualize our network by splitting a physical network card into multiple virtual network 'cards'. In addition, it provides the capability to set up flows, a sophisticated quality-of-service mechanism.
Flows allow us to limit the network bandwidth for a specific network port on a specific network interface.

In the following example we limit the SSL traffic to 100Mb/s on the vnic0 network interface:

# dladm create-vnic -l net0 vnic0
# flowadm add-flow -l vnic0 -a transport=TCP,local_port=443 https-flow
# flowadm set-flowprop -p maxbw=100M https-flow


During a denial of service (DoS) attack against this web server, the bandwidth cap minimizes the impact on the rest of the infrastructure.
Third - How can we isolate network traffic between different tenants of the public cloud?
The new Solaris 11 network technology allows us to segregate network traffic at multiple layers.

For example, we can separate the network traffic at layer two using VLANs:

# dladm create-vnic -l net0  -v 2 vnic1

Also, we can implement firewall rules for layer-three separation using the Solaris 11 built-in firewall software.
In addition to the firewall software, Solaris 11 has a built-in load balancer and routing software. In a cloud-based environment this means that new functionality can be added promptly, since we don't need extra hardware in order to implement those functions.
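As a sketch of such a layer-three rule set, the built-in IP Filter firewall reads its rules from /etc/ipf/ipf.conf; the tenant subnet below is illustrative, not from our actual deployment:

```
# /etc/ipf/ipf.conf - illustrative tenant-separation rules
# Deny all inbound traffic by default
block in all
# Permit HTTPS from one tenant's subnet only, keeping connection state
pass in quick on net0 proto tcp from 10.0.1.0/24 to any port = 443 keep state
```

The rules take effect after enabling the service with 'svcadm enable network/ipfilter' and reloading them with 'ipf -Fa -f /etc/ipf/ipf.conf'.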

Fourth - Rootkits have become a serious threat, allowing the insertion of hostile code using custom kernel modules.
The Solaris Zones technology prevents loading or unloading kernel modules (since local zones lack the sys_config privilege).
This way we can limit the attack surface and prevent this type of attack.

In the following example we can see that even the root user is unable to load a custom kernel module inside a Solaris zone:

# ppriv -De modload -p /tmp/systrace

modload[21174]: missing privilege "ALL" (euid = 0, syscall = 152) needed at modctl+0x52
Insufficient privileges to load a module

Fifth - The Solaris immutable zones technology allows us to minimize the operating system attack surface,
for example by disabling the ability to install new IPS packages and to modify file systems such as /etc.
We can set up a Solaris immutable zone using the zonecfg command:

# zonecfg -z secure-zone
Use 'create' to begin configuring a new zone.
zonecfg:secure-zone> create
create: Using system default template 'SYSdefault'
zonecfg:secure-zone> set zonepath=/zones/secure-zone
zonecfg:secure-zone> set file-mac-profile=fixed-configuration
zonecfg:secure-zone> commit
zonecfg:secure-zone> exit

# zoneadm -z secure-zone install

We can combine ZFS encryption and immutable zones for additional protection.

Sixth - The main challenge of building a secure Big Data solution is the lack of built-in security mechanisms for authorization and authentication.
The integrated Solaris Kerberos allows us to enable strong authorization and authentication for distributed systems that are non-secure by default, such as Apache Hadoop.

The following example demonstrates how easy it is to install and set up a Kerberos infrastructure on Solaris 11:

# pkg install pkg://solaris/system/security/kerberos-5
# kdcmgr -a kws/admin -r EXAMPLE.COM create master


Finally - Our partners want assurance that when a project is finished, all of its data is erased with no ability to recover it by reading the disk blocks directly, bypassing the file system layer.
The ZFS assured delete feature allows us to implement this kind of secure deletion.
The following example shows how we change the ZFS wrapping key to random data (the output of /dev/random), then unmount the file system, and finally destroy it:

# zfs key -c -o  keysource=raw,file:///dev/random rpool/zfs_file_system
# zfs key -u rpool/zfs_file_system
# zfs destroy rpool/zfs_file_system


Conclusion
In this blog entry, I covered how we can leverage the SPARC T4/T5 and Solaris 11 features in order to build a secure cloud infrastructure. These technologies allow us to build highly protected environments without the need to invest extra budget in special hardware. They also allow us to protect our data and network traffic from various threats.
If you would like to hear more about these technologies, please join us at the next IGT cloud meet-up.

Thursday Jul 02, 2009

Storage virtualization with COMSTAR and ZFS


COMSTAR is a software framework that enables you to turn any
OpenSolaris host into a SCSI target that can be accessed over the
network by initiator hosts. COMSTAR breaks down the huge task of
handling a SCSI target subsystem into independent functional modules.
These modules are then glued together by the SCSI Target Mode Framework (STMF).

COMSTAR features include:

    * Extensive LUN masking and mapping functions
    * Multipathing across different transport protocols
    * Multiple parallel transfers per SCSI command
    * Scalable design
    * Compatibility with generic HBAs


COMSTAR is integrated into the latest OpenSolaris release.

In this entry I will demonstrate the integration between COMSTAR and ZFS.


Architecture layout: (diagram not included)
You can install all the appropriate COMSTAR packages:

# pkg install storage-server
On a newly installed OpenSolaris system, the STMF service is disabled by default.

You must complete the following steps to enable the STMF service.
View the existing state of the service:
# svcs stmf
 disabled 15:58:17 svc:/system/stmf:default

Enable the stmf service
# svcadm enable stmf
Verify that the service is active.
# svcs stmf
 online 15:59:53 svc:/system/stmf:default

Create a RAID-Z storage pool.
The server has six controllers, each with eight disks, and I built the storage pool to spread I/O evenly and to enable eight RAID-Z stripes of equal length.
# zpool create -f tank \
raidz c0t0d0 c1t0d0 c4t0d0 c6t0d0 c7t0d0 \
raidz c1t1d0 c4t1d0 c5t1d0 c6t1d0 c7t1d0 \
raidz c0t2d0 c4t2d0 c5t2d0 c6t2d0 c7t2d0 \
raidz c0t3d0 c1t3d0 c5t3d0 c6t3d0 c7t3d0 \
raidz c0t4d0 c1t4d0 c4t4d0 c6t4d0 c7t4d0 \
raidz c0t5d0 c1t5d0 c4t5d0 c5t5d0 c7t5d0 \
raidz c0t6d0 c1t6d0 c4t6d0 c5t6d0 c6t6d0 \
raidz c0t7d0 c1t7d0 c4t7d0 c6t7d0 c7t7d0 \
spare c0t1d0 c1t2d0 c4t3d0 c6t5d0 c7t6d0 c5t7d0


After the pool is created, the zfs utility can be used to create a 50GB ZFS volume.
# zfs create -V 50g tank/comstar-vol1

Create a logical unit using the volume.
# sbdadm create-lu /dev/zvol/rdsk/tank/comstar-vol1
Created the following logical unit :

GUID                              DATA SIZE    SOURCE
--------------------------------  -----------  --------------------------------
600144f07bb2ca0000004a4c5eda0001  53687025664  /dev/zvol/rdsk/tank/comstar-vol1

Verify the creation of the logical unit and obtain the Global Unique Identification (GUID) number for the logical unit.
# sbdadm list-lu
Found 1 LU(s)
GUID                              DATA SIZE    SOURCE
--------------------------------  -----------  --------------------------------
600144f07bb2ca0000004a4c5eda0001  53687025664  /dev/zvol/rdsk/tank/comstar-vol1

This procedure makes a logical unit available to all initiator hosts on a storage network.
Add a view for the logical unit.

# stmfadm add-view GUID_number

Identify the host identifier of the initiator host you want to add to your view.
Follow the instructions for each port provider to identify the initiators associated with each
port provider.


You can see that the port mode is Initiator

# fcinfo hba-port
       HBA Port WWN: 210000e08b91facd
        Port Mode: Initiator
        Port ID: 2
        OS Device Name: /dev/cfg/c16
        Manufacturer: QLogic Corp.
        Model: 375-3294-01
        Firmware Version: 04.04.01
        FCode/BIOS Version: BIOS: 1.4; fcode: 1.11; EFI: 1.0;
        Serial Number: 0402R00-0634175788
        Driver Name: qlc
        Driver Version: 20080617-2.30
        Type: L-port
        State: online
        Supported Speeds: 1Gb 2Gb 4Gb
        Current Speed: 4Gb
        Node WWN: 200000e08b91facd
        Max NPIV Ports: 63
        NPIV port list:

Before making changes to the HBA ports, first check the existing port
bindings.
View what is currently bound to the port drivers.
In this example, the current binding is pci1077,2422.
# mdb -k
Loading modules: [ unix genunix specfs dtrace mac cpu.generic
cpu_ms.AuthenticAMD.15 uppc pcplusmp scsi_vhci zfs sd sockfs ip hook
neti sctp arp usba qlc fctl  
fcp cpc random crypto stmf nfs lofs logindmux ptm ufs sppp nsctl ipc ]
> ::devbindings -q qlc
ffffff04e058f560 pci1077,2422, instance #0 (driver name: qlc)
ffffff04e058f2e0 pci1077,2422, instance #1 (driver name: qlc)

Quit mdb.
> $q
Remove the current binding, which in this example is qlc.
In this example, the qlc driver is actively bound to pci1077,2422.
You must remove the existing binding for qlc before you can add that
binding to a new driver.

 Single quotes are required in this syntax.
# update_drv -d -i 'pci1077,2422' qlc
Cannot unload module: qlc
Will be unloaded upon reboot.

This message does not indicate an error.
The configuration files have been updated but the qlc driver remains
bound to the port until reboot.
Establish the new binding to qlt.
Single quotes are required in this syntax.
# update_drv -a -i 'pci1077,2422' qlt
  Warning: Driver (qlt) successfully added to system but failed to
attach

This message does not indicate an error. The qlc driver remains bound
to the port, until reboot.
The qlt driver attaches when the system is rebooted.
Reboot the system to attach the new driver, and then recheck the
bindings.
# reboot
# mdb -k

Loading modules: [ unix genunix specfs dtrace mac cpu.generic
cpu_ms.AuthenticAMD.15 uppc pcplusmp scsi_vhci zfs sd sockfs ip hook
neti sctp arp usba fctl stmf lofs fcip cpc random crypto nfs logindmux
ptm ufs sppp nsctl ipc ]
> ::devbindings -q qlt
ffffff04e058f560 pci1077,2422, instance #0 (driver name: qlt)
ffffff04e058f2e0 pci1077,2422, instance #1 (driver name: qlt)

Quit mdb.
 > $q


You can see that the port mode is Target

# fcinfo hba-port

        HBA Port WWN: 210000e08b91facd
        Port Mode: Target
        Port ID: 2
        OS Device Name: /dev/cfg/c16
        Manufacturer: QLogic Corp.
        Model: 375-3294-01
        Firmware Version: 04.04.01
        FCode/BIOS Version: BIOS: 1.4; fcode: 1.11; EFI: 1.0;
        Serial Number: 0402R00-0634175788
        Driver Name: qlc
        Driver Version: 20080617-2.30
        Type: L-port
        State: online
        Supported Speeds: 1Gb 2Gb 4Gb
        Current Speed: 4Gb
        Node WWN: 200000e08b91facd
        Max NPIV Ports: 63
        NPIV port list:


Verify that the target mode framework has access to the HBA ports.
# stmfadm list-target -v

Target: wwn.210100E08BB1FACD
Operational Status: Online
Provider Name : qlt
Alias : qlt1,0
Sessions : 0
Target: wwn.210000E08B91FACD
Operational Status: Online
Provider Name : qlt
Alias : qlt0,0
Sessions : 1
Initiator: wwn.210000E08B89F077
Alias: -
Logged in since: Thu Jul 2 12:02:59 2009


Now for the client setup :


On the client machine, verify that you can see the new logical unit:
# cfgadm -al
Ap_Id                          Type         Receptacle   Occupant     Condition
c0                             scsi-bus     connected    configured   unknown
c0::dsk/c0t0d0                 CD-ROM       connected    configured   unknown
c1                             scsi-bus     connected    configured   unknown
c1::dsk/c1t0d0                 disk         connected    configured   unknown
c1::dsk/c1t2d0                 disk         connected    configured   unknown
c1::dsk/c1t3d0                 disk         connected    configured   unknown
c2                             fc-private   connected    configured   unknown
c2::210000e08b91facd           disk         connected    configured   unknown
c3                             fc           connected    unconfigured unknown
usb0/1                         unknown      empty        unconfigured ok
usb0/2                         unknown      empty        unconfigured ok

You might need to rescan the SAN bus in order to discover the new logical unit:
# luxadm -e forcelip /dev/cfg/c2
# format
Searching for disks...
AVAILABLE DISK SELECTIONS:
0. c1t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/sd@0,0
1. c1t2d0 <FUJITSU-MAV2073RCSUN72G-0301-68.37GB>
/pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/sd@2,0
2. c1t3d0 <FUJITSU-MAV2073RCSUN72G-0301-68.37GB>
/pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/sd@3,0
3. c2t210000E08B91FACDd0 <SUN-COMSTAR-1.0 cyl 65533 alt 2 hd 16 sec 100>
/pci@7c0/pci@0/pci@9/SUNW,qlc@0/fp@0,0/ssd@w210000e08b91facd,0
Specify disk (enter its number):

You can see the SUN-COMSTAR-1.0 string in the disk properties.
Now you can build a storage pool on top of it:
# zpool create comstar-pool c2t210000E08B91FACDd0
Verify the pool creation:
# zpool list
NAME          SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
comstar-pool  49.8G  114K  49.7G  0%   ONLINE  -

After the pool is created, the zfs utility can be used to create a ZFS volume.
#  zfs create -V 48g comstar-pool/comstar-vol1


For more information about COMSTAR, please check the COMSTAR project on OpenSolaris.



