Tuesday Apr 08, 2014

Tech Article: Building a Cloud-Based Data Center - Part 1

by Ron Larson and Richard Friedman
This article discusses the factors to consider when building a cloud,
the cloud capabilities offered by Oracle Solaris 11, and the structure of Oracle Solaris Remote Lab, an Oracle implementation of an Oracle Solaris 11 cloud. 

Topics included in this article:

Why a Cloud? And Why Oracle Solaris 11?
Cloud Benefits—Solving Business Needs
The Cloud as Multitier Data Center Virtualization
OS Virtualization with Oracle Solaris Zones
The Cloud as a Service
Choosing the Virtualization Model That Fits
Advantages of Creating a Cloud Using Oracle Solaris Zones
Oracle Solaris Remote Lab—An Overview

This is the first in a series of articles that show how to build a cloud with Oracle Solaris 11.
In our next article, we take a look at how Oracle Solaris 11 provides data security for Oracle Solaris Remote Lab.


Monday Dec 09, 2013

Presentations from Oracle Week 2013

Thanks for attending Oracle Week 2013, the largest paid IT event in Israel, with 140 technical seminars and 1,800 participants.



Oracle ISV Engineering ran two seminars: Built for Cloud: Virtualization Use Cases and Technologies in Oracle Solaris 11, delivered together with two Oracle partners, Grigale and 4NET Plus, and Hadoop Cluster Installation and Administration – Hands-on Workshop.

I am posting the presentations and a link to the Hadoop hands-on lab here, for further reading on Hadoop and Oracle Solaris.

A big thank-you to the participants for the engaged discussions, and to the organizers from John Bryce, especially Yael Dotan and Yaara Raz, for a great event!


Monday Jun 24, 2013

How to Set Up a MongoDB NoSQL Cluster Using Oracle Solaris Zones

This article starts with a brief overview of MongoDB and follows with an example of setting up a three-node MongoDB cluster using Oracle Solaris Zones.
The following are benefits of using Oracle Solaris for a MongoDB cluster:

• You can add new MongoDB hosts to the cluster in minutes instead of hours using the zone cloning feature. Using Oracle Solaris Zones, you can easily scale out your MongoDB cluster.
• In case there is a user error or software error, the Service Management Facility ensures the high availability of each cluster member and ensures that MongoDB replication failover will occur only as a last resort.
• You can discover performance issues in minutes versus days by using DTrace, which provides increased operating system observability. DTrace provides a holistic performance overview of the operating system and allows deep performance analysis through cooperation with the built-in MongoDB tools.
• ZFS built-in compression provides optimized disk I/O utilization for better I/O performance.
In the example presented in this article, all the MongoDB cluster building blocks will be installed using Oracle Solaris Zones, the Service Management Facility, ZFS, and network virtualization technologies.
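
As a rough sketch of the zone-cloning workflow mentioned above (the zone names mongodb-node1 and mongodb-node2 and the file path are made up for illustration, not taken from this article), adding another MongoDB host to the cluster can look like this:

# Halt the source zone; zoneadm clone requires a non-running source
global# zoneadm -z mongodb-node1 halt
# Reuse the source zone's configuration for the new node (edit the zonepath and VNIC before applying it)
global# zonecfg -z mongodb-node1 export -f /var/tmp/mongodb-node.cfg
global# zonecfg -z mongodb-node2 -f /var/tmp/mongodb-node.cfg
# Clone the installed zone; when the zonepath is on ZFS this is a ZFS clone, so it takes minutes
global# zoneadm -z mongodb-node2 clone mongodb-node1
global# zoneadm -z mongodb-node2 boot

After the cloned zone boots, the new mongod instance only needs to be added to the replica set with rs.add() from the MongoDB shell.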

Figure 1 shows the architecture:


Thursday Jan 24, 2013

How to Set Up a Hadoop Cluster Using Oracle Solaris Zones

This article starts with a brief overview of Hadoop and follows with an
example of setting up a Hadoop cluster with a NameNode, a secondary
NameNode, and three DataNodes using Oracle Solaris Zones.

The following are benefits of using Oracle Solaris Zones for a Hadoop cluster:

· Fast provisioning of new cluster members using the zone cloning feature

· Very high network throughput between the zones for data node replication

· Optimized disk I/O utilization for better I/O performance with ZFS built-in compression

· Secure data at rest using ZFS encryption
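
As a small, hedged sketch of the last two points (the pool and dataset names below are illustrative only), the HDFS data directories of a DataNode zone could be placed on compressed and, where required, encrypted ZFS datasets:

# Compression is transparent to Hadoop and reduces disk I/O for HDFS blocks
global# zfs create -o compression=on rpool/export/hdfs_data
# Encryption can only be enabled at dataset creation time; with the default keysource you are prompted for a passphrase
global# zfs create -o encryption=on -o compression=on rpool/export/hdfs_secure
global# zfs get compression,encryption rpool/export/hdfs_data rpool/export/hdfs_secure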






Hadoop uses the Hadoop Distributed File System (HDFS) to store data.
HDFS provides high-throughput access to application data and is suitable
for applications that have large data sets.

The Hadoop cluster building blocks are as follows:

· NameNode: The centerpiece of HDFS, which stores file system metadata,
directs the slave DataNode daemons to perform the low-level I/O tasks,
and also runs the JobTracker process.

· Secondary NameNode: Performs internal checks of the NameNode transaction log.

· DataNodes: Nodes that store the data in the HDFS file system, which are also known as slaves and run the TaskTracker process.

In the example presented in this article, all the Hadoop cluster
building blocks will be installed using Oracle Solaris Zones, ZFS,
and network virtualization technologies.
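
The high network throughput between the zones mentioned above comes from Oracle Solaris network virtualization: zones that share an etherstub exchange DataNode replication traffic in memory, without leaving the machine. A minimal sketch on Oracle Solaris 11, with made-up link and zone names, looks like this:

# Create an internal virtual switch (etherstub) and one VNIC per Hadoop zone
global# dladm create-etherstub stub0
global# dladm create-vnic -l stub0 namenode1
global# dladm create-vnic -l stub0 datanode1
# Hand a VNIC to an already-configured zone as its exclusive-IP network interface
global# zonecfg -z data-node1
zonecfg:data-node1> set ip-type=exclusive
zonecfg:data-node1> add net
zonecfg:data-node1:net> set physical=datanode1
zonecfg:data-node1:net> end
zonecfg:data-node1> commit
zonecfg:data-node1> exit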





Tuesday Sep 11, 2012

Oracle Solaris 8 P2V with Oracle database 10.2 and ASM

Background information

In this document I will demonstrate the following scenario:

Migration of a physical Solaris 8 system running Oracle Database 10.2.0.5 with an ASM file system located on SAN storage into a Solaris 8 branded zone inside a Solaris 10 guest domain on top of a Solaris 11 control domain.


In the first example, we will preserve the host information.

In the second example, we will modify the host name.

Executive summary

In this document I will demonstrate how we leveraged the Solaris 8 P2V tool to migrate a physical Solaris 8 system running an Oracle database with an ASM file system into a Solaris 8 branded zone.

The ASM file system is located on a LUN in SAN storage connected via an FC HBA.

During the migration we used the same LUN on the source and target servers in order to avoid data migration.

The P2V tool successfully migrated the Solaris 8 physical system into the Solaris 8 branded Zone and the Zone was able to access the ASM file system.

Architecture layout



Source system

Hardware details:
Sun Fire V440 server with four 1593 MHz UltraSPARC IIIi CPUs and 8 GB of RAM

Operating system: Solaris 8 2/04 + latest recommended patch set


Target system


Oracle's SPARC T4-1 server with a single 8-core, 2.85 GHz SPARC T4 processor and 32 GB of RAM

Install Solaris 11


Setting up Control Domain

primary# ldm add-vcc port-range=5000-5100 primary-vcc0 primary
primary# ldm add-vds primary-vds0 primary
primary# ifconfig -a
net0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
inet 10.162.49.45 netmask ffffff00 broadcast 10.162.49.255
ether 0:14:4f:ab:e3:7a
primary# ldm add-vsw net-dev=net0 primary-vsw0 primary
primary# ldm list-services primary
VCC
NAME LDOM PORT-RANGE
primary-vcc0 primary 5000-5100

VSW
NAME LDOM MAC NET-DEV ID DEVICE LINKPROP DEFAULT-VLAN-ID PVID VID MTU MODE INTER-VNET-LINK
primary-vsw0 primary 00:14:4f:fb:44:4d net0 0 switch@0 1 1 1500 on

VDS
NAME LDOM VOLUME OPTIONS MPGROUP DEVICE
primary-vds0 primary

primary# ldm set-mau 1 primary
primary# ldm set-vcpu 16 primary
primary# ldm start-reconf primary
primary# ldm set-memory 8G primary
primary# ldm add-config initial
primary# ldm list-config

factory-default
initial [current]
primary-with-clients

primary# shutdown -y -g0 -i6

Enable the virtual console service

primary# svcadm enable vntsd

primary# svcs vntsd

STATE STIME FMRI

online 15:55:10 svc:/ldoms/vntsd:default

Setting Up Guest Domain

primary# ldm add-domain ldg1
primary# ldm add-vcpu 32 ldg1
primary# ldm add-memory 8G ldg1
primary# ldm add-vnet vnet0 primary-vsw0 ldg1

primary# ldm add-vnet vnet1 primary-vsw0 ldg1

primary# ldm add-vdsdev /dev/dsk/c3t1d0s2 vol1@primary-vds0

primary# ldm add-vdisk vdisk1 vol1@primary-vds0 ldg1

primary# ldm set-var auto-boot\?=true ldg1

primary# ldm set-var boot-device=vdisk1 ldg1

primary# ldm bind-domain ldg1

primary# ldm start-domain ldg1

primary# telnet localhost 5000

{0} ok boot net - install

Install Solaris 10 Update 10 (Solaris 10 08/11)


Verify that all the Solaris services on the guest domain are up and running


guest # svcs -xv

Oracle Solaris Legacy Containers install

The Oracle Solaris Legacy Containers download includes two versions of the product:

- Oracle Solaris Legacy Containers 1.0.1
  - For Oracle Solaris 10 10/08 or later
- Oracle Solaris Legacy Containers 1.0
  - For Oracle Solaris 10 08/07
  - For Oracle Solaris 10 05/08

Both product versions contain identical features. The 1.0.1 product depends on Solaris packages introduced in Solaris 10 10/08. The 1.0 product delivers these packages to pre-10/08 versions of Solaris.

We will use Oracle Solaris Legacy Containers 1.0.1, since our Solaris 10 version is 08/11.

To install the Oracle Solaris Legacy Containers 1.0.1 software:

1. Download the Oracle Solaris Legacy Containers software bundle
from http://www.oracle.com.

2. Unarchive and install the 1.0.1 software package:
guest # unzip solarislegacycontainers-solaris10-sparc.zip
guest # cd solarislegacycontainers/1.0.1/Product
guest # pkgadd -d `pwd` SUNWs8brandk

Starting the migration

On the source system


sol8# su - oracle


Shut down the Oracle database and the ASM instance
sol8$ sqlplus "/as sysdba"

SQL*Plus: Release 10.2.0.5.0 - Production on Sun Aug 26 13:19:48 2012

Copyright (c) 1982, 2010, Oracle. All Rights Reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> shutdown immediate
sol8$ export ORACLE_SID=+ASM


sol8$ sqlplus "/as sysdba"

SQL*Plus: Release 10.2.0.5.0 - Production on Sun Aug 26 13:21:38 2012

Copyright (c) 1982, 2010, Oracle. All Rights Reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> shutdown
ASM diskgroups dismounted
ASM instance shutdown

Stop the listener


sol8 $ lsnrctl stop

LSNRCTL for Solaris: Version 10.2.0.5.0 - Production on 26-AUG-2012 13:23:49

Copyright (c) 1991, 2010, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
The command completed successfully

Creating the archive

sol8 # flarcreate -S -n s8-system /export/home/s8-system.flar

Copy the archive to the target guest domain
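
For example (the host name ldg1 below is just a placeholder for the guest domain), the archive can be copied with scp:

sol8 # scp /export/home/s8-system.flar ldg1:/export/home/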


On the target system

Move and connect the SAN storage to the target system

On the control domain add the SAN storage LUN into the guest domain

primary # ldm add-vdsdev /dev/dsk/c5t40d0s6 oradata@primary-vds0
primary # ldm add-vdisk oradata oradata@primary-vds0 ldg1

On the guest domain verify that you can access the LUN

guest# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
0. c0d0
/virtual-devices@100/channel-devices@200/disk@0
1. c0d2
/virtual-devices@100/channel-devices@200/disk@2


Set up the Oracle Solaris 8 branded zone on the guest domain


The Oracle Solaris 8 branded zone s8-zone  is configured with the zonecfg command.

Here is the output of the zonecfg -z s8-zone info command after the configuration is complete:


guest# zonecfg -z s8-zone info
zonename: s8-zone
zonepath: /zones/s8-zone
brand: solaris8
autoboot: true
bootargs:
pool:
limitpriv: default,proc_priocntl,proc_lock_memory
scheduling-class: FSS
ip-type: exclusive
hostid:
net:
        address not specified
        physical: vnet1
        defrouter not specified
device
        match: /dev/rdsk/c0d2s0
attr:
        name: machine
        type: string
        value: sun4u
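
For reference, a zonecfg session along the following lines would produce a configuration similar to the one above. The SUNWsolaris8 template name is the one delivered with the Solaris 8 Containers (SUNWs8brandk) package, and the device and network names simply mirror the output shown here; adjust them to your environment:

guest# zonecfg -z s8-zone
s8-zone: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:s8-zone> create -t SUNWsolaris8
zonecfg:s8-zone> set zonepath=/zones/s8-zone
zonecfg:s8-zone> set autoboot=true
zonecfg:s8-zone> set limitpriv="default,proc_priocntl,proc_lock_memory"
zonecfg:s8-zone> set scheduling-class=FSS
zonecfg:s8-zone> set ip-type=exclusive
zonecfg:s8-zone> add net
zonecfg:s8-zone:net> set physical=vnet1
zonecfg:s8-zone:net> end
zonecfg:s8-zone> add device
zonecfg:s8-zone:device> set match=/dev/rdsk/c0d2s0
zonecfg:s8-zone:device> end
zonecfg:s8-zone> add attr
zonecfg:s8-zone:attr> set name=machine
zonecfg:s8-zone:attr> set type=string
zonecfg:s8-zone:attr> set value=sun4u
zonecfg:s8-zone:attr> end
zonecfg:s8-zone> commit
zonecfg:s8-zone> exit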

 Install the Solaris 8 zone

guest# zoneadm -z s8-zone install -p -a /export/home/s8-system.flar

Boot the Solaris 8 zone

guest# zoneadm -z s8-zone boot

guest # zlogin -C s8-zone

sol8_zone# su - oracle

Modify the ASM disk ownership 

sol8_zone# chown oracle:dba /dev/rdsk/c0d2s0

Start the listener


sol8_zone$ lsnrctl start

Start the ASM instance

sol8_zone$ export ORACLE_SID=+ASM
sol8_zone$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.5.0 - Production on Sun Aug 26 14:36:44 2012

Copyright (c) 1982, 2010, Oracle. All Rights Reserved.

Connected to an idle instance.

SQL> startup
ASM instance started

Total System Global Area 130023424 bytes
Fixed Size 2050360 bytes
Variable Size 102807240 bytes
ASM Cache 25165824 bytes
ASM diskgroups mounted
SQL> quit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

Start the database

sol8_zone$ export ORACLE_SID=ORA10
sol8_zone$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.5.0 - Production on Sun Aug 26 14:37:13 2012

Copyright (c) 1982, 2010, Oracle. All Rights Reserved.

Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area 1610612736 bytes
Fixed Size 2052448 bytes
Variable Size 385879712 bytes
Database Buffers 1207959552 bytes
Redo Buffers 14721024 bytes
Database mounted.
Database opened.

Second example


In this example we will modify the host name.

guest # zoneadm -z s8-zone install -u -a /net/server/s8_image.flar

Boot the Zone

guest # zoneadm -z s8-zone boot

Configure the zone with a new IP address and a new host name

guest # zlogin -C s8-zone

Modify the ASM disk ownership

sol8_zone # chown oracle:dba /dev/rdsk/c0d2s0

sol8_zone # cd $ORACLE_HOME/bin

Reconfigure the ASM parameters

sol8_zone # ./localconfig delete

Aug 27 05:17:11 s8-zone last message repeated 3 times
Aug 27 05:17:28 s8-zone root: Oracle CSSD being stopped
Stopping CSSD.
Unable to communicate with the CSS daemon.
Shutdown has begun. The daemons should exit soon.
sol8_zone # ./localconfig add

Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'other'..
Operation successful.
Configuration for local CSS has been initialized

sol8_zone # su - oracle

Start the listener, the ASM instance, and the Oracle database (as shown in the first example).

Monday Jun 04, 2012

Oracle Solaris Zones Physical to virtual (P2V)

Introduction
This document describes the process of creating a Solaris 10 image built from a physical system and migrating it into a virtualized operating system environment using the Oracle Solaris 10 Zones Physical-to-Virtual (P2V) capability.
Using an example and various scenarios, this paper describes how to take advantage of the
Oracle Solaris 10 Zones Physical-to-Virtual (P2V) capability together with other Oracle Solaris features: optimizing performance with Solaris 10 resource management, providing advanced storage management with Solaris ZFS, and improving operating system observability with Solaris DTrace.


The most common use for this tool is consolidating existing systems onto virtualization-enabled platforms. In addition, we can use the Physical-to-Virtual (P2V) capability for other tasks, for example backing up physical systems and moving them into a virtualized operating system environment hosted on a Disaster
Recovery (DR) site, or building an Oracle Solaris 10 image repository with various configurations and different software packages in order to reduce provisioning time.

Oracle Solaris Zones
Oracle Solaris Zones is a virtualization and partitioning technology supported on Oracle Sun servers powered by SPARC and Intel processors.
This technology provides an isolated and secure environment for running applications.
A zone is a virtualized operating system environment created within a single instance of the Solaris 10 Operating System.
Each virtual system is called a zone and runs a unique and distinct copy of the Solaris 10 operating system.

Oracle Solaris Zones Physical-to-Virtual (P2V)
Physical-to-Virtual (P2V) is a new feature in Solaris 10 9/10. It provides the ability to build a Solaris 10 image from a physical
system and migrate it into a virtualized operating system environment.
There are three main steps when using this tool:

1. Image creation on the source system; this image includes the operating system and, optionally, the software that we want to include in the image.
2. Preparing the target system by configuring a new zone that will host the new image.
3. Image installation on the target system using the image created in step 1.

The host where the image is built is referred to as the source system, and the host where the
image is installed is referred to as the target system.



Benefits of Oracle Solaris Zones Physical-to-Virtual (P2V)
Here are some benefits of this new feature:


  •  Simple: an easy build process using Oracle Solaris 10 built-in commands.

  •  Robust: based on Oracle Solaris Zones, a robust and well-known virtualization technology.

  •  Flexible: supports migration from V-Series servers to T-Series or M-Series systems. For the latest server information, refer to the Sun Servers web page.

    Prerequisites
    The minimum Solaris version on the target system should be Solaris 10 9/10.
    Refer to the latest Administration Guide for Oracle Solaris 10 for a complete procedure on how to
    download and install Oracle Solaris.



  • NOTE: If the source system used to build the image runs an older version than the target
    system, then during the process the operating system will be upgraded to Solaris 10 9/10
    (update on attach).

    Creating the Image Used to Distribute the Software
    We will create an image on the source machine. We can create the image on the local file system and then transfer it to the target machine,
    or build it on NFS shared storage and mount the NFS file system from the target machine.
    Optionally, before creating the image, we need to complete the installation of the software that we want to include with the Solaris 10 image.
    An image is created by using the flarcreate command:
    Source # flarcreate -S -n s10-system -L cpio /var/tmp/solaris_10_up9.flar
    The command does the following:



  •  -S specifies that we skip the disk space check and do not write archive size data to the archive (faster).

  •  -n specifies the image name.

  •  -L specifies the archive format (for example, cpio).

    Optionally, we can add descriptions to the archive identification section, which can help to identify the archive later.
    Source # flarcreate -S -n s10-system -e "Oracle Solaris with Oracle DB
    10.2.0.4" -a "oracle" -L cpio /var/tmp/solaris_10_up9.flar
    You can see an example of the archive identification section in Appendix A: archive identification section.
    We can compress the flar image using the gzip command or by adding the -c option to the flarcreate command:
    Source # gzip /var/tmp/solaris_10_up9.flar
    An MD5 checksum can be created for the image in order to detect data tampering:
    Source # digest -v -a md5 /var/tmp/solaris_10_up9.flar


    Moving the image into the target system.
    If we created the image on the local file system, we need to transfer the flar archive from the source machine to the target machine.

    Source # scp /var/tmp/solaris_10_up9.flar target:/var/tmp
    Configuring the Zone on the target system
    After copying the software to the target machine, we need to configure a new zone that will host the new image.
    To install the new zone on the target machine, first we need to configure the zone (for the full zone creation options, see the following link: http://docs.oracle.com/cd/E18752_01/html/817-1592/index.html).

    ZFS integration
    A flash archive can be created on a system that is running a UFS or a ZFS root file system.
    NOTE: If you create a Solaris Flash archive of a Solaris 10 system that has a ZFS root, then by
    default, the flar will actually be a ZFS send stream, which can be used to recreate the root pool.
    This image cannot be used to install a zone. You must create the flar with an explicit cpio or pax
    archive when the system has a ZFS root.
    Use the flarcreate command with the -L archiver option, specifying cpio or pax as the
    method to archive the files. (For example, see Step 1 in the previous section).
    Optionally, on the target system you can create the zone root folder on a ZFS file system in
    order to benefit from the ZFS features (clones, snapshots, etc...).

    Target # zpool create zones c2t2d0
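
    As a hedged illustration of those ZFS benefits (the dataset and snapshot names below are made up,
    and a dedicated dataset per zone root is only one possible layout), you could give the zone its own
    dataset and then snapshot or clone it as needed:

    Target # zfs create zones/solaris10-up9-zone
    Target # zfs snapshot zones/solaris10-up9-zone@baseline
    Target # zfs clone zones/solaris10-up9-zone@baseline zones/solaris10-up9-clone
    Target # zfs list -r zones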


    Set the required permissions (700) on the zone root folder:

    Target # chmod 700 /zones
    Target # zonecfg -z solaris10-up9-zone
    solaris10-up9-zone: No such zone configured
    Use 'create' to begin configuring a new zone.
    zonecfg:solaris10-up9-zone> create
    zonecfg:solaris10-up9-zone> set zonepath=/zones
    zonecfg:solaris10-up9-zone> set autoboot=true
    zonecfg:solaris10-up9-zone> add net
    zonecfg:solaris10-up9-zone:net> set address=192.168.0.1
    zonecfg:solaris10-up9-zone:net> set physical=nxge0
    zonecfg:solaris10-up9-zone:net> end
    zonecfg:solaris10-up9-zone> verify
    zonecfg:solaris10-up9-zone> commit
    zonecfg:solaris10-up9-zone> exit

    Installing the Zone on the target system using the image
    Install the configured zone solaris10-up9-zone by using the zoneadm command with the install -a
    option and the path to the archive.
    The following example shows how to install the image and sys-unconfig the zone:
    Target # zoneadm -z solaris10-up9-zone install -u -a
    /var/tmp/solaris_10_up9.flar
    Log File: /var/tmp/solaris10-up9-zone.install_log.AJaGve
    Installing: This may take several minutes...
    The following example shows how we can preserve the system identity:
    Target # zoneadm -z solaris10-up9-zone install -p -a /var/tmp/solaris_10_up9.flar


    Resource management


    Some applications are sensitive to the number of CPUs on the target Zone. You need to
    match the number of CPUs on the Zone using the zonecfg command:
    zonecfg:solaris10-up9-zone> add dedicated-cpu
    zonecfg:solaris10-up9-zone:dedicated-cpu> set ncpus=16
    zonecfg:solaris10-up9-zone:dedicated-cpu> end


    DTrace integration
    Some applications might need to be analyzed using DTrace on the target zone; you can
    add DTrace support to the zone using the zonecfg command:
    zonecfg:solaris10-up9-zone> set limitpriv="default,dtrace_proc,dtrace_user"


    Exclusive IP stack

    An Oracle Solaris Container running in Oracle Solaris 10 can have a
    shared IP stack with the global zone, or it can have an exclusive IP
    stack (which was released in Oracle Solaris 10 8/07). An exclusive IP
    stack provides a complete, tunable, manageable and independent
    networking stack to each zone. A zone with an exclusive IP stack can
    configure Scalable TCP (STCP), IP routing, IP multipathing, or IPsec.
    For an example of how to configure an Oracle Solaris zone with an
    exclusive IP stack, see the following example

    zonecfg:solaris10-up9-zone> set ip-type=exclusive
    zonecfg:solaris10-up9-zone> add net
    zonecfg:solaris10-up9-zone:net> set physical=nxge0
    zonecfg:solaris10-up9-zone:net> end

    When the installation completes, use the zoneadm list -i -v options to list the installed
    zones and verify the status.
    Target # zoneadm list -i -v
    Verify that the new zone's status is installed:
    ID NAME STATUS PATH BRAND IP
    0 global running / native shared
    - solaris10-up9-zone installed /zones native shared
    Now boot the Zone
    Target # zoneadm -z solaris10-up9-zone boot
    We need to log in to the zone in order to complete the zone setup, or insert a sysidcfg file before
    booting the zone for the first time (see the example sysidcfg file in Appendix B: sysidcfg file
    section).
    Target # zlogin -C solaris10-up9-zone

    Troubleshooting
    If an installation fails, review the log file. On success, the log file is in /var/log inside the zone. On
    failure, the log file is in /var/tmp in the global zone.
    If a zone installation is interrupted or fails, the zone is left in the incomplete state. Use uninstall -F
    to reset the zone to the configured state.
    Target # zoneadm -z solaris10-up9-zone uninstall -F
    Target # zonecfg -z solaris10-up9-zone delete -F
    Conclusion
    The Oracle Solaris Zones P2V tool provides the flexibility to build pre-configured
    images with different software configurations for faster deployment and server consolidation.
    In this document, I demonstrated how to build and install images and how to integrate the images with other Oracle Solaris features such as ZFS and DTrace.

    Appendix A: archive identification section
    We can use the head -n 20 /var/tmp/solaris_10_up9.flar command in order to access the
    identification section that contains the detailed description.
    Target # head -n 20 /var/tmp/solaris_10_up9.flar
    FlAsH-aRcHiVe-2.0
    section_begin=identification
    archive_id=e4469ee97c3f30699d608b20a36011be
    files_archived_method=cpio
    creation_date=20100901160827
    creation_master=mdet5140-1
    content_name=s10-system
    creation_node=mdet5140-1
    creation_hardware_class=sun4v
    creation_platform=SUNW,T5140
    creation_processor=sparc
    creation_release=5.10
    creation_os_name=SunOS
    creation_os_version=Generic_142909-16
    files_compressed_method=none
    content_architectures=sun4v
    type=FULL
    section_end=identification
    section_begin=predeployment
    begin 755 predeployment.cpio.Z

    Appendix B: sysidcfg file section
    Target # cat sysidcfg
    system_locale=C
    timezone=US/Pacific
    terminal=xterms
    security_policy=NONE
    root_password=HsABA7Dt/0sXX
    timeserver=localhost
    name_service=NONE
    network_interface=primary {hostname=solaris10-up9-zone
    netmask=255.255.255.0
    protocol_ipv6=no
    default_route=192.168.0.1}
    name_service=NONE
    nfs4_domain=dynamic

    We need to copy this file before booting the zone
    Target # cp sysidcfg /zones/solaris10-up9-zone/root/etc/


Wednesday Mar 23, 2011

    How Traffix Systems Optimized Its LTE Diameter Load Balancing and Routing Solutions Using Oracle Hardware and Software

    Please read the following paper "How Traffix Systems Optimized Its LTE Diameter Load Balancing and Routing Solutions Using Oracle Hardware and Software"


    This paper provides technical information on how Traffix Systems,
    the leading Diameter protocol solutions vendor, optimized its
    Long Term Evolution (LTE) Traffix Diameter Load Balancer and Traffix Diameter Router to benefit from Oracle's software and
    hardware.


    This paper also includes brief technical descriptions of how specific
    Oracle Solaris features and capabilities are implemented in the Traffix
    solutions.





    About

    This blog covers cloud computing, big data and virtualization technologies
