Wednesday Oct 01, 2014

Oracle Software in Silicon Cloud

I'm happy to announce that the Oracle Software in Silicon Cloud is available today.

Oracle’s revolutionary Software in Silicon technology extends the design philosophy of engineered systems to the chip. Co-engineered by Oracle’s software and microprocessor engineers, Software in Silicon implements accelerators directly into the processor to deliver a rich feature-set that enables quick development of databases and applications that are more reliable and run faster. Now, with the Oracle Software in Silicon Cloud, developers can have a secure environment to test and improve their software and exploit the unique advantages of Oracle’s Software in Silicon technology.

Oracle Software in Silicon Cloud provides developers a ready-to-run virtual machine environment to install, test, and improve their code in a robust and secure cloud platform powered by the revolutionary Software in Silicon technology in Oracle's forthcoming SPARC M7 processor running Oracle Solaris.

This hardware-enabled functionality can be used to detect and prevent data corruption and security violations. Test workloads have shown it to be 40x faster on average than software-only tools, with some tests showing more than an 80x speedup. This performance advantage means the capability can eventually be left always-on in production rather than limited to test environments.

Oracle Software in Silicon Cloud users will have access to the latest Oracle Solaris Studio release that includes tools to detect numerous types of memory corruption errors and provide detailed diagnostic information to aid developers in quickly improving code reliability.
Code examples, demonstrations, and documentation will help users more quickly exploit the unique advantage of running applications with Software in Silicon technology.

Software in Silicon features implemented in Oracle’s forthcoming SPARC M7 processor include:

Application Data Integrity is the first-ever end-to-end implementation of memory-access validation in hardware. Designed to help prevent security bugs such as Heartbleed from putting systems at risk, it enables hardware monitoring of memory requests by software processes in real time, and it stops unauthorized access to memory whether that access is due to a programming error or a malicious attempt to exploit buffer overruns. It also helps accelerate code development and helps ensure software quality, reliability, and security.

Query Acceleration increases in-memory database query processing performance by operating on data streaming directly from memory via extremely high-bandwidth interfaces, with speeds up to 160 GB/s, resulting in tremendous performance gains. Query acceleration is implemented in multiple engines in the SPARC M7 processor.

Decompression units in the Software in Silicon acceleration engines significantly increase usable memory capacity. The units on a single processor run data decompression with performance that is equivalent to 16 decompression PCI cards or 60 CPU cores. This capability allows compressed databases to be stored in-memory while being accessed and manipulated at full performance.

For more information about the chip, see this.
Read the full press release here.
For recent posts about the Oracle Software in Silicon Cloud, see "Securing a Cloud-Based Data Center" and "Building a Cloud-Based Data Center".

Tuesday Aug 19, 2014

Securing a Cloud-Based Data Center

No doubt, with all the media reports about stolen databases and private information, a major concern when committing to a public or private cloud must be preventing unauthorized access of data and applications.
In this article, we discuss the security features of Oracle Solaris 11 that provide a bullet-proof cloud environment.
As an example, we show how the Oracle Solaris Remote Lab implementation utilizes these features to provide a high level of security for its users.

Note: This is the second article in a series on cloud building with Oracle Solaris 11. See Part 1 here. 

When we build a cloud, the following aspects related to the security of the data and applications in the cloud
become a concern:

• Sensitive data must be protected from unauthorized access while residing on storage devices, during
transmission between servers and clients, and when it is used by applications.

• When a project is completed, all copies of sensitive data must be securely deleted and the original
data must be kept permanently secure.

• Communications between users and the cloud must be protected to prevent exposure of sensitive information
to "man-in-the-middle" attacks.

• Limiting the operating system’s exposure protects against malicious attacks and penetration by
unauthorized users or automated “bots” and “rootkits” designed to gain privileged access.

• Strong authentication and authorization procedures further protect the operating system from tampering.

• Denial-of-service attacks, whether they are started intentionally by hackers or accidentally by other cloud users, must be quickly detected and deflected, and the service must be restored.

In addition to the security features in the operating system, deep auditing provides a trail of actions that can identify violations, issues, and attempts to penetrate the security of the operating system.

Combined, these threats and risks reinforce the need for enterprise-grade security solutions that are specifically designed to protect cloud environments. Oracle Solaris 11 was designed to address them.

This article explains how.

Tuesday Jul 08, 2014

Network Virtualization High Availability

This article describes how to add high availability to the network infrastructure of a multitenant cloud environment using the DLMP aggregation technology introduced in Oracle Solaris 11.1.
It is Part 1 of a two-part series. In Part 1, we cover how to implement network HA using datalink multipathing (DLMP) aggregation.

In Part 2 of this series, we will explore how to secure the network and perform typical network management operations for an environment that uses DLMP aggregations.

Once we virtualize a network cloud infrastructure using Oracle Solaris 11 network virtualization technologies—such as virtual network interface cards (VNICs), virtual switches, load balancers, firewalls, and routers—the network itself becomes an increasingly critical component of the cloud infrastructure.

In order to add resiliency to the network infrastructure layer, we need to implement an HA solution at this layer, such as we would do for any other mission-critical component of the data center.

A DLMP aggregation allows us to deliver resiliency to the network infrastructure by providing transparent failover and increasing throughput.
The objects that are involved in the process are VNICs, irrespective of whether they are configured inside Oracle Solaris Zones or in logical domains under Oracle VM Server for SPARC.

Using this technology, you can add HA to your current network infrastructure without the cross-organizational complexity that might often be associated with this kind of solution.

The benefits of this technology are clear and they take into account the limitations of existing technologies:

Since the IEEE 802.3ad trunking standard does not cover building a trunk across multiple network switches, the network switch becomes a single point of failure (SPOF). Some vendors have added this capability to their products, but these implementations are vendor-specific and therefore prevent combining switches from multiple vendors when building a multi-switch trunk. Because the resilience is provided by Oracle Solaris, a DLMP aggregation can be implemented across two different network switches, thus eliminating the network switch as a SPOF. As an additional benefit, because the aggregation is implemented at the operating system layer, there is no need to set anything up on the switch.

Building a network HA solution that is based on previously available IP network multipathing (IPMP) can be a complex task. With IPMP, HA is implemented at Layer 3 (the IP layer), which needs to be configured in the global zones and within each zone, and requires multiple VNICs to be assigned to each zone or virtual machine (VM). This involves more configuration steps, requires spare IP addresses out of the address pool, and generally can be an error-prone process. In contrast, the DLMP aggregation setup is much simpler since all the configuration takes place at Layer 2 in the global zone; therefore, every non-global zone can directly benefit from the underlying technology without the need for additional configuration. In addition, every new Oracle Solaris Zone that is provisioned automatically benefits from this capability. Moreover, we can create an aggregation over four 10 Gb/sec network interfaces; combining all the interfaces together, we can achieve up to 40 Gb/sec of network bandwidth.
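To make this concrete, here is a minimal sketch of such a setup from the global zone (the datalink names net0 and net1 and the aggregation name aggr0 are illustrative assumptions):

# dladm create-aggr -m dlmp -l net0 -l net1 aggr0
# dladm create-vnic -l aggr0 vnic0

Each physical link can be cabled to a different switch, and every VNIC (and therefore every zone) created on top of aggr0 transparently inherits the failover capability.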

DLMP can provide additional benefits when employed together with other network virtualization technologies implemented in the Oracle Solaris 11 operating system, such as link protection and the ability to configure a bandwidth limit on a VNIC or a traffic flow to meet service-level agreements (SLAs). Combining these technologies provides a uniquely compelling network solution in terms of HA, security, and performance in a cloud environment.
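For example, assuming a VNIC named vnic0 created over the aggregation (both the VNIC name and the 2G limit are illustrative assumptions), a bandwidth cap and link protection can be combined on the same datalink:

# dladm set-linkprop -p maxbw=2G vnic0
# dladm set-linkprop -p protection=mac-nospoof vnic0
# dladm show-linkprop -p maxbw,protection vnic0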

Monday May 19, 2014

How to Set Up a Hadoop 2.2 Cluster From the Unified Archive

Tech Article: How to Set Up a Hadoop 2.2 Cluster From the Unified Archive.
Learn how to build an Apache Hadoop 2.2 (YARN) cluster on a single system using Oracle Solaris Zones, the ZFS file system, and the new Unified Archive capabilities of Oracle Solaris 11.2.
Also see how to configure manual or automatic failover, and how to use a Unified Archive to create a "cloud in a box" and deploy bare-metal systems.

The article starts with a brief overview of Hadoop and follows with an example of setting up a Hadoop cluster with two NameNodes, a Resource Manager, a History Server, and three DataNodes. As a prerequisite, you should have a basic understanding of Oracle Solaris Zones and network administration.

Table of Contents:
About Hadoop and Oracle Solaris Zones
Download and Install Hadoop
Configure the Network Time Protocol
Configure the Active NameNode
Set Up the Standby NameNode and the ResourceManager
Set Up the DataNode Zones
Format the Hadoop File System
Start the Hadoop Cluster
About Hadoop High Availability
Configure Manual Failover
About Apache ZooKeeper and Automatic Failover
Configure Automatic Failover
Create a "Cloud in a Box" Using Unified Archive
Deploy a Bare-Metal System from a Unified Archive
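The last two steps rely on the Unified Archive tooling introduced in Oracle Solaris 11.2. As a rough sketch (the archive path is an illustrative assumption), a clone archive of the whole configured system, including its zones, can be created with archiveadm, and bootable media for bare-metal deployment can be generated from it:

# archiveadm create /var/tmp/hadoop-cloud.uar
# archiveadm info -v /var/tmp/hadoop-cloud.uar
# archiveadm create-media -o /var/tmp/hadoop-cloud.iso /var/tmp/hadoop-cloud.uar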

Tuesday Dec 17, 2013

Performance Analysis in a Multitenant Cloud Environment Using Hadoop Cluster and Oracle Solaris 11

Oracle Solaris 11 comes with a new set of commands that provide the ability to conduct
performance analysis in a virtualized multitenant cloud environment. Performance analysis in a
virtualized multitenant cloud environment with different users running various workloads can be a
challenging task for the following reasons:

Each virtualization technology adds an abstraction layer to enable better manageability. Although this makes it much simpler to manage the virtualized resources, it is very difficult to find which physical system resources are overloaded.

Each Oracle Solaris Zone can have a different workload; it can be disk I/O, network I/O, CPU, memory, or a combination of these.

In addition, a single Oracle Solaris Zone can overload the entire system's resources. It is very difficult to observe the environment; you need to be able to monitor it from the top level, seeing all the virtual instances (non-global zones) in real time, with the ability to drill down to specific resources.

The benefits of using Oracle Solaris 11 for virtualized performance analysis are:

Observability. The Oracle Solaris global zone is a fully functioning operating system, not a proprietary hypervisor or a minimized operating system that lacks the ability to observe the entire environment, including the host and the VMs, at the same time. The global zone can see all the non-global zones' performance metrics.

Integration. All the subsystems are built inside the same operating system. For example, the ZFS file system and the Oracle Solaris Zones virtualization technology are integrated together. This is preferable to mixing many vendors’ technology, which causes a lack of integration between the different operating system (OS) subsystems and makes it very difficult to analyze all the different OS subsystems at the same time.

Virtualization awareness. The built-in Oracle Solaris commands are virtualization-aware, and they can provide performance statistics for the entire system (the Oracle Solaris global zone). In addition to providing the ability to drill down into every resource (Oracle Solaris non-global zones), these commands provide accurate results during the performance analysis process.

In this article, we explore four examples that show how to monitor a virtualized environment with Oracle Solaris Zones using the built-in Oracle Solaris 11 tools. These tools provide the ability to drill down to specific resources, such as CPU, memory, disk, and network. In addition, they can print statistics per Oracle Solaris Zone and provide information on the running applications.
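As a small taste of these tools (the zone name dbzone and the 5-second interval are illustrative assumptions), the virtualization-aware commands can be run from the global zone:

# zonestat 5
# zonestat -z dbzone 5
# dlstat show-link -i 5

The first command summarizes per-zone CPU and memory utilization every 5 seconds, the second drills down into a single suspect zone, and the third reports per-datalink network statistics, including VNICs.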

Read the article: Performance Analysis in a Multitenant Cloud Environment

Friday Jun 14, 2013

Public Cloud Security: Anti-Spoofing Protection

This past week (9-Jun-2013) Oracle ISV Engineering participated in the IGT cloud meetup, the largest cloud community in Israel with 4,000 registered members.
During the meetup, ISV Engineering delivered two presentations:

Introduction to Oracle Cloud Infrastructure, presented by Frederic Pariente
Use case: Cloud Security Design and Implementation, presented by me

In addition, there was a partner presentation from ECI Telecom:
Using Oracle Solaris 11 Technologies for Building ECI R&D and Product Private Clouds, presented by Mark Markman from ECI Telecom
The Solaris 11 feature that received the most attention from the audience was the new Solaris 11 network virtualization technology.
Solaris 11 network virtualization allows us to build any physical network topology inside the Solaris operating system, including virtual network cards (VNICs), virtual switches (vSwitches), and more sophisticated network components (e.g., load balancers, routers, and firewalls).
The benefit of using this technology is reduced infrastructure cost, since there is no need to invest in superfluous network equipment. In addition, infrastructure deployment is much faster, since all the network building blocks are implemented in software rather than hardware.
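As a small sketch of how such a topology is built purely in software (the etherstub and VNIC names are illustrative assumptions), an internal virtual switch with two VNICs attached to it can be created with dladm:

# dladm create-etherstub stub0
# dladm create-vnic -l stub0 vnic1
# dladm create-vnic -l stub0 vnic2
# dladm show-vnic

The etherstub acts as a virtual switch; each VNIC can later be assigned to a different Solaris zone, giving every tenant its own isolated network segment.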
One of the key features of this network virtualization technology is Data Link Protection. With this capability we can provide the flexibility that our partners need in a cloud environment and allow them root access from inside a Solaris zone, while disabling their ability to mount spoofing attacks by sending outgoing packets with a different source IP or MAC address, or packets that are not IPv4, IPv6, or ARP.

The following example demonstrates how to enable this feature.
Create the VNIC (in a later step, we will associate this VNIC with the Solaris zone):

# dladm create-vnic -l net0 vnic0

Set up the Solaris zone:
# zonecfg -z secure-zone
Use 'create' to begin configuring a new zone:
zonecfg:secure-zone> create
create: Using system default template 'SYSdefault'
zonecfg:secure-zone> set zonepath=/zones/secure-zone
zonecfg:secure-zone> add net
zonecfg:secure-zone:net> set physical=vnic0
zonecfg:secure-zone:net> end
zonecfg:secure-zone> verify
zonecfg:secure-zone> commit
zonecfg:secure-zone> exit

Install the zone:
# zoneadm -z secure-zone install

Boot the zone:
# zoneadm -z secure-zone boot

Log in to the zone:
# zlogin -C secure-zone

NOTE - During the zone setup select the vnic0 network interface and assign the IP address.

From the global zone, enable link protection on vnic0.

We can set different protection modes: ip-nospoof, dhcp-nospoof, mac-nospoof, and restricted.
ip-nospoof: any outgoing IP, ARP, or NDP packet must have an address field that matches either a DHCP-configured IP address or one of the addresses listed in the allowed-ips link property.
mac-nospoof: prevents the root user from changing the zone's MAC address. An outbound packet's source MAC address must match the datalink's configured MAC address.
dhcp-nospoof: prevents Client ID/DUID spoofing for DHCP.
restricted: only allows IPv4, IPv6, and ARP protocols. Using this protection type prevents the link from generating potentially harmful L2 control frames.

# dladm set-linkprop -p protection=mac-nospoof,restricted,ip-nospoof vnic0

Specify the IP address as the value of the allowed-ips property for the vnic0 link:

# dladm set-linkprop -p allowed-ips= vnic0

Verify the link protection property values:

# dladm show-linkprop -p protection,allowed-ips vnic0
LINK         PROPERTY        PERM VALUE          DEFAULT        POSSIBLE
vnic0        protection      rw   mac-nospoof,   --             mac-nospoof,
                                  restricted,                   restricted,
                                  ip-nospoof                    ip-nospoof,
vnic0        allowed-ips     rw   --             --             --

We can see that the address we specified is set as the allowed IP address.

Log in to the zone:

# zlogin secure-zone

After we log in to the zone, let's try to change the zone's IP address:

root@secure-zone:~# ifconfig vnic0
ifconfig:could not create address: Permission denied

As we can see, we can't change the zone's IP address!

Optional - disable the link protection from the global zone:

# dladm reset-linkprop -p protection,allowed-ips vnic0

NOTE - we don't need to reboot the zone in order to disable this property.

Verify the change:

# dladm show-linkprop -p protection,allowed-ips vnic0
LINK         PROPERTY        PERM VALUE          DEFAULT        POSSIBLE
vnic0        protection      rw   --             --             mac-nospoof,
vnic0        allowed-ips     rw   --             --             --

As we can see, we no longer have a restriction on the allowed-ips property.

Conclusion

In this blog I demonstrated how we can leverage Solaris 11 Data Link Protection in order to prevent spoofing attacks.

Monday May 20, 2013

How To Protect Public Cloud Using Solaris 11 Technologies

When we meet with our partners, we often ask them: "What are your main security challenges for public cloud infrastructure? What worries you in this regard?"
This is what we've gathered from our partners regarding the security challenges:

1. Protect data at rest, in transit, and in use, using encryption
2. Prevent denial-of-service attacks against their infrastructure
3. Segregate network traffic between different cloud users
4. Disable hostile code (e.g., 'rootkit' attacks)
5. Minimize the operating system attack surface
6. Securely delete data once a project is complete
7. Enable strong authorization and authentication for nonsecure protocols

Based on these guidelines, we began to design our Oracle Developer Cloud. Our vision was to leverage Solaris 11 technologies in order to meet those security requirements.

First - Our partners would like to encrypt everything, from the disk up through the layers to the application, without the performance overhead usually associated with this type of technology.
The SPARC T4 (and lately the SPARC T5) integrated cryptographic accelerator allows us to encrypt data using the ZFS encryption capability.
We can encrypt all the network traffic using SSL, from the client connection to the cloud's main portal using Secure Global Desktop (SGD) technology, and also encrypt the network traffic between the application tier and the database tier. In addition, we can protect our database tables using Oracle Transparent Data Encryption (TDE).
During our performance tests, we saw a very low performance impact (less than 5%) when we enabled those encryption technologies.
The following example shows how we created an encrypted file system:

# zfs create -o encryption=on rpool/zfs_file_system

Enter passphrase for 'rpool/zfs_file_system':
Enter again:

NOTE - In the above example, we used a passphrase that is requested interactively, but we can instead use SSL or a key repository.

Second - How can we mitigate denial-of-service attacks?
The new Solaris 11 network virtualization technology allows us to virtualize our network by splitting a physical network card into multiple virtual network 'cards'. In addition, it provides the capability to set up flows, a sophisticated quality-of-service mechanism.
Flows allow us to limit the network bandwidth for a specific network port on a specific network interface.

In the following example, we limit SSL traffic to 100 Mb/sec on the vnic0 network interface:

# dladm create-vnic -l net0 vnic0
# flowadm add-flow -l vnic0 -a transport=TCP,local_port=443 https-flow
# flowadm set-flowprop -p maxbw=100M https-flow

During a denial-of-service (DoS) attack against this web server, we can minimize the impact on the rest of the infrastructure.
Third - How can we isolate network traffic between different tenants of the public cloud?
The new Solaris 11 network technology allows us to segregate the network traffic at multiple layers.

For example, we can separate network traffic at Layer 2 using VLANs:

# dladm create-vnic -l net0  -v 2 vnic1

We can also implement firewall rules for Layer 3 separation using the Solaris 11 built-in firewall software.
For an example of the Solaris 11 firewall, see
In addition to the firewall software, Solaris 11 has built-in load balancer and routing software. In a cloud-based environment, this means that new functionality can be added promptly, since we don't need extra hardware in order to implement those functions.

Fourth - Rootkits have become a serious threat, allowing the insertion of hostile code using custom kernel modules.
The Solaris Zones technology prevents loading or unloading kernel modules (since local zones lack the sys_config privilege).
This way we can limit the attack surface and prevent this type of attack.

In the following example, we can see that even the root user is unable to load a custom kernel module inside a Solaris zone:

# ppriv -De modload -p /tmp/systrace

modload[21174]: missing privilege "ALL" (euid = 0, syscall = 152) needed at modctl+0x52
Insufficient privileges to load a module

Fifth - The Solaris Immutable Zones technology allows us to minimize the operating system attack surface,
for example, by disabling the ability to install new IPS packages or to modify file systems like /etc.
We can set up a Solaris immutable zone using the zonecfg command:

# zonecfg -z secure-zone
Use 'create' to begin configuring a new zone.
zonecfg:secure-zone> create
create: Using system default template 'SYSdefault'
zonecfg:secure-zone> set zonepath=/zones/secure-zone
zonecfg:secure-zone> set file-mac-profile=fixed-configuration
zonecfg:secure-zone> commit
zonecfg:secure-zone> exit

# zoneadm -z secure-zone install

We can combine ZFS encryption and immutable zones; for more examples, see:

Sixth - The main challenge in building a secure Big Data solution is the lack of built-in security mechanisms for authorization and authentication.
The integrated Solaris Kerberos allows us to enable strong authorization and authentication for distributed systems that are not secure by default, like Apache Hadoop.

The following example demonstrates how easy it is to install and set up a Kerberos infrastructure on Solaris 11:

# pkg install pkg://solaris/system/security/kerberos-5
# kdcmgr -a kws/admin -r EXAMPLE.COM create master

Finally - Our partners want assurance that when a project is finished, all of its data is erased, with no ability to recover that data by reading the disk blocks directly and bypassing the file system layer.
The ZFS assured-delete feature allows us to implement this kind of secure deletion.
The following example shows how we change the ZFS wrapping key to random data (the output of /dev/random); we then unmount the file system and finally destroy it:

# zfs key -c -o  keysource=raw,file:///dev/random rpool/zfs_file_system
# zfs key -u rpool/zfs_file_system
# zfs destroy rpool/zfs_file_system

In this blog entry, I covered how we can leverage the SPARC T4/T5 and Solaris 11 features in order to build a secure cloud infrastructure. Those technologies allow us to build highly protected environments without the need to invest extra budget in special hardware. They also allow us to protect our data and network traffic from various threats.
If you would like to hear more about those technologies, please join us at the next IGT cloud meetup.

Wednesday Feb 06, 2013

Building your developer cloud using Solaris 11

Solaris 11 combines all the good stuff that Solaris 10 has in terms of enterprise scalability, performance, and security, and adds new cloud capabilities.

During the last year I had the opportunity to work on one of the most challenging engineering projects at Oracle: building a developer cloud for our OPN Gold members in order to qualify their applications on Solaris 11.

This cloud platform provides an intuitive user interface for VM provisioning, letting users select VM images pre-installed with Oracle Database 11g Release 2 or Oracle WebLogic Server 12c, in addition to simple file upload and file download mechanisms.

We used the following Solaris 11 cloud technologies to build this platform:

Oracle Solaris 11 Network Virtualization
Oracle Solaris Zones
ZFS encryption and cloning capability
NFS server inside a Solaris 11 zone

You can find the technology building blocks slide deck here.

For more information about this unique cloud offering see .

Tuesday Apr 21, 2009

IGT Cloud Computing WG Meeting

Yesterday Sun hosted the Israeli Association of Grid Technology (IGT) event "IGT Cloud Computing WG Meeting" at the Sun office in Herzeliya. During the event, Nati Shalom (CTO, GigaSpaces), Moshe Kaplan (CEO, RockeTier), and Haim Yadid (performance expert, ScalableJ) presented various cloud computing technologies. There were 50 attendees from a wide breadth of technology firms.

For more information regarding using Sun's cloud see .

Meeting agenda:

Auto-Scaling Your Existing Web Application
Nati Shalom, CTO, GigaSpaces


This session will cover how to take a standard JEE web application and scale it out or down dynamically, without changes to the application code. Seeing as most web applications are over-provisioned to meet infrequent peak loads, this is a dramatic change, because it enables growing your application as needed, when needed, without paying for unutilized resources. Nati will discuss the challenges involved in dynamic scaling, such as ensuring the integrity and consistency of the application, how to keep the load balancer in sync with servers' changing locations, and how to maintain affinity and high availability of session information with the load balancer. If time permits, Nati will show a live demo of a Web 2.0 app scaling dynamically on the Amazon cloud.


How can your very large databases work in the cloud computing world?
Moshe Kaplan, RockeTier, a performance expert and scale-out architect
Cloud computing is famous for its flexibility, dynamic nature, and ability to grow infinitely. However, infinite growth means very large databases with billions of records in them. This leads us to a paradox: "How can weak servers support very large databases, which usually require several CPUs and dedicated hardware?"
The Internet industry proved it can be done. These days many of the Internet giants, processing billions of events every day, are based on cloud computing architectures such as sharding. What is sharding? What kinds of sharding can you implement? What are the best practices?

Utilizing the cloud for Performance Validation
Haim Yadid, Performance Expert, ScalableJ

Creating a loaded environment is crucial for software performance validation. Setting up such a simulated environment usually requires a great deal of hardware, which is then left unused during most of the development cycle. In this short session I will suggest utilizing cloud computing for performance validation. I will present a case study where a loaded environment used 12 machines on AWS for the duration of the test. This approach gives much more flexibility and reduces TCO dramatically. We will discuss the limitations of this approach and suggest means to address them.


Wednesday Mar 04, 2009

Technical overview: GlassFish 3.0 on the Amazon cloud

The integrated GigaSpaces GlassFish solution and its components are captured in the following diagram:


SLA-driven deployment environment:

The SLA-driven deployment environment is responsible for hosting all services in the network. It basically does matchmaking between the application requirements and the availability of resources over the network. It is comprised of the following components:

    • Grid Service Manager (GSM) – responsible for managing the application lifecycle and deployment

    • Grid Service Container (GSC) – a lightweight container which is essentially a wrapper on top of the Java process that exposes the JVM to the GSM and provides a means to deploy and undeploy services dynamically.

    • Processing Unit (PU) – represents the application deployment unit. A Processing Unit is essentially an extension of the Spring application context that packages specific application components in a single package and uses dependency injection to mesh these components together. The Processing Unit is an atomic deployment artifact, and its composition is determined by the scaling and failover granularity of a given application. It is, therefore, the unit of scale and failover. There are a number of pre-defined Processing Unit types:

      • Web Processing Unit – responsible for managing Web Container instances and enabling them to run within an SLA-driven container environment. With a Web Processing Unit, one can deploy the Web Container as a group of services and apply SLAs or QoS semantics such as one-per-vm, one-per-machine, etc. In other words, one can easily use the Processing Unit SLA to determine how web containers are provisioned on the network. In our specific case, most of the GlassFish v3 Prelude integration takes place at this level.

      • Data Grid Processing Unit – a processing unit that wraps the GigaSpaces space instances. By wrapping the space instances, it adds the SLA capabilities available with each processing unit. One common SLA is to ensure that primary instances will not run on the same machine as the backup instances. It also determines the deployment topology (partitioned, replicated) as well as the scaling policy. The data grid includes another instance, not shown in the above diagram, called the Mirror Service. The Mirror Service is responsible for making sure that all updates made on the Data Grid are passed reliably to the underlying database.


  • Load Balancer Agent – responsible for listening for web-container availability, adding those instances to the Load Balancer list when a new container is added and removing them when a container is removed. The Load Balancer Agent is currently configured to work with the Apache Load Balancer but can easily be set up to work with any external Load Balancer.

How it works:

The following section provides a high-level description of how all the above components work together to provide high performance and scaling.


  • Deployment - The deployment of the application is done through the GigaSpaces processing-unit deployment command. Assigning a specific SLA as part of the deployment lets the GSM know how we wish to distribute the web instances over the network. For example, one could specify in the SLA that there should be only one instance per machine and define the minimum number of instances that need to be running. If needed, one can add specific system requirements such as JVM version, OS type, etc. to the SLA. The deployment command points to a specific web application archive (WAR). The WAR file needs to include a configuration attribute in its META-INF configuration that instructs the deployer tool to use GlassFish v3 Prelude as the web container for this specific web application. Upon deployment, the GlassFish processing unit will be started on the available GSC containers that match the SLA definitions. The GSC will assign a specific port to that container instance. When GlassFish starts, it will load the WAR file automatically and start serving HTTP requests to that instance of the web application.

  • Connecting to the Load Balancer - Auto-scaling - A load balancer agent runs with each instance of the GSM. It listens for the availability of new web containers and ensures that the available containers join the load balancer by continuously updating the load-balancer configuration whenever such a change happens. This happens automatically through the GigaSpaces discovery protocol and does not require any human intervention.

  • Handling failure - Self-healing - If one of the web containers fails, the GSM automatically detects the failure and starts a new web container on one of the available GSC containers, if one exists. If there are not enough resources available, the GSM waits until such a resource becomes available. In cloud environments, the GSM initiates a new machine instance in such an event by calling the proper service on the cloud infrastructure.

  • Session replication - The HttpSession can be automatically backed up by the GigaSpaces In-Memory Data Grid (IMDG). In this case user applications do not need to change their code. When user data is stored in the HttpSession, that data gets stored in the underlying IMDG; when the HTTP request completes, the data is flushed to the shared data-grid servers.

  • Scaling the database tier - Beyond session-state caching, the web application can get a reference to the GigaSpaces IMDG and use it to store data in memory in order to reduce contention on the database. The GigaSpaces data grid automatically synchronizes updates with the database. For maximum performance, the update to the database is in most cases done asynchronously (Write-Behind). A built-in Hibernate plug-in handles the mapping between the in-memory data and the relational data model. You can read more on how this model handles failure as well as consistency and aggregated queries here.
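The write-behind pattern used in the session-replication and database-tier steps above can be sketched in a few lines of plain Java. This is a toy model, not the GigaSpaces or Hibernate plug-in API: the "grid" and "database" are simple in-process maps, and the class and method names are invented for illustration. The point is the ordering: writes complete against memory immediately, and a background mirror task persists them afterwards.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Illustrative write-behind sketch: writes hit the in-memory grid first
// and are mirrored to the backing store asynchronously.
public class WriteBehindSketch {
    private final Map<String, String> grid = new ConcurrentHashMap<>();     // in-memory data grid
    private final Map<String, String> database = new ConcurrentHashMap<>(); // stand-in for the RDBMS
    private final ExecutorService mirror = Executors.newSingleThreadExecutor();

    // The caller returns as soon as the in-memory write is done;
    // the mirror thread persists the value later (write-behind).
    public void write(String key, String value) {
        grid.put(key, value);
        mirror.submit(() -> database.put(key, grid.get(key)));
    }

    public String readFromGrid(String key)     { return grid.get(key); }
    public String readFromDatabase(String key) { return database.get(key); }

    // Drain the mirror queue so all pending updates reach the database.
    public void shutdown() {
        mirror.shutdown();
        try {
            mirror.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) {
        WriteBehindSketch store = new WriteBehindSketch();
        store.write("stock:ORCL", "41.20");
        System.out.println("grid read: " + store.readFromGrid("stock:ORCL")); // available immediately
        store.shutdown();
        System.out.println("db read:   " + store.readFromDatabase("stock:ORCL"));
    }
}
```

In the real system the Mirror Service plays the role of the background task, batching updates and using the Hibernate plug-in to map them onto the relational schema.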

Tuesday Mar 03, 2009

GlassFish 3.0 on Amazon cloud

Here is how you can run the demo of GlassFish 3.0 on Amazon cloud. 

Where should I start? The best way to get started is to run a demo application and see for yourself how this integration works. To make it even simpler, we offer the demo on our new cloud offering. This lets you experience, in one click, a full production-ready environment that includes full clustering, dynamic scaling, full high availability, and session replication. To run the demo on the cloud, follow these steps:

1. Download the GlassFish web scaling deployment file from here to your local machine.

2. Go to the mycloud page and get your free access code – this will give you access to the cloud.



3. Select the stock-demo-2.3.0-gf-sun.xml file, then hit the Deploy button (you first need to have saved the attached file on your local machine). The system will start provisioning the web application on the cloud, including a machine for managing the cloud, a machine for the load balancer, and machines for the web and data-grid containers. After approximately 3 minutes the application will be fully deployed. At this point you should see a “running” link on the load-balancer machine. Click this link to open your web-client application.


4. Test auto-scaling – Click the “running” link multiple times to open more clients. This increases the load (requests/sec) on the system. As soon as the request rate grows beyond a certain threshold, you’ll see new machines being provisioned. After approximately two minutes the new machine will be running and a new web container will be auto-deployed onto it. This new web container is linked automatically with the load balancer, which in turn spreads the load to the new machine as well, reducing the load on each of the servers.

5. Test self-healing – You can now kill one of the machines and watch how your web client behaves. You should see that even though the machine was killed, the client was hardly affected and the system recovered automatically.
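The auto-scaling behavior in step 4 boils down to a threshold policy: provision enough containers that each stays below its request-rate capacity. The sketch below illustrates that decision; the class name and the threshold values are invented for illustration and do not reflect the demo's actual SLA settings.

```java
// Illustrative threshold-based scaling policy: derive the desired number of
// web containers from the observed request rate.
public class AutoScalePolicy {
    private final double capacityPerContainer; // requests/sec one container can handle
    private final int minContainers;           // SLA minimum, never scale below this

    public AutoScalePolicy(double capacityPerContainer, int minContainers) {
        this.capacityPerContainer = capacityPerContainer;
        this.minContainers = minContainers;
    }

    // Enough containers that each stays below its capacity threshold.
    public int desiredContainers(double observedRequestsPerSec) {
        int needed = (int) Math.ceil(observedRequestsPerSec / capacityPerContainer);
        return Math.max(needed, minContainers);
    }

    public static void main(String[] args) {
        AutoScalePolicy policy = new AutoScalePolicy(100.0, 1);
        System.out.println(policy.desiredContainers(40));  // 1: under threshold, no scale-out
        System.out.println(policy.desiredContainers(250)); // 3: two extra containers provisioned
    }
}
```

In the demo, crossing the threshold triggers machine provisioning followed by container auto-deployment, which is why the new capacity appears a couple of minutes after the load spike rather than instantly.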

Seeing what’s going on behind the scenes:

All this may seem like magic to you. If you want to access the machines and watch the web containers, the data-grid instances, and the machines themselves, as well as real-time statistics, you can click the Manage button. This opens a management console that runs on the cloud and is accessed through the web. With this tool you can view all the components of the system. You can even query the data using the SQL browser and view the data as it enters the system. In addition, you can add more services or relocate services with a simple mouse click or a drag-and-drop action.


For more information regarding using GlassFish, see


Wednesday Dec 03, 2008

IGT2008 - The World Summit of Cloud Computing

Yesterday I went to the IGT 2008 event. In addition to exhibiting at the event, I delivered an opening demo presentation and a hands-on xVM Server workshop.




