
Recent Posts


How to Build Software Defined Networks Using Elastic Virtual Switches - Part 1

Oracle Solaris 11.2 enhances the existing, integrated software-defined networking (SDN) technologies provided by earlier releases of Oracle Solaris to deliver much greater application agility without the added overhead of expensive network hardware. It now enables application-driven, multitenant cloud virtual networking across a completely distributed set of systems; decoupling from the physical network infrastructure; and application-level network service-level agreements (SLAs), all built in as part of the platform. Enhancements and new features include the following:

• Network virtualization with virtual network interface cards (VNICs), elastic virtual switches, virtual local area networks (VLANs), and virtual extensible VLANs (VXLANs)
• Network resource management and integrated, application-level quality of service (QoS) to enforce bandwidth limits on VNICs and traffic flows
• Cloud readiness, a core feature of the OpenStack distribution included in Oracle Solaris 11
• Tight integration with Oracle Solaris Zones

About the Elastic Virtual Switch Feature of Oracle Solaris

The Elastic Virtual Switch (EVS) feature provides a built-in distributed virtual network infrastructure that can be used to deploy and manage virtual switches that are spread across several compute nodes. These compute nodes are the physical machines that host virtual machines (VMs). An elastic virtual switch is an entity that represents explicitly created virtual switches that belong to the same Layer 2 (L2) segment. An elastic virtual switch provides network connectivity between the VMs connected to it from anywhere in the network.

For more information about how to combine SDN and EVS technologies, with worked examples, see the following article: How to Build Software Defined Networks Using Elastic Virtual Switches - Part 1
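As a hedged sketch of what the EVS workflow looks like (the switch, IP network, and port names are hypothetical, and a configured EVS controller is assumed; see the article for the authoritative steps):

# evsadm create-evs tenant0-evs
# evsadm add-ipnet -p subnet=192.168.100.0/24 tenant0-evs/ipnet0
# evsadm add-vport tenant0-evs/vport0
# evsadm show-evs

A VNIC attached to tenant0-evs/vport0 can then be assigned to a zone or VM on any compute node, and EVS keeps the L2 segment consistent across all participating hosts.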


Impressions from Oracle Week 2014

Last week the 21st Oracle Week took place in Herzlia, Israel, with more than 1,800 participants. This is the largest paid IT event in Israel. The event included 100 technical seminars covering many Oracle technologies, such as Database & Big Data, Business Analytics, Development, and Infrastructure. The event was very impressive, with lots of good energy. It was a unique opportunity to meet with customers and colleagues, develop new business contacts, and eat delicious desserts :)

The Oracle Systems division presented two demos: the first was the ZFS Storage Appliance, and the second the Oracle Database Appliance (ODA). Hundreds of people came by to see and hear about Oracle technology and received cool headsets and a smiley on their badge :)

During the event I ran three seminars:

• Introduction to OpenStack, with a guest lecture by Mark Markman from ECI Telecom.
• Hadoop Hands-On Lab, which demonstrated how to set up a Hadoop cluster on Oracle Solaris 11 using Zones and ZFS.
• Big Data Infrastructure - Challenges, Technologies and Practices, with a guest lecture by Sam Babad from MidLink.

The total audience for these sessions was 40 people. A big thank-you to the participants for the great interaction, and to the organizers from John Bryce, especially Yael Dotan, Margarita Ripkin, Oshrat Shem-Tov, Issrae Sakhafie, and Oren Parnes, and to Dorit Almog from Oracle, for a great event! Looking forward to seeing you at Oracle Week 2015!


Oracle Software in Silicon Cloud

I'm happy to announce that the Oracle Software in Silicon Cloud is available today. Oracle's revolutionary Software in Silicon technology extends the design philosophy of engineered systems to the chip. Co-engineered by Oracle's software and microprocessor engineers, Software in Silicon implements accelerators directly in the processor to deliver a rich feature set that enables quick development of databases and applications that are more reliable and run faster.

Now, with the Oracle Software in Silicon Cloud, developers have a secure environment in which to test and improve their software and exploit the unique advantages of Oracle's Software in Silicon technology. Oracle Software in Silicon Cloud provides developers a ready-to-run virtual machine environment to install, test, and improve their code on a robust and secure cloud platform powered by the revolutionary Software in Silicon technology in Oracle's forthcoming SPARC M7 processor running Oracle Solaris.

This hardware-enabled functionality can be used to detect and prevent data corruption and security violations. Test workloads have demonstrated average results 40x faster than software-only tools, with some tests showing speedups of more than 80x. This performance advantage illustrates the capability to eventually be always-on in production, not limited to test environments. Oracle Software in Silicon Cloud users will have access to the latest Oracle Solaris Studio release, which includes tools that detect numerous types of memory corruption errors and provide detailed diagnostic information to help developers quickly improve code reliability. Code examples, demonstrations, and documentation will help users more quickly exploit the unique advantage of running applications with Software in Silicon technology.

Software in Silicon features implemented in Oracle's forthcoming SPARC M7 processor include:

• Application Data Integrity is the first-ever end-to-end implementation of memory-access validation in hardware. Designed to help prevent security bugs such as Heartbleed from putting systems at risk, it enables hardware monitoring of memory requests by software processes in real time, and it stops unauthorized access to memory, whether that access is due to a programming error or a malicious attempt to exploit buffer overruns. It also helps accelerate code development and helps ensure software quality, reliability, and security.

• Query Acceleration increases in-memory database query processing performance by operating on data streaming directly from memory via extremely high-bandwidth interfaces (with speeds up to 160 GB/s), resulting in tremendous performance gains. Query Acceleration is implemented in multiple engines in the SPARC M7 processor.

• Decompression units in the Software in Silicon acceleration engines significantly increase usable memory capacity. The units on a single processor run data decompression with performance equivalent to 16 decompression PCI cards or 60 CPU cores. This capability allows compressed databases to be stored in memory while being accessed and manipulated at full performance.

For more information about the chip, see this. Read the full news press here. For recent posts about the Oracle Software in Silicon Cloud, see "Securing a Cloud-Based Data Center" and "Building a Cloud-Based Data Center".


Hadoop Hands-on Lab at Oracle OpenWorld 2014

In a few days the largest IT event in the world will start: Oracle OpenWorld 2014. As in the past, and growing each year, we will host over 2,000 sessions and many hands-on labs. These labs are a unique opportunity to familiarize yourselves with the Oracle products that address the entire IT portfolio. Our specific interest is in Big Data, Solaris, and virtualization. This year Jeff Taylor and I will present the following lab: Set Up a Hadoop 2 Cluster with Oracle Solaris Zones, Oracle Solaris ZFS, and Unified Archive [HOL2086]. This hands-on lab addresses all the requirements and demonstrates how to set up an Apache Hadoop 2 (YARN) cluster using Oracle Solaris 11 technologies such as Oracle Solaris Zones, Oracle Solaris ZFS, and Unified Archive. Key topics include the Hadoop Distributed File System (HDFS) and the Hadoop MapReduce programming model. It also covers the Hadoop installation process and the cluster building blocks, namely the NameNode, Resource Manager, History Server, and DataNodes. In addition, you will learn how to combine Oracle Solaris 11 technologies for better scalability and data security, how to enable an HDFS high-availability cluster, and how to run a MapReduce job. Please register using the link below: Set Up a Hadoop 2 Cluster with Oracle Solaris Zones, Oracle Solaris ZFS, and Unified Archive [HOL2086]. See you at OpenWorld! Orgad


Securing a Cloud-Based Data Center

No doubt, with all the media reports about stolen databases and private information, a major concern when committing to a public or private cloud must be preventing unauthorized access to data and applications. In this article, we discuss the security features of Oracle Solaris 11 that provide a bullet-proof cloud environment. As an example, we show how the Oracle Solaris Remote Lab implementation utilizes these features to provide a high level of security for its users.

Note: This is the second article in a series on cloud building with Oracle Solaris 11. See Part 1 here.

When we build a cloud, the following aspects related to the security of the data and applications in the cloud become a concern:

• Sensitive data must be protected from unauthorized access while residing on storage devices, during transmission between servers and clients, and when it is used by applications.
• When a project is completed, all copies of sensitive data must be securely deleted and the original data must be kept permanently secure.
• Communications between users and the cloud must be protected to prevent exposure of sensitive information from man-in-the-middle attacks.
• Limiting the operating system's exposure protects against malicious attacks and penetration by unauthorized users or automated "bots" and "rootkits" designed to gain privileged access.
• Strong authentication and authorization procedures further protect the operating system from tampering.
• Denial-of-service attacks, whether they are started intentionally by hackers or accidentally by other cloud users, must be quickly detected and deflected, and the service must be restored.

In addition to the security features in the operating system, deep auditing provides a trail of actions that can identify violations, issues, and attempts to penetrate the security of the operating system. Combined, these threats and risks reinforce the need for enterprise-grade security solutions that are specifically designed to protect cloud environments. With Oracle Solaris 11, the security of any cloud is ensured. This article explains how.


Network Virtualization High Availability

This article describes how to add high availability to the network infrastructure of a multitenant cloud environment using the DLMP aggregation technology introduced in Oracle Solaris 11.1. It is Part 1 of a two-part series. In Part 1, we cover how to implement network HA using datalink multipathing (DLMP) aggregation. In Part 2 of this series, we will explore how to secure the network and perform typical network management operations for an environment that uses DLMP aggregations.

Once we virtualize a cloud's network infrastructure using Oracle Solaris 11 network virtualization technologies, such as virtual network interface cards (VNICs), virtual switches, load balancers, firewalls, and routers, the network itself becomes an increasingly critical component of the cloud infrastructure. In order to add resiliency to the network infrastructure layer, we need to implement an HA solution at this layer, just as we would for any other mission-critical component of the data center.

A DLMP aggregation allows us to deliver resiliency to the network infrastructure by providing transparent failover and increasing throughput. The objects that are involved in the process are VNICs, irrespective of whether they are configured inside Oracle Solaris Zones or in logical domains under Oracle VM Server for SPARC. Using this technology, you can add HA to your current network infrastructure without the cross-organizational complexity that is often associated with this kind of solution.

The benefits of this technology are clear, and they take into account the limitations of existing technologies:

• Since the IEEE 802.3ad trunking standard does not cover the case of building a trunk across multiple network switches, the network switch becomes a single point of failure (SPOF). Some vendors have added this capability to their products, but these implementations are vendor-specific and, therefore, prevent combining switches from multiple vendors when building a multi-switch trunk. Because Oracle Solaris provides the resilience, a DLMP aggregation can be implemented across two different network switches, thus eliminating the network switch as a SPOF. As an additional benefit, because the aggregation is implemented at the operating system layer, there is no need to set anything up on the switch.

• Building a network HA solution based on the previously available IP network multipathing (IPMP) can be a complex task. With IPMP, HA is implemented at Layer 3 (the IP layer), which needs to be configured in the global zone and within each zone, and requires multiple VNICs to be assigned to each zone or virtual machine (VM). This involves more configuration steps, requires spare IP addresses out of the address pool, and generally can be an error-prone process. In contrast, the DLMP aggregation setup is much simpler, since all the configuration takes place at Layer 2 in the global zone; therefore, every non-global zone can directly benefit from the underlying technology without additional configuration. In addition, every new Oracle Solaris Zone that is provisioned automatically benefits from this capability.

Moreover, we can create an aggregation over four 10 Gb/sec network interfaces; combining all the interfaces together, we can achieve up to 40 Gb/sec of network bandwidth. DLMP can provide additional benefits when employed together with other network virtualization technologies in the Oracle Solaris 11 operating system, such as link protection and the ability to configure a bandwidth limit on a VNIC or a traffic flow to meet service-level agreements (SLAs). Combining these technologies provides a uniquely compelling network solution in terms of HA, security, and performance in a cloud environment.
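A hedged sketch of the basic DLMP setup the article walks through (the datalink and aggregation names are hypothetical):

# dladm create-aggr -m dlmp -l net0 -l net1 aggr0
# dladm create-vnic -l aggr0 vnic0

If the NIC or switch port behind net0 fails, traffic fails over transparently to net1, and no configuration is required on the switches themselves.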


How to Set Up a Hadoop 2.2 Cluster From the Unified Archive

Tech Article: How to Set Up a Hadoop 2.2 Cluster From the Unified Archive. Learn how to build an Apache Hadoop 2.2 (YARN) cluster using Oracle Solaris Zones, the ZFS file system, and the new Unified Archive capabilities of Oracle Solaris 11.2 to set up a Hadoop cluster on a single system. Also see how to configure manual or automatic failover, and how to use the Unified Archive to create a "cloud in a box" and deploy a bare-metal system. The article starts with a brief overview of Hadoop and follows with an example of setting up a Hadoop cluster with two NameNodes, a Resource Manager, a History Server, and three DataNodes. As a prerequisite, you should have a basic understanding of Oracle Solaris Zones and network administration.

Table of Contents:

About Hadoop and Oracle Solaris Zones
Download and Install Hadoop
Configure the Network Time Protocol
Configure the Active NameNode
Set Up the Standby NameNode and the ResourceManager
Set Up the DataNode Zones
Format the Hadoop File System
Start the Hadoop Cluster
About Hadoop High Availability
Configure Manual Failover
About Apache ZooKeeper and Automatic Failover
Configure Automatic Failover
Create a "Cloud in a Box" Using Unified Archive
Deploy a Bare-Metal System from a Unified Archive
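For context, a hedged sketch of the Unified Archive tooling the article is built around (the archive paths are hypothetical): archiveadm creates either a clone archive of the running system, including its zones, or a full recovery archive suitable for bare-metal redeployment.

# archiveadm create /var/tmp/hadoop-cloud.uar
# archiveadm create -r /var/tmp/hadoop-recovery.uar

The first command captures a "cloud in a box" image of the configured system; the second (-r) creates a recovery archive that can be deployed to bare metal.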


Presentations from Oracle & ilOUG Solaris Forum meeting, Dec 18th, Tel-Aviv

Thank you for attending the Israel Oracle User Group (ilOUG) Solaris Forum meeting on Wednesday. I am posting here the presentations from the event. I am also pointing you to the dim_STAT tool, if you are looking for a powerful monitoring solution for Solaris systems. The event was reasonably well attended, with 20 people from various companies. The meeting was broken up into two sections: presentations about the Solaris ecosystem and Solaris as a big data platform with Hadoop as a use case; and a comparison between Oracle Solaris and Linux plus a customer use case of peta-scale data migration using Solaris ZFS.

I am posting here responses and links for the topics that came up during the evening.

During the meeting the participants asked us, "How can we verify that an application which is running on Solaris 10 will run smoothly on Solaris 11, and how can we build an application that is able to leverage the new SPARC CPUs?" For this task you can use the Oracle Solaris Preflight Applications Checker, which enables you to determine the Oracle Solaris 11 "readiness" of an application by analyzing a working application on Oracle Solaris 10. A successful check with this tool is a strong indicator that an ISV may run a given application without modifications on Oracle Solaris 11. In addition, you can analyze and improve application performance during the development phase using Oracle Solaris Studio 12.3. Another useful tool for performance analysis is Solaris DTrace. You can leverage the DTrace Toolkit, a collection of ~200 scripts that help troubleshoot problems on a system. You can find the latest version of these scripts under /usr/demo/dtrace in Oracle Solaris 11.

Another topic that came up was, "How can we accelerate application deployment time by using pre-built Solaris images?" For this task, you can use Oracle VM Templates, which provide an innovative approach to deploying a fully configured software stack by offering pre-installed and pre-configured software images. Use of Oracle VM Templates eliminates the installation and configuration costs and reduces the ongoing maintenance costs, thus helping organizations achieve faster time to market and lower cost of operations.

During the Solaris vs. Linux comparison presentation, the topic that received the most attention from the audience was, "What are the benefits of Oracle Solaris ZFS versus Linux btrfs?" In addition to the link above, here is a link to COMSTAR, a Solaris storage virtualization technology. Common Multiprotocol SCSI Target, or COMSTAR, is a software framework that enables you to convert any Oracle Solaris 11 host into a SCSI target device that can be accessed over a storage network by initiator hosts.

In the final section of the meeting, Haim Tzadok from Oracle partner Grigale and Avi Avraham from Walla presented together the customer use case of peta-scale data migration using Oracle Solaris ZFS. Walla is one of the most popular web portals in Israel, with unlimited storage for email accounts. Walla uses Solaris ZFS for its mail server storage. The ZFS feature that allows Walla to reduce storage cost and improve disk I/O performance is the ability to set the file system block size (from 4 KB up to 1 MB) when creating the file system. Like many mail systems, Walla's stores many small files; by tuning the ZFS block size away from the default, you can optimize your storage subsystem and align it to the application (the mail server). The final result is a disk I/O performance improvement without the need to invest extra budget in superfluous storage hardware. For more information, see the ZFS block size documentation.

A big thank-you to the participants for the engaged discussions and to ilOUG for a great event! Register with ilOUG to get the latest updates.

Presentations: Solaris vs Linux; Oracle Solaris 11 as a Big Data Platform - Apache Hadoop Use Case; Walla Migration
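As a hedged illustration of the block-size tuning described above (the pool and dataset names are hypothetical; suitable values depend on the application's I/O size):

# zfs create -o recordsize=8k rpool/mailstore
# zfs get recordsize rpool/mailstore

Matching the ZFS record size to the mail server's typical message size reduces read-modify-write overhead for small files.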


Performance Analysis in a Multitenant Cloud Environment Using Hadoop Cluster and Oracle Solaris 11

Oracle Solaris 11 comes with a new set of commands that provide the ability to conduct performance analysis in a virtualized multitenant cloud environment. Performance analysis in a virtualized multitenant cloud environment, with different users running various workloads, can be a challenging task for the following reasons:

• Each virtualization layer adds an abstraction level to enable better manageability. Although this makes it much simpler to manage the virtualized resources, it is very difficult to find the physical system resources that are overloaded.
• Each Oracle Solaris Zone can have a different workload; it can be disk I/O, network I/O, CPU, memory, or a combination of these. In addition, a single Oracle Solaris Zone can overload the entire system's resources.
• It is very difficult to observe the environment; you need to be able to monitor the environment from the top level to see all the virtual instances (non-global zones) in real time, with the ability to drill down to specific resources.

The benefits of using Oracle Solaris 11 for virtualized performance analysis are:

• Observability. The Oracle Solaris global zone is a fully functioning operating system, not a proprietary hypervisor or a minimized operating system that lacks the ability to observe the entire environment, including the host and the VMs, at the same time. The global zone can see all the non-global zones' performance metrics.
• Integration. All the subsystems are built into the same operating system. For example, the ZFS file system and the Oracle Solaris Zones virtualization technology are integrated together. This is preferable to mixing many vendors' technologies, which causes a lack of integration between the different operating system (OS) subsystems and makes it very difficult to analyze all of them at the same time.
• Virtualization awareness. The built-in Oracle Solaris commands are virtualization-aware, and they can provide performance statistics for the entire system (the Oracle Solaris global zone). In addition to providing the ability to drill down into every resource (Oracle Solaris non-global zones), these commands provide accurate results during the performance analysis process.

In this article, we explore four examples that show how we can monitor a virtualized environment with Oracle Solaris Zones using the built-in Oracle Solaris 11 tools. These tools provide the ability to drill down to specific resources, for example, CPU, memory, disk, and network. In addition, they can print statistics per Oracle Solaris Zone and provide information on the running applications.

Read the article: Performance Analysis in a Multitenant Cloud Environment
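As a small taste of these built-in tools, a hedged sketch using the zonestat(1) utility (the interval is arbitrary):

# zonestat 5
# zonestat -r physical-memory 5

The first command summarizes CPU, memory, and network utilization per zone every 5 seconds; the second drills down into physical memory usage per zone. Because the global zone observes all non-global zones, a single command gives a whole-system view with tenant-level drill-down.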


How to Set Up a Hadoop Cluster Using Oracle Solaris (Hands-On Lab)

Oracle Technology Network (OTN) published the "How to Set Up a Hadoop Cluster Using Oracle Solaris" OOW 2013 hands-on lab. This hands-on lab presents exercises that demonstrate how to set up an Apache Hadoop cluster using Oracle Solaris 11 technologies such as Oracle Solaris Zones, ZFS, and network virtualization. Key topics include the Hadoop Distributed File System (HDFS) and the Hadoop MapReduce programming model. We also cover the Hadoop installation process and the cluster building blocks: the NameNode, a secondary NameNode, and DataNodes. In addition, you will see how you can combine the Oracle Solaris 11 technologies for better scalability and data security, and you will learn how to load data into the Hadoop cluster and run a MapReduce job.

Summary of Lab Exercises

This hands-on lab consists of 13 exercises covering various Oracle Solaris and Apache Hadoop technologies:

1. Install Hadoop.
2. Edit the Hadoop configuration files.
3. Configure the Network Time Protocol.
4. Create the virtual network interfaces (VNICs).
5. Create the NameNode and the secondary NameNode zones.
6. Set up the DataNode zones.
7. Configure the NameNode.
8. Set up SSH.
9. Format HDFS from the NameNode.
10. Start the Hadoop cluster.
11. Run a MapReduce job.
12. Secure data at rest using ZFS encryption.
13. Use Oracle Solaris DTrace for performance monitoring.

Read it now
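As a taste of the DTrace exercise, a hedged one-liner that counts system calls per process, a common first step when profiling Hadoop daemons (run as root in the global zone):

# dtrace -n 'syscall:::entry { @[execname] = count(); }'

Press Ctrl-C to stop sampling and print the aggregated counts.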


The Network is the Computer - Still

In 1984, John Gage predicted the future of computing with the phrase "the network is the computer". In his visionary mind, he saw how computer networks would accelerate the capability of every device connected to the network. Can you imagine the iPhone without the App Store, or web search without Google? It goes on and on. Another aspect of this prediction is the ability of the network infrastructure to connect many systems together so that they become one huge computing environment, as we are seeing with the cloud computing model.

As system hardware becomes more powerful in terms of CPU, in-memory capability, and advanced operating system virtualization, we see that most of the physical network infrastructure can be virtualized in software with negligible performance degradation. An excellent example of this capability is Oracle Solaris 11 Network Virtualization, which allows us to build any physical network topology inside the Oracle Solaris operating system, including virtual network interface cards (VNICs), virtual switches (vSwitches), and more sophisticated network components (for example, load balancers, routers, and firewalls). The benefits of using this networking technology include reduced infrastructure cost, since there is no need to invest in additional network equipment, and much faster infrastructure deployment, since all the network building blocks are implemented in software rather than hardware.

Although we have the flexibility to build network infrastructure rapidly, we also need to be able to monitor it, benchmark it, and predict future growth. So how can we do this? The upcoming white paper demonstrates this using three main use cases: how to analyze, benchmark, and monitor a physical and virtualized network environment, along with the benefits of using the built-in Solaris 11 networking tools. For further reading, see "Advanced Network Monitoring Using Oracle Solaris 11 Tools".
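As a hedged sketch of this network-in-software idea (the link names are hypothetical): an etherstub acts as a virtual switch, and VNICs attached to it behave like physical NICs plugged into that switch.

# dladm create-etherstub vswitch0
# dladm create-vnic -l vswitch0 vnic1
# dladm create-vnic -l vswitch0 vnic2
# dladm show-link

Each VNIC can then be assigned to a zone or VM, giving them an isolated Layer 2 segment with no physical switch involved.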


Hadoop Cluster with Oracle Solaris Hands-on Lab at Oracle OpenWorld 2013

If you want to learn how to build a Hadoop cluster using Solaris 11 technologies, please join me at the following Oracle OpenWorld 2013 lab: How to Set Up a Hadoop Cluster with Oracle Solaris [HOL10182]. In this hands-on lab we will present and demonstrate, using exercises, how to set up a Hadoop cluster using Oracle Solaris 11 technologies such as Zones, ZFS, DTrace, and network virtualization. Key topics include the Hadoop Distributed File System and MapReduce. We will also cover the Hadoop installation process and the cluster building blocks (the NameNode, a secondary NameNode, and DataNodes), and in addition how we can combine the Oracle Solaris 11 technologies for better scalability and data security. During the lab, users will learn how to load data into the Hadoop cluster and run a MapReduce job. This hands-on training lab is for system administrators and others responsible for managing Apache Hadoop clusters in production or development environments.

This lab will cover the following topics:

1. How to install Hadoop
2. Edit the Hadoop configuration files
3. Configure the Network Time Protocol
4. Create the virtual network interfaces
5. Create the NameNode and the secondary NameNode zones
6. Configure the NameNode
7. Set up SSH between the Hadoop cluster members
8. Format the HDFS file system
9. Start the Hadoop cluster
10. Run a MapReduce job
11. How to secure data at rest using ZFS encryption
12. Performance monitoring using Solaris DTrace

Register Now


Oracle SPARC Software on Silicon

This past week (1-July-2013) Oracle ISV Engineering participated in Oracle Technology Day, one of the largest IT events in Israel, with over 1,000 participants. During the event Oracle showed its latest technology, including Oracle Database 12c and the new SPARC T5 CPU. Angelo Rajadurai presented the new SPARC T5 CPU and covered the latest features of this technology:

• The SPARC T5 CPU architecture is unique in how it handles multi-threaded workloads in addition to delivering very good single-thread performance.
• When Oracle switched from 40-nanometer to 28-nanometer process technology, T5 performance doubled; for example, 16 cores versus 8 cores on the T4 doubled the throughput of this CPU.
• The number of memory links doubled, from 2 on the T4 to 4 on the T5. Each memory link is 12 lanes southbound and 12 lanes northbound and operates at 12.8 Gb/sec.
• The I/O subsystem uses PCI Express Rev 3 versus Rev 2 on the previous model, which means the I/O bandwidth also doubled.
• A new coherency protocol (directory-based) allows near-linear scaling from one to eight sockets; there are seven coherence links, each with 12 lanes in each direction running at 153.6 Gb/sec.
• There are sixteen cryptography units per SPARC T5 processor.
• In addition, the CPU clock increased from 3 GHz to 3.6 GHz, which improves single-thread performance by ~20% without any code modification.

Another capability of the SPARC T5 CPU is the ability to change core behavior based on the workload. For example, if the Solaris operating system recognizes that the workload is single-threaded, it can automatically change the core characteristics for single-thread performance. If needed, the user can change the core behavior manually. SPARC T5 brings all of these great features with the same pricing model, so the new SPARC T5 servers double your price/performance metric. No wonder this server won so many world records! Oracle is the only vendor that has published a public CPU roadmap for its future CPUs, and it delivered ahead of schedule!

Oracle used SPARC T5 servers as the building block for the Oracle SuperCluster T5-8, Oracle's fastest engineered system, combining powerful virtualization with unique Exadata and Exalogic optimizations. You can use the Oracle SuperCluster T5-8 to run the most demanding enterprise applications.

Although the SPARC T5 doubled performance, we are approaching the limits of physics, and we need to think about new approaches to CPU performance acceleration. The technology that will allow us to keep doubling performance every two years is the "Software on Silicon" CPU technology. This technology runs CPU-intensive operations inside the CPU rather than in software, so it can accelerate workload performance by an order of magnitude. The first implementation of Software on Silicon is the Encryption Accelerator. This intrinsic CPU capability allows us to accelerate the most common bulk encryption ciphers, like AES and DES. SPARC T5 also supports asymmetric key exchange with RSA and ECC, and authentication or hash functions like SHA and MD5. This built-in encryption capability provides end-to-end data center encryption without the performance penalty usually associated with multi-layer data protection. During our encryption performance benchmarks, we saw negligible performance overhead (<5%) when running the same workload with the CPU encryption accelerator.

Potentially, we can take the Software on Silicon concept and implement it for other CPU-intensive tasks, such as:

• Java acceleration
• Database query
• Compression
• Cluster interconnect
• Application data protection

See Angelo's presentation here.

Conclusion - In this post, we described how Oracle improved the SPARC T5's CPU subsystem, I/O, and coherency capabilities. In addition, we took a look at possible future plans for the Software on Silicon CPU technology.
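A hedged way to see the encryption capability mentioned above from Oracle Solaris (output varies by machine):

# isainfo -v

On SPARC T4/T5 systems the output lists instruction set extensions such as aes, des, sha256, and sha512, which the Solaris cryptographic framework uses transparently, so applications get hardware-speed encryption without code changes.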


How to Set Up a MongoDB NoSQL Cluster Using Oracle Solaris Zones

This article starts with a brief overview of MongoDB and follows with an example of setting up a three-node MongoDB cluster using Oracle Solaris Zones. The following are benefits of using Oracle Solaris for a MongoDB cluster:

• You can add new MongoDB hosts to the cluster in minutes instead of hours using the zone cloning feature. Using Oracle Solaris Zones, you can easily scale out your MongoDB cluster.
• In case of a user error or software error, the Service Management Facility ensures the high availability of each cluster member and ensures that MongoDB replication failover will occur only as a last resort.
• You can discover performance issues in minutes rather than days by using DTrace, which provides increased operating system observability. DTrace provides a holistic performance overview of the operating system and allows deep performance analysis through cooperation with the built-in MongoDB tools.
• ZFS built-in compression provides optimized disk I/O utilization for better I/O performance.

In the example presented in this article, all the MongoDB cluster building blocks are installed using the Oracle Solaris Zones, Service Management Facility, ZFS, and network virtualization technologies. Figure 1 shows the architecture.
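As a hedged sketch of the zone cloning workflow referred to above (the zone names and path are hypothetical; the source zone must be halted before cloning):

# zoneadm -z mongo-node1 halt
# zonecfg -z mongo-node2 "create -t mongo-node1"
# zonecfg -z mongo-node2 "set zonepath=/zones/mongo-node2"
# zoneadm -z mongo-node2 clone mongo-node1
# zoneadm -z mongo-node2 boot

Because the clone is a ZFS snapshot-based copy, the new cluster member is ready in minutes.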


Public Cloud Security: Anti-Spoofing Protection

This past week (9-Jun-2013) Oracle ISV Engineering participated in the IGT cloud meetup, the largest cloud community in Israel with 4,000 registered members. During the meetup, ISV Engineering gave two presentations:

• Introduction to Oracle Cloud Infrastructure, presented by Frederic Pariente
• Use case: Cloud Security Design and Implementation, presented by me

In addition, there was a partner presentation from ECI Telecom:

• Using Oracle Solaris 11 Technologies for Building ECI R&D and Product Private Clouds, presented by Mark Markman from ECI Telecom

The Solaris 11 feature that received the most attention from the audience was the new Solaris 11 network virtualization technology. Solaris 11 network virtualization allows us to build any physical network topology inside the Solaris operating system, including virtual network cards (VNICs), virtual switches (vSwitches), and more sophisticated network components (e.g., load balancers, routers, and firewalls). The benefits of using this technology are reduced infrastructure cost, since there is no need to invest in superfluous network equipment, and much faster infrastructure deployment, since all the network building blocks are implemented in software rather than hardware.

One of the key features of this network virtualization technology is Data Link Protection. With this capability we can provide the flexibility that our partners need in a cloud environment and allow them root account access from inside the Solaris zone, while disabling their ability to mount spoofing attacks by sending outgoing packets with a different source IP or MAC address, or packets that are not IPv4, IPv6, or ARP.

The following example demonstrates how to enable this feature.

Create the virtual VNIC (in a later step, we will associate this VNIC with the Solaris zone):

# dladm create-vnic -l net0 vnic0

Set up the Solaris zone:

# zonecfg -z secure-zone
Use 'create' to begin configuring a new zone:
zonecfg:secure-zone> create
create: Using system default template 'SYSdefault'
zonecfg:secure-zone> set zonepath=/zones/secure-zone
zonecfg:secure-zone> add net
zonecfg:secure-zone:net> set physical=vnic0
zonecfg:secure-zone:net> end
zonecfg:secure-zone> verify
zonecfg:secure-zone> commit
zonecfg:secure-zone> exit

Install the zone:

# zoneadm -z secure-zone install

Boot the zone:

# zoneadm -z secure-zone boot

Log in to the zone:

# zlogin -C secure-zone

NOTE - During the zone setup, select the vnic0 network interface and assign the 10.0.0.1 IP address.

From the global zone, enable link protection on vnic0. We can set different modes: ip-nospoof, dhcp-nospoof, mac-nospoof, and restricted.

ip-nospoof: Any outgoing IP, ARP, or NDP packet must have an address field that matches either a DHCP-configured IP address or one of the addresses listed in the allowed-ips link property.
mac-nospoof: Prevents the root user from changing the zone's MAC address. An outbound packet's source MAC address must match the datalink's configured MAC address.
dhcp-nospoof: Prevents client ID/DUID spoofing for DHCP.
restricted: Only allows IPv4, IPv6, and ARP protocols. Using this protection type prevents the link from generating potentially harmful L2 control frames.

# dladm set-linkprop -p protection=mac-nospoof,restricted,ip-nospoof vnic0

Specify the 10.0.0.1 IP address as the value of the allowed-ips property for the vnic0 link:

# dladm set-linkprop -p allowed-ips=10.0.0.1 vnic0

Verify the link protection property values:

# dladm show-linkprop -p protection,allowed-ips vnic0
LINK   PROPERTY     PERM  VALUE         DEFAULT  POSSIBLE
vnic0  protection   rw    mac-nospoof,  --       mac-nospoof,
                          restricted,            restricted,
                          ip-nospoof             ip-nospoof,
                                                 dhcp-nospoof
vnic0  allowed-ips  rw    10.0.0.1      --       --

We can see that 10.0.0.1 is set as the allowed IP address.

Log in to the zone:

# zlogin secure-zone

After we log in to the zone, let's try to change the zone's IP address:

root@secure-zone:~# ifconfig vnic0 10.0.0.2
ifconfig: could not create address: Permission denied

As we can see, we can't change the zone's IP address!

Optional - disable the link protection from the global zone:

# dladm reset-linkprop -p protection,allowed-ips vnic0

NOTE - We don't need to reboot the zone in order to disable this property.

Verify the change:

# dladm show-linkprop -p protection,allowed-ips vnic0
LINK   PROPERTY     PERM  VALUE  DEFAULT  POSSIBLE
vnic0  protection   rw    --     --       mac-nospoof,
                                          restricted,
                                          ip-nospoof,
                                          dhcp-nospoof
vnic0  allowed-ips  rw    --     --       --

As we can see, there is no longer a restriction on the allowed-ips property.

Conclusion

In this blog I demonstrated how we can leverage Solaris 11 Data Link Protection in order to prevent spoofing attacks.


How to Protect a Public Cloud Using Solaris 11 Technologies

When we meet with our partners, we often ask them, "What are your main security challenges for public cloud infrastructure? What worries you in this regard?" This is what we've gathered from our partners regarding the security challenges:

1. Protect data at rest, in transit, and in use, using encryption
2. Prevent denial-of-service attacks against their infrastructure
3. Segregate network traffic between different cloud users
4. Disable hostile code (e.g., 'rootkit' attacks)
5. Minimize the operating system attack surface
6. Securely delete data once a project is done
7. Enable strong authorization and authentication for non-secure protocols

Based on these guidelines, we began to design our Oracle Developer Cloud. Our vision was to leverage Solaris 11 technologies in order to meet those security requirements.

First - Our partners would like to encrypt everything, from the disk up through the layers to the application, without the performance overhead usually associated with this type of technology. The SPARC T4 (and lately the SPARC T5) integrated cryptographic accelerator allows us to encrypt data using the ZFS encryption capability. We can encrypt all the network traffic using SSL, from the client connection to the cloud main portal using the Secure Global Desktop (SGD) technology, and also encrypt the network traffic between the application tier and the database tier. In addition, we can protect our database tables using Oracle Transparent Data Encryption (TDE). During our performance tests we saw that the performance impact was very low (less than 5%) when we enabled those encryption technologies. The following example shows how we created an encrypted file system:

# zfs create -o encryption=on rpool/zfs_file_system
Enter passphrase for 'rpool/zfs_file_system':
Enter again:

NOTE - In the above example, we used a passphrase that is interactively requested, but we can use SSL or a key repository.

Second - How can we mitigate denial-of-service attacks? The new Solaris 11 network virtualization technology allows us to virtualize our network by splitting the physical network card into multiple virtual network 'cards'. In addition, it provides the capability to set up flows, a sophisticated quality-of-service mechanism. Flows allow us to limit the network bandwidth for a specific network port on a specific network interface. In the following example we limit the SSL traffic to 100 Mb on the vnic0 network interface:

# dladm create-vnic -l net0 vnic0
# flowadm add-flow -l vnic0 -a transport=TCP,local_port=443 https-flow
# flowadm set-flowprop -p maxbw=100M https-flow

During any denial-of-service (DoS) attack against this web server, we can minimize the impact on the rest of the infrastructure.

Third - How can we isolate network traffic between different tenants of the public cloud? The new Solaris 11 network technology allows us to segregate the network traffic at multiple layers. For example, we can limit the network traffic at Layer 2 using VLANs:

# dladm create-vnic -l net0 -v 2 vnic1

We can also implement firewall rules for Layer 3 separation using the Solaris 11 built-in firewall software (see the linked example). In addition to the firewall software, Solaris 11 has built-in load balancer and routing software. In a cloud-based environment this means that new functionality can be added promptly, since we don't need extra hardware in order to implement those extra functions.

Fourth - Rootkits have become a serious threat, allowing the insertion of hostile code using custom kernel modules. The Solaris Zones technology prevents loading or unloading kernel modules (since local zones lack the sys_config privilege). This way we can limit the attack surface and prevent this type of attack. In the following example we can see that even the root user is unable to load a custom kernel module inside a Solaris zone:

# ppriv -De modload -p /tmp/systrace
modload[21174]: missing privilege "ALL" (euid = 0, syscall = 152) needed at modctl+0x52
Insufficient privileges to load a module

Fifth - The Solaris immutable zones technology allows us to minimize the operating system attack surface, for example by disabling the ability to install new IPS packages and to modify file systems like /etc. We can set up a Solaris immutable zone using the zonecfg command:

# zonecfg -z secure-zone
Use 'create' to begin configuring a new zone.
zonecfg:secure-zone> create
create: Using system default template 'SYSdefault'
zonecfg:secure-zone> set zonepath=/zones/secure-zone
zonecfg:secure-zone> set file-mac-profile=fixed-configuration
zonecfg:secure-zone> commit
zonecfg:secure-zone> exit
# zoneadm -z secure-zone install

We can also combine ZFS encryption and immutable zones; for more examples, see the linked posts.

Sixth - The main challenge of building a secure Big Data solution is the lack of built-in security mechanisms for authorization and authentication. The integrated Solaris Kerberos allows us to enable strong authorization and authentication for distributed systems that are not secure by default, like Apache Hadoop. The following example demonstrates how easy it is to install and set up a Kerberos infrastructure on Solaris 11:

# pkg install pkg://solaris/system/security/kerberos-5
# kdcmgr -a kws/admin -r EXAMPLE.COM create master

Finally - Our partners want assurance that when a project is finished and complete, all its data is erased without any possibility of recovering that data by reading the disk blocks directly, bypassing the file system layer. The ZFS assured delete feature allows us to implement this kind of secure deletion. The following example shows how we change the ZFS wrapping key to random data (the output of /dev/random), then unmount the file system, and finally destroy it:

# zfs key -c -o keysource=raw,file:///dev/random rpool/zfs_file_system
# zfs key -u rpool/zfs_file_system
# zfs destroy rpool/zfs_file_system

Conclusion

In this blog entry, I covered how we can leverage the SPARC T4/T5 and Solaris 11 features in order to build a secure cloud infrastructure. These technologies allow us to build highly protected environments without the need to invest extra budget in special hardware. They also allow us to protect our data and network traffic from various threats. If you would like to hear more about these technologies, please join us at the next IGT cloud meetup.


How to Set Up a Hadoop Cluster Using Oracle Solaris Zones

This article starts with a brief overview of Hadoop and follows with an example of setting up a Hadoop cluster with a NameNode, a secondary NameNode, and three DataNodes using Oracle Solaris Zones. The following are benefits of using Oracle Solaris Zones for a Hadoop cluster:

• Fast provisioning of new cluster members using the zone cloning feature
• Very high network throughput between the zones for DataNode replication
• Optimized disk I/O utilization for better I/O performance with ZFS built-in compression
• Secure data at rest using ZFS encryption

Hadoop uses the Hadoop Distributed File System (HDFS) in order to store data. HDFS provides high-throughput access to application data and is suitable for applications that have large data sets. The Hadoop cluster building blocks are as follows:

• NameNode: The centerpiece of HDFS, which stores file system metadata, directs the slave DataNode daemons to perform the low-level I/O tasks, and also runs the JobTracker process.
• Secondary NameNode: Performs internal checks of the NameNode transaction log.
• DataNodes: Nodes that store the data in the HDFS file system; they are also known as slaves and run the TaskTracker process.

In the example presented in this article, all the Hadoop cluster building blocks are installed using the Oracle Solaris Zones, ZFS, and network virtualization technologies.
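As a hedged sketch of the ZFS compression benefit listed above (the dataset name is hypothetical): enabling compression on the dataset that backs HDFS reduces disk I/O, because fewer blocks need to be read and written.

# zfs create -o compression=on rpool/hdfs_data
# zfs get compressratio rpool/hdfs_data

The compressratio property reports the space saving actually achieved for the stored data.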


Accelerate your Oracle DB startup and shutdown using Solaris 11 vmtasks

Last week I co-presented the Solaris 11.1 new features at the Solaris user group. I would like to thank the organizers and the audience. You can find the slide deck here. I presented the following Solaris 11.1 new features:

1. Installation enhancements: adding iSCSI disks as a boot device, and Oracle Configuration Manager (OCM) and Auto Service Request (ASR) registration during operating system installation
2. The new SMF tool svcbundle for creating manifests, and the brand-new svccfg subcommand delcust (see the sketch after this list)
3. The pfedit command for secure system configuration editing
4. The new logging daemon rsyslog for better scalability and reliable system log transport
5. A new aggregation option for the mpstat, cpustat, and trapstat commands, to display the data in a more condensed format
6. Edge Virtual Bridging (EVB), which extends network virtualization features into the physical network infrastructure
7. Data Center Bridging (DCB), which provides guaranteed bandwidth and lossless Ethernet transport for converged network environments where storage protocols share the same fabric as regular network traffic
8. VNIC migration for better flexibility in network resource allocation
9. Fast zone updates for improved system uptime and short system upgrades
10. Zones on shared storage for faster Solaris Zones mobility
11. File system statistics for Oracle Solaris Zones for better I/O performance observability
12. Oracle Optimized Shared Memory

The Solaris 11.1 feature that got the most attention was the new kernel process called vmtasks. This process accelerates shared memory operations such as creation, locking, and destruction. Because your Oracle DB startup and shutdown become much faster, you can improve your system uptime. Any application that needs fast access to shared memory can benefit from this process. For more information about vmtasks and the Solaris 11.1 new features, see the slide deck.
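As a hedged sketch of the svcbundle workflow from item 2 (the service name and start command are hypothetical):

# svcbundle -o /var/tmp/myapp.xml -s service-name=site/myapp -s start-method="/opt/myapp/bin/start"
# svccfg validate /var/tmp/myapp.xml

The generated manifest can then be imported with svccfg import, saving the hand-editing that SMF manifests used to require.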


Automating Solaris 11 Zones Installation Using The Automated Install Server

IntroductionHow to use the Oracle Solaris 11 Automated install server in order to automate the Solaris 11 Zones installation. In this document I will demonstrate how to setup the Automated Install server in order to provide hands off installation process for the Global Zone and two Non Global Zones located on the same system. Architecture layout: Figure 1. Architecture layout Prerequisite Setup the Automated install server (AI) using the following instructions “How to Set Up Automated Installation Services for Oracle Solaris 11” The first step in this setup will be creating two Solaris 11 Zones configuration files. Step 1: Create the Solaris 11 Zones configuration files The Solaris Zones configuration files should be in the format of the zonecfg export command. # zonecfg -z zone1 export > /var/tmp/zone1# cat /var/tmp/zone1 create -b set brand=solaris set zonepath=/rpool/zones/zone1 set autoboot=true set ip-type=exclusive add anet set linkname=net0 set lower-link=auto set configure-allowed-address=true set link-protection=mac-nospoof set mac-address=random end  Create a backup copy of this file under a different name, for example, zone2. # cp /var/tmp/zone1 /var/tmp/zone2 Modify the second configuration file with the zone2 configuration information You should change the zonepath for example: set zonepath=/rpool/zones/zone2 Step2: Copy and share the Zones configuration files  Create the NFS directory for the Zones configuration files # mkdir /export/zone_config Share the directory for the Zones configuration file # share –o ro /export/zone_config Copy the Zones configuration files into the NFS shared directory # cp /var/tmp/zone1 /var/tmp/zone2  /export/zone_config Verify that the NFS share has been created using the following command # shareexport_zone_config      /export/zone_config     nfs     sec=sys,ro Step 3: Add the Global Zone as client to the Install ServiceUse the installadm create-client command to associate client (Global Zone) with the install service To find the MAC address of a system, use the dladm command as described in the dladm(1M) man page. The following command adds the client (Global Zone) with MAC address 0:14:4f:2:a:19 to the s11x86service install service. # installadm create-client -e “0:14:4f:2:a:19" -n s11x86service You can verify the client creation using the following command # installadm list –c Service Name  Client Address     Arch   Image Path ------------  --------------     ----   ---------- s11x86service 00:14:4F:02:0A:19  i386   /export/auto_install/s11x86service We can see the client install service name (s11x86service), MAC address (00:14:4F:02:0A:19 and Architecture (i386). Step 4: Global Zone manifest setup  First, get a list of the installation services and the manifests associated with them: # installadm list -m Service Name   Manifest        Status ------------   --------        ------ default-i386   orig_default   Defaults11x86service  orig_default   Default Then probe the s11x86service and the default manifest associated with it.The -m switch reflects the name of the manifest associated with a service.Since we want to capture that output into a file, we redirect the output of the command as follows: # installadm export -n s11x86service -m orig_default >  /var/tmp/orig_default.xml Create a backup copy of this file under a different name, for example, orig-default2.xml, and edit the copy. # cp /var/tmp/orig_default.xml /var/tmp/orig_default2.xml Use the configuration element in the AI manifest for the client system to specify non-global zones. 
Use the name attribute of the configuration element to specify the name of the zone. Use the source attribute to specify the location of the config file for the zone.The source location can be any http:// or file:// location that the client can access during installation. The following sample AI manifest specifies two Non-Global Zones: zone1 and zone2You should replace the server_ip with the ip address of the NFS server. <!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1"> <auto_install>   <ai_instance>     <target>       <logical>         <zpool name="rpool" is_root="true">           <filesystem name="export" mountpoint="/export"/>           <filesystem name="export/home"/>           <be name="solaris"/>         </zpool>       </logical>     </target>     <software type="IPS">       <source>         <publisher name="solaris">           <origin name="http://pkg.oracle.com/solaris/release"/>         </publisher>       </source>       <software_data action="install">         <name>pkg:/entire@latest</name>         <name>pkg:/group/system/solaris-large-server</name>       </software_data>     </software>     <configuration type="zone" name="zone1" source="file:///net/server_ip/export/zone_config/zone1"/>     <configuration type="zone" name="zone2" source="file:///net/server_ip/export/zone_config/zone2"/>   </ai_instance> </auto_install> The following example adds the /var/tmp/orig_default2.xml AI manifest to the s11x86service install service # installadm create-manifest -n s11x86service -f /var/tmp/orig_default2.xml -m gzmanifestYou can verify the manifest creation using the following command # installadm list -n s11x86service  -mService/Manifest Name  Status   Criteria ---------------------  ------   -------- s11x86service   orig_default        Default  None   gzmanifest          Inactive None We can see from the command output that the new manifest named gzmanifest has been created and associated with the s11x86service install service. Step 5: Non Global Zone manifest setup The AI manifest for non-global zone installation is similar to the AI manifest for installing the global zone.If you do not provide a custom AI manifest for a non-global zone, the default AI manifest for Zones is usedThe default AI manifest for Zones is available at /usr/share/auto_install/manifest/zone_default.xml. In this example we should use the default AI manifest for zones The following sample default AI manifest for zones # cat /usr/share/auto_install/manifest/zone_default.xml <?xml version="1.0" encoding="UTF-8"?> <!--  Copyright (c) 2011, 2012, Oracle and/or its affiliates. All rights reserved. --> <!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1"> <auto_install>     <ai_instance name="zone_default">         <target>             <logical>                 <zpool name="rpool">                     <!--                       Subsequent <filesystem> entries instruct an installer                       to create following ZFS datasets:                           <root_pool>/export         (mounted on /export)                           <root_pool>/export/home    (mounted on /export/home)                       Those datasets are part of standard environment                       and should be always created.                       In rare cases, if there is a need to deploy a zone                       without these datasets, either comment out or remove                       <filesystem> entries. 
Step 5: Non-global zone manifest setup

The AI manifest for non-global zone installation is similar to the AI manifest for installing the global zone. If you do not provide a custom AI manifest for a non-global zone, the default AI manifest for zones is used. The default AI manifest for zones is available at /usr/share/auto_install/manifest/zone_default.xml. In this example we use the default AI manifest for zones, shown below:

# cat /usr/share/auto_install/manifest/zone_default.xml
<?xml version="1.0" encoding="UTF-8"?>
<!--
 Copyright (c) 2011, 2012, Oracle and/or its affiliates. All rights reserved.
-->
<!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1">
<auto_install>
    <ai_instance name="zone_default">
        <target>
            <logical>
                <zpool name="rpool">
                    <!--
                      Subsequent <filesystem> entries instruct an installer
                      to create the following ZFS datasets:

                          <root_pool>/export         (mounted on /export)
                          <root_pool>/export/home    (mounted on /export/home)

                      Those datasets are part of the standard environment
                      and should always be created.

                      In rare cases, if there is a need to deploy a zone
                      without these datasets, either comment out or remove
                      the <filesystem> entries. In such a scenario, it must
                      also be assured that, in case of non-interactive
                      post-install configuration, creation of the initial
                      user account is disabled in the related system
                      configuration profile. Otherwise the installed zone
                      would fail to boot.
                    -->
                    <filesystem name="export" mountpoint="/export"/>
                    <filesystem name="export/home"/>
                    <be name="solaris">
                        <options>
                            <option name="compression" value="on"/>
                        </options>
                    </be>
                </zpool>
            </logical>
        </target>
        <software type="IPS">
            <destination>
                <image>
                    <!-- Specify locales to install -->
                    <facet set="false">facet.locale.*</facet>
                    <facet set="true">facet.locale.de</facet>
                    <facet set="true">facet.locale.de_DE</facet>
                    <facet set="true">facet.locale.en</facet>
                    <facet set="true">facet.locale.en_US</facet>
                    <facet set="true">facet.locale.es</facet>
                    <facet set="true">facet.locale.es_ES</facet>
                    <facet set="true">facet.locale.fr</facet>
                    <facet set="true">facet.locale.fr_FR</facet>
                    <facet set="true">facet.locale.it</facet>
                    <facet set="true">facet.locale.it_IT</facet>
                    <facet set="true">facet.locale.ja</facet>
                    <facet set="true">facet.locale.ja_*</facet>
                    <facet set="true">facet.locale.ko</facet>
                    <facet set="true">facet.locale.ko_*</facet>
                    <facet set="true">facet.locale.pt</facet>
                    <facet set="true">facet.locale.pt_BR</facet>
                    <facet set="true">facet.locale.zh</facet>
                    <facet set="true">facet.locale.zh_CN</facet>
                    <facet set="true">facet.locale.zh_TW</facet>
                </image>
            </destination>
            <software_data action="install">
                <name>pkg:/group/system/solaris-small-server</name>
            </software_data>
        </software>
    </ai_instance>
</auto_install>

(Optional) We can customize the default AI manifest for zones. Create a backup copy of the file under a different name, for example, zone_default2.xml, and edit the copy:

# cp /usr/share/auto_install/manifest/zone_default.xml /var/tmp/zone_default2.xml

Edit the copy (/var/tmp/zone_default2.xml). The following example adds the /var/tmp/zone_default2.xml AI manifest to the s11x86service install service and specifies that zone1 and zone2 should use this manifest.
# installadm create-manifest -n s11x86service -f /var/tmp/zone_default2.xml -m zones_manifest -c zonename="zone1 zone2"

Note: Do not use the following elements or attributes in a non-global zone AI manifest:

    The auto_reboot attribute of the ai_instance element
    The http_proxy attribute of the ai_instance element
    The disk child element of the target element
    The noswap attribute of the logical element
    The nodump attribute of the logical element
    The configuration element

Step 6: Global zone profile setup

We are going to create a global zone configuration profile which includes the host information, for example: host name, IP address, name services, etc.

# sysconfig create-profile -o /var/tmp/gz_profile.xml

You need to provide the host information, for example:

    Default router
    Root password
    DNS information

The output should eventually disappear and be replaced by the initial screen of the System Configuration Tool (see Figure 2), where you can do the final configuration.

Figure 2. Profile creation menu

You can validate the profile using the following command:

# installadm validate -n s11x86service -P /var/tmp/gz_profile.xml
Validating static profile gz_profile.xml...  Passed

Next, associate the profile with the install service. In our case, use the following syntax:

# installadm create-profile -n s11x86service -f /var/tmp/gz_profile.xml -p gz_profile

You can verify the profile creation using the following command:

# installadm list -n s11x86service -p
Service/Profile Name  Criteria
--------------------  --------
s11x86service
   gz_profile         None

We can see that the gz_profile has been created and associated with the s11x86service install service.

Step 7: Set up the Solaris Zones configuration profiles

This step is similar to the global zone profile creation in Step 6.

# sysconfig create-profile -o /var/tmp/zone1_profile.xml
# sysconfig create-profile -o /var/tmp/zone2_profile.xml

You can validate the profiles using the following commands:

# installadm validate -n s11x86service -P /var/tmp/zone1_profile.xml
Validating static profile zone1_profile.xml...  Passed
# installadm validate -n s11x86service -P /var/tmp/zone2_profile.xml
Validating static profile zone2_profile.xml...  Passed

Next, associate the profiles with the install service. The following example adds the zone1_profile.xml configuration profile to the s11x86service install service and specifies that zone1 should use this profile:

# installadm create-profile -n s11x86service -f /var/tmp/zone1_profile.xml -p zone1_profile -c zonename=zone1

The following example adds the zone2_profile.xml configuration profile to the s11x86service install service and specifies that zone2 should use this profile:

# installadm create-profile -n s11x86service -f /var/tmp/zone2_profile.xml -p zone2_profile -c zonename=zone2

You can verify the profile creation using the following command:

# installadm list -n s11x86service -p
Service/Profile Name  Criteria
--------------------  --------
s11x86service
   zone1_profile      zonename = zone1
   zone2_profile      zonename = zone2
   gz_profile         None

We can see that we have three profiles in the s11x86service install service:

    Global zone    gz_profile
    zone1          zone1_profile
    zone2          zone2_profile
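The per-zone validate and create-profile commands above follow an obvious pattern; with more zones, the same work can be done with a small loop. A sketch using this example's naming convention (profiles named <zonename>_profile under /var/tmp):

# for z in zone1 zone2 ; do
    installadm validate -n s11x86service -P /var/tmp/${z}_profile.xml
    installadm create-profile -n s11x86service -f /var/tmp/${z}_profile.xml -p ${z}_profile -c zonename=$z
  done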
Step 8: Global zone setup

Associate the global zone client with the manifest and the profile that we created in the previous steps. The following example adds the manifest and profile to the client (global zone), where:

    gzmanifest is the name of the manifest.
    gz_profile is the name of the configuration profile.
    mac="0:14:4f:2:a:19" is the client (global zone) MAC address.
    s11x86service is the install service name.

# installadm set-criteria -m gzmanifest -p gz_profile -c mac="0:14:4f:2:a:19" -n s11x86service

You can verify the manifest and profile association using the following command:

# installadm list -n s11x86service -p -m
Service/Manifest Name  Status   Criteria
---------------------  ------   --------
s11x86service
   gzmanifest                   mac      = 00:14:4F:02:0A:19
   orig_default        Default  None

Service/Profile Name  Criteria
--------------------  --------
s11x86service
   gz_profile         mac      = 00:14:4F:02:0A:19
   zone2_profile      zonename = zone2
   zone1_profile      zonename = zone1

Step 9: Provision the host with the non-global zones

The next step is to boot the client system off the network and provision it using the Automated Install service that we just set up. First, boot the client system. Figure 3 shows the network boot attempt (when done on an x86 system):

Figure 3. Network Boot

Then you will be prompted by a GRUB menu, with a timer, as shown in Figure 4. The default selection (the "Text Installer and command line" option) is highlighted. Press the down arrow to highlight the second option, labeled Automated Install, and then press Enter. The reason we need to do this is that we want to prevent a system from being automatically reinstalled if it were to be booted from the network accidentally.

Figure 4. GRUB Menu

What follows is the continuation of a networked boot from the Automated Install server. The client downloads a mini-root (a small set of files sufficient to run the installer), identifies the location of the Automated Install manifest on the network, retrieves that manifest, and then processes it to identify the address of the IPS repository from which to obtain the desired software payload. Non-global zones are installed and configured on the first reboot after the global zone is installed.

You can list the status of all the Solaris Zones using the following command:

# zoneadm list -civ

Once the zones are in the running state, you can log in to a zone using the following command:

# zlogin zone1

Troubleshooting Automated Installations

If an installation to a client system fails, you can find the client log at /system/volatile/install_log.

NOTE: Zones are not installed if any of the following errors occurs:

    A zone config file is not syntactically correct.
    A collision exists among zone names, zone paths, or delegated ZFS datasets in the set of zones to be installed.
    Required datasets are not configured in the global zone.

For more troubleshooting information see "Installing Oracle Solaris 11 Systems".

Conclusion

This paper demonstrated the benefits of using the Automated Install server to simplify non-global zone setup, including the creation and configuration of the global zone manifest and the Solaris Zones profiles.



Oracle Solaris 8 P2V with Oracle database 10.2 and ASM

Background information

In this document I will demonstrate the following scenario: migration of a physical Solaris 8 system running Oracle Database 10.2.0.5 with an ASM file system located on SAN storage, into a Solaris 8 branded zone inside a Solaris 10 guest domain on top of a Solaris 11 control domain. In the first example we will preserve the host information; in the second example we will modify the host name.

Executive summary

In this document I will demonstrate how we managed to leverage the Solaris 8 P2V tool to migrate a physical Solaris 8 system with an Oracle database on an ASM file system into a Solaris 8 branded zone. The ASM file system is located on a LUN in SAN storage connected via an FC HBA. During the migration we used the same LUN on the source and target servers in order to avoid data migration. The P2V tool successfully migrated the Solaris 8 physical system into the Solaris 8 branded zone, and the zone was able to access the ASM file system.

Architecture layout

Source system

Hardware details: Sun Fire V440 server with 4 UltraSPARC IIIi CPUs at 1593 MHz and 8 GB of RAM.
Operating system: Solaris 8 2/04 + latest recommended patch set.

Target system

Oracle's SPARC T4-1 server with a single 8-core, 2.85 GHz SPARC T4 processor and 32 GB of RAM.
Install Solaris 11.

Setting up the control domain

primary# ldm add-vcc port-range=5000-5100 primary-vcc0 primary
primary# ldm add-vds primary-vds0 primary
primary# ifconfig -a
net0: flags=1000843 mtu 1500 index 2
        inet 10.162.49.45 netmask ffffff00 broadcast 10.162.49.255
        ether 0:14:4f:ab:e3:7a
primary# ldm add-vsw net-dev=net0 primary-vsw0 primary
primary# ldm list-services primary
VCC
    NAME          LDOM     PORT-RANGE
    primary-vcc0  primary  5000-5100
VSW
    NAME          LDOM     MAC                NET-DEV  ID  DEVICE    DEFAULT-VLAN-ID  PVID  MTU   INTER-VNET-LINK
    primary-vsw0  primary  00:14:4f:fb:44:4d  net0     0   switch@0  1                1     1500  on
VDS
    NAME          LDOM     VOLUME  OPTIONS  MPGROUP  DEVICE
    primary-vds0  primary
primary# ldm set-mau 1 primary
primary# ldm set-vcpu 16 primary
primary# ldm start-reconf primary
primary# ldm set-memory 8G primary
primary# ldm add-config initial
primary# ldm list-config
factory-default
initial [current]
primary-with-clients
primary# shutdown -y -g0 -i6

Enable the virtual console service:

primary# svcadm enable vntsd
primary# svcs vntsd
STATE          STIME    FMRI
online         15:55:10 svc:/ldoms/vntsd:default

Setting up the guest domain

primary# ldm add-domain ldg1
primary# ldm add-vcpu 32 ldg1
primary# ldm add-memory 8G ldg1
primary# ldm add-vnet vnet0 primary-vsw0 ldg1
primary# ldm add-vnet vnet1 primary-vsw0 ldg1
primary# ldm add-vdsdev /dev/dsk/c3t1d0s2 vol1@primary-vds0
primary# ldm add-vdisk vdisk1 vol1@primary-vds0 ldg1
primary# ldm set-var auto-boot\?=true ldg1
primary# ldm set-var boot-device=vdisk1 ldg1
primary# ldm bind-domain ldg1
primary# ldm start-domain ldg1
primary# telnet localhost 5000
{0} ok boot net - install

Install Solaris 10 Update 10 (Solaris 10 8/11). Verify that all the Solaris services on the guest LDom are up and running:

guest # svcs -xv

Oracle Solaris Legacy Containers install

The Oracle Solaris Legacy Containers download includes two versions of the product:

- Oracle Solaris Legacy Containers 1.0.1, for Oracle Solaris 10 10/08 or later
- Oracle Solaris Legacy Containers 1.0, for Oracle Solaris 10 08/07 and Oracle Solaris 10 05/08

Both product versions contain identical features. The 1.0.1 product depends on Solaris packages introduced in Solaris 10 10/08. The 1.0 product delivers these packages to pre-10/08 versions of Solaris.
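To confirm which product version applies, check the Solaris release on the guest domain; Solaris 10 8/11 reports a release string like the one below (first line of output shown, and the exact string will vary by platform):

guest # cat /etc/release
                    Oracle Solaris 10 8/11 s10s_u10wos_17b SPARC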
We will use Oracle Solaris Legacy Containers 1.0.1, since our Solaris 10 version is 8/11. To install the Oracle Solaris Legacy Containers 1.0.1 software:

1. Download the Oracle Solaris Legacy Containers software bundle from http://www.oracle.com.
2. Unarchive and install the 1.0.1 software package:

guest # unzip solarislegacycontainers-solaris10-sparc.zip
guest # cd solarislegacycontainers/1.0.1/Product
guest # pkgadd -d `pwd` SUNWs8brandk

Starting the migration

On the source system:

sol8# su - oracle

Shut down the Oracle database and ASM instance:

sol8$ sqlplus "/as sysdba"
SQL*Plus: Release 10.2.0.5.0 - Production on Sun Aug 26 13:19:48 2012
Copyright (c) 1982, 2010, Oracle. All Rights Reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> shutdown immediate

sol8$ export ORACLE_SID=+ASM
sol8$ sqlplus "/as sysdba"
SQL*Plus: Release 10.2.0.5.0 - Production on Sun Aug 26 13:21:38 2012
Copyright (c) 1982, 2010, Oracle. All Rights Reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> shutdown
ASM diskgroups dismounted
ASM instance shutdown

Stop the listener:

sol8$ lsnrctl stop
LSNRCTL for Solaris: Version 10.2.0.5.0 - Production on 26-AUG-2012 13:23:49
Copyright (c) 1991, 2010, Oracle. All rights reserved.
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
The command completed successfully

Create the archive:

sol8 # flarcreate -S -n s8-system /export/home/s8-system.flar

Copy the archive to the target guest domain.

On the target system

Move and connect the SAN storage to the target system. On the control domain, add the SAN storage LUN to the guest domain:

primary # ldm add-vdsdev /dev/dsk/c5t40d0s6 oradata@primary-vds0
primary # ldm add-vdisk oradata oradata@primary-vds0 ldg1

On the guest domain, verify that you can access the LUN:

guest# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c0d0
          /virtual-devices@100/channel-devices@200/disk@0
       1. c0d2
          /virtual-devices@100/channel-devices@200/disk@2

Set up the Oracle Solaris 8 branded zone on the guest domain

The Oracle Solaris 8 branded zone s8-zone is configured with the zonecfg command. Here is the output of the zonecfg -z s8-zone info command after configuration is completed:

guest# zonecfg -z s8-zone info
zonename: s8-zone
zonepath: /zones/s8-zone
brand: solaris8
autoboot: true
bootargs:
pool:
limitpriv: default,proc_priocntl,proc_lock_memory
scheduling-class: FSS
ip-type: exclusive
hostid:
net:
        address not specified
        physical: vnet1
        defrouter not specified
device
        match: /dev/rdsk/c0d2s0
attr:
        name: machine
        type: string
        value: sun4u

Install the Solaris 8 zone:

guest# zoneadm -z s8-zone install -p -a /export/home/s8-system.flar

Boot the Solaris 8 zone:

guest# zoneadm -z s8-zone boot
guest # zlogin -C s8-zone
sol8_zone# su - oracle

Modify the ASM disk ownership:

sol8_zone# chown oracle:dba /dev/rdsk/c0d2s0

Start the listener:

sol8_zone$ lsnrctl start

Start the ASM instance:

sol8_zone$ export ORACLE_SID=+ASM
sol8_zone$ sqlplus / as sysdba
SQL*Plus: Release 10.2.0.5.0 - Production on Sun Aug 26 14:36:44 2012
Copyright (c) 1982, 2010, Oracle. All Rights Reserved.
Connected to an idle instance.
SQL> startup
ASM instance started
Total System Global Area  130023424 bytes
Fixed Size                  2050360 bytes
Variable Size             102807240 bytes
ASM Cache                  25165824 bytes
ASM diskgroups mounted
SQL> quit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

Start the database:

sol8_zone$ export ORACLE_SID=ORA10
sol8_zone$ sqlplus / as sysdba
SQL*Plus: Release 10.2.0.5.0 - Production on Sun Aug 26 14:37:13 2012
Copyright (c) 1982, 2010, Oracle. All Rights Reserved.
Connected to an idle instance.
SQL> startup
ORACLE instance started.
Total System Global Area 1610612736 bytes
Fixed Size                  2052448 bytes
Variable Size             385879712 bytes
Database Buffers         1207959552 bytes
Redo Buffers               14721024 bytes
Database mounted.
Database opened.

Second example

In this example we will modify the host name. Install the zone with the -u (sys-unconfig) option:

guest # zoneadm -z s8-zone install -u -a /net/server/s8_image.flar

Boot the zone:

guest # zoneadm -z s8-zone boot

Configure the zone with a new IP address and a new host name:

guest # zlogin -C s8-zone

Modify the ASM disk ownership:

sol8_zone # chown oracle:dba /dev/rdsk/c0d2s0
sol8_zone # cd $ORACLE_HOME/bin

Reconfigure the ASM parameters:

sol8_zone # ./localconfig delete
Aug 27 05:17:11 s8-zone last message repeated 3 times
Aug 27 05:17:28 s8-zone root: Oracle CSSD being stopped
Stopping CSSD.
Unable to communicate with the CSS daemon.
Shutdown has begun. The daemons should exit soon.
sol8_zone # ./localconfig add
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'other'..
Operation successful.
Configuration for local CSS has been initialized
sol8_zone # su - oracle

Start the listener, the ASM instance, and the Oracle database.
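That closing step repeats the startup sequence from the first example; as a compact recap, these are the same commands shown above, run as the oracle user inside the zone:

sol8_zone$ lsnrctl start
sol8_zone$ export ORACLE_SID=+ASM
sol8_zone$ sqlplus "/as sysdba"
SQL> startup
SQL> quit
sol8_zone$ export ORACLE_SID=ORA10
sol8_zone$ sqlplus "/as sysdba"
SQL> startup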



Solaris 11 Firewall

Oracle Solaris 11 includes a software firewall. In the cloud, this means that the need for expensive network hardware can be reduced, while changes to network configurations can be made quickly and easily. You can use the following script to manage the Solaris 11 firewall. The script runs on Solaris 11 (global zone) and on a Solaris 11 zone with an exclusive IP stack (the default).

Script usage and examples:

Enable and start the firewall service; in addition, this reads the firewall rules from /etc/ipf/ipf.conf (for more firewall rule examples, see here):

# fw.ksh start

Disable and stop the firewall service:

# fw.ksh stop

Restart the firewall service after modifying the rules in /etc/ipf/ipf.conf:

# fw.ksh restart

Check the firewall status; the script will print the firewall status (online or offline) and the active rules:

# fw.ksh status

This section provides the script. The recommendation is to copy the content and paste it into the suggested file name using gedit to create the file on Oracle Solaris 11.

# more fw.ksh
#! /bin/ksh
#
# FILENAME:    fw.ksh
# Manage the Solaris firewall
# Usage:
# fw.ksh {start|stop|restart|status}

case "$1" in
start)
        # Enable the IPFilter service and wait until it settles
        /usr/sbin/svcadm enable svc:/network/ipfilter:default
        serviceStatus=`/usr/bin/svcs -H -o STATE svc:/network/ipfilter:default`
        while [[ $serviceStatus != online && $serviceStatus != maintenance ]] ; do
            sleep 5
            serviceStatus=`/usr/bin/svcs -H -o STATE svc:/network/ipfilter:default`
        done
        # Flush any active rules, then load the rule set
        /usr/sbin/ipf -Fa -f /etc/ipf/ipf.conf
   ;;
restart)
        $0 stop
        $0 start
   ;;
stop)
        /usr/sbin/svcadm disable svc:/network/ipfilter:default
   ;;
status)
        serviceStatus=`/usr/bin/svcs -H -o STATE svc:/network/ipfilter:default`
        if [[ $serviceStatus != "online" ]] ; then
            /usr/bin/echo "The Firewall service is offline"
        else
            /usr/bin/echo "\nThe Firewall service is online\n"
            # Show the active inbound and outbound rules
            /usr/sbin/ipfstat -io
        fi
   ;;
*)
        /usr/bin/echo "Usage: $0 {start|stop|restart|status}"
        exit 1
   ;;
esac
exit 0
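The script expects its rules in /etc/ipf/ipf.conf. For reference, here is a minimal illustrative rule set; the interface name net0 and the policy (allow SSH in, allow everything out, block the rest) are assumptions for the example, not a recommendation:

# /etc/ipf/ipf.conf
pass out quick on net0 all keep state
pass in quick on net0 proto tcp from any to any port = 22 keep state
block in on net0 all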



LDoms with Solaris 11

Oracle VM Server for SPARC (LDoms) release 2.2 came out on May 24. You can get the software and see the release notes, reference manual, and admin guide on the Oracle VM Server for SPARC page. Oracle VM Server for SPARC enables you to create multiple virtual systems on a single physical system. Each virtual system is called a logical domain and runs its own instance of Oracle Solaris 10 or Oracle Solaris 11.

The version of the Oracle Solaris OS software that runs on a guest domain is independent of the Oracle Solaris OS version that runs on the primary domain. So, if you run the Oracle Solaris 10 OS in the primary domain, you can still run the Oracle Solaris 11 OS in a guest domain, and if you run the Oracle Solaris 11 OS in the primary domain, you can still run the Oracle Solaris 10 OS in a guest domain.

In addition, starting with the Oracle VM Server for SPARC 2.2 release, you can migrate a guest domain even if the source and target machines have different processor types. For example, you can migrate a guest domain from a system with an UltraSPARC T2+ or SPARC T3 CPU to a system with a SPARC T4 CPU. To enable cross-CPU migration, the guest domain on the source and target systems must run Solaris 11, and you need to change the cpu-arch property value on the source system. For more information about Oracle VM Server for SPARC (LDoms) with Solaris 11 and cross-CPU migration, refer to the following white paper.
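As a minimal sketch of the cpu-arch change mentioned above (the domain name ldg1 is an assumption for the example; the domain must not be running when the property is changed):

primary# ldm stop-domain ldg1
primary# ldm unbind-domain ldg1
primary# ldm set-domain cpu-arch=generic ldg1
primary# ldm bind-domain ldg1
primary# ldm start-domain ldg1

With cpu-arch=generic, the domain runs with a common subset of CPU features, which is what allows it to move between the different processor types.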



Oracle Solaris Zones Physical to virtual (P2V)

Introduction

This document describes the process of creating a Solaris 10 image from a physical system and migrating it into a virtualized operating system environment using the Oracle Solaris 10 Zones Physical-to-Virtual (P2V) capability. Using an example and various scenarios, this paper describes how to combine the P2V capability with other Oracle Solaris features: optimizing performance using Solaris 10 resource management, advanced storage management using Solaris ZFS, and improved operating system visibility with Solaris DTrace.

The most common use for this tool is consolidation of existing systems onto virtualization-enabled platforms. We can also use the P2V capability for other tasks, for example, backing up a physical system and moving it into a virtualized operating system environment hosted on a Disaster Recovery (DR) site, or building an Oracle Solaris 10 image repository with various configurations and different software packages in order to reduce provisioning time.

Oracle Solaris Zones

Oracle Solaris Zones is a virtualization and partitioning technology supported on Oracle Sun servers powered by SPARC and Intel processors. This technology provides an isolated and secure environment for running applications. A zone is a virtualized operating system environment created within a single instance of the Solaris 10 Operating System. Each virtual system is called a zone and runs a unique and distinct copy of the Solaris 10 operating system.

Oracle Solaris Zones Physical-to-Virtual (P2V)

P2V is a new feature in Solaris 10 9/10. This feature provides the ability to build a Solaris 10 image from a physical system and migrate it into a virtualized operating system environment. There are three main steps when using this tool:

1. Image creation on the source system; this image includes the operating system and, optionally, the software we want to include within the image.
2. Preparing the target system by configuring a new zone that will host the new image.
3. Image installation on the target system using the image we created in step 1.

The host where the image is built is referred to as the source system, and the host where the image is installed is referred to as the target system.

Benefits of Oracle Solaris Zones Physical-to-Virtual (P2V)

Here are some benefits of this new feature:

  Simple - easy build process using Oracle Solaris 10 built-in commands.
  Robust - based on Oracle Solaris Zones, a robust and well-known virtualization technology.
  Flexible - supports migration from V-series servers to T-series or M-series systems. For the latest server information, refer to the Sun Servers web page.

Prerequisites

The minimum Solaris version on the target system is Solaris 10 9/10. Refer to the latest Administration Guide for Oracle Solaris 10 for a complete procedure on how to download and install Oracle Solaris.

NOTE: If the source system used to build the image runs an older version than the target system, then during the process the operating system will be upgraded to Solaris 10 9/10 (update on attach).
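A quick way to confirm the release on the target system before starting (the release string shown is for Solaris 10 9/10 on SPARC and will vary by platform and update):

Target # cat /etc/release
                   Oracle Solaris 10 9/10 s10s_u9wos_14a SPARC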
Creating the image used to distribute the software

We will create an image on the source machine. We can create the image on the local file system and then transfer it to the target machine, or build it on NFS-shared storage and mount the NFS file system from the target machine. Optionally, before creating the image, complete the installation of any software that we want to include within the Solaris 10 image.

An image is created by using the flarcreate command:

Source # flarcreate -S -n s10-system -L cpio /var/tmp/solaris_10_up9.flar

The command does the following:

  -S specifies that we skip the disk space check and do not write archive size data to the archive (faster).
  -n specifies the image name.
  -L specifies the archive format (cpio, in this case).

Optionally, we can add descriptions to the archive identification section, which can help to identify the archive later:

Source # flarcreate -S -n s10-system -e "Oracle Solaris with Oracle DB 10.2.0.4" -a "oracle" -L cpio /var/tmp/solaris_10_up9.flar

You can see an example of the archive identification section in Appendix A: archive identification section.

We can compress the flar image using the gzip command, or by adding the -c option to the flarcreate command:

Source # gzip /var/tmp/solaris_10_up9.flar

An md5 checksum can be created for the image in order to ensure no data tampering:

Source # digest -v -a md5 /var/tmp/solaris_10_up9.flar

Moving the image to the target system

If we created the image on the local file system, we need to transfer the flar archive from the source machine to the target machine:

Source # scp /var/tmp/solaris_10_up9.flar target:/var/tmp

Configuring the zone on the target system

After copying the software to the target machine, we need to configure a new zone in order to host the new image. To install the new zone on the target machine, first we need to configure the zone (for the full zone creation options, see the following link: http://docs.oracle.com/cd/E18752_01/html/817-1592/index.html).

ZFS integration

A flash archive can be created on a system that is running a UFS or a ZFS root file system. NOTE: If you create a Solaris Flash archive of a Solaris 10 system that has a ZFS root, then by default the flar will actually be a ZFS send stream, which can be used to recreate the root pool. This image cannot be used to install a zone. You must create the flar with an explicit cpio or pax archive when the system has a ZFS root. Use the flarcreate command with the -L archiver option, specifying cpio or pax as the method to archive the files (for example, see Step 1 in the previous section).
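For a ZFS-root source system, the pax variant of the Step 1 command would look like this (same flags as the cpio example above; only the archiver changes):

Source # flarcreate -S -n s10-system -L pax /var/tmp/solaris_10_up9.flar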
Optionally, on the target system you can create the zone root folder on a ZFS file system in order to benefit from the ZFS features (clones, snapshots, etc.):

Target # zpool create zones c2t2d0

Create the zone root folder:

Target # chmod 700 /zones

Target # zonecfg -z solaris10-up9-zone
solaris10-up9-zone: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:solaris10-up9-zone> create -b
zonecfg:solaris10-up9-zone> set zonepath=/zones/solaris10-up9-zone
zonecfg:solaris10-up9-zone> set autoboot=true
zonecfg:solaris10-up9-zone> add net
zonecfg:solaris10-up9-zone:net> set address=192.168.0.1
zonecfg:solaris10-up9-zone:net> set physical=nxge0
zonecfg:solaris10-up9-zone:net> end
zonecfg:solaris10-up9-zone> verify
zonecfg:solaris10-up9-zone> commit
zonecfg:solaris10-up9-zone> exit

Installing the zone on the target system using the image

Install the configured zone solaris10-up9-zone by using the zoneadm command with the install -a option and the path to the archive. The following example shows how to install the image and sys-unconfig the zone:

Target # zoneadm -z solaris10-up9-zone install -u -a /var/tmp/solaris_10_up9.flar
Log File: /var/tmp/solaris10-up9-zone.install_log.AJaGve
Installing: This may take several minutes...

The following example shows how we can preserve the system identity:

Target # zoneadm -z solaris10-up9-zone install -p -a /var/tmp/solaris_10_up9.flar

Resource management

Some applications are sensitive to the number of CPUs on the target zone. You can match the number of CPUs on the zone using the zonecfg command:

zonecfg:solaris10-up9-zone> add dedicated-cpu
zonecfg:solaris10-up9-zone:dedicated-cpu> set ncpus=16
zonecfg:solaris10-up9-zone:dedicated-cpu> end

DTrace integration

Some applications might need to be analyzed using DTrace on the target zone. You can add DTrace support to the zone using the zonecfg command:

zonecfg:solaris10-up9-zone> set limitpriv="default,dtrace_proc,dtrace_user"

Exclusive IP stack

An Oracle Solaris Container running on Oracle Solaris 10 can have a shared IP stack with the global zone, or it can have an exclusive IP stack (which was released in Oracle Solaris 10 8/07). An exclusive IP stack provides a complete, tunable, manageable, and independent networking stack to each zone. A zone with an exclusive IP stack can configure Scalable TCP (STCP), IP routing, IP multipathing, or IPsec. For an example of how to configure an Oracle Solaris zone with an exclusive IP stack, see the following:

zonecfg:solaris10-up9-zone> set ip-type=exclusive
zonecfg:solaris10-up9-zone> add net
zonecfg:solaris10-up9-zone:net> set physical=nxge0
zonecfg:solaris10-up9-zone:net> end

When the installation completes, use the zoneadm list -i -v options to list the installed zones and verify the status:

Target # zoneadm list -i -v

See that the new zone's status is installed:

ID NAME                STATUS     PATH                        BRAND   IP
 0 global              running    /                           native  shared
 - solaris10-up9-zone  installed  /zones/solaris10-up9-zone   native  shared

Now boot the zone:

Target # zoneadm -z solaris10-up9-zone boot

We need to log in to the zone in order to complete the zone setup, or insert a sysidcfg file before booting the zone for the first time (see the example sysidcfg file in Appendix B: sysidcfg file section):

Target # zlogin -C solaris10-up9-zone

Troubleshooting

If an installation fails, review the log file. On success, the log file is in /var/log inside the zone. On failure, the log file is in /var/tmp in the global zone.
If a zone installation is interrupted or fails, the zone is left in the incomplete state. Use uninstall -F to reset the zone to the configured state:

Target # zoneadm -z solaris10-up9-zone uninstall -F
Target # zonecfg -z solaris10-up9-zone delete -F

Conclusion

The Oracle Solaris Zones P2V tool provides the flexibility to build pre-configured images with different software configurations for faster deployment and server consolidation. In this document, I demonstrated how to build and install images and how to integrate the images with other Oracle Solaris features like ZFS and DTrace.

Appendix A: archive identification section

We can use the head -n 20 /var/tmp/solaris_10_up9.flar command in order to access the identification section that contains the detailed description:

Target # head -n 20 /var/tmp/solaris_10_up9.flar
FlAsH-aRcHiVe-2.0
section_begin=identification
archive_id=e4469ee97c3f30699d608b20a36011be
files_archived_method=cpio
creation_date=20100901160827
creation_master=mdet5140-1
content_name=s10-system
creation_node=mdet5140-1
creation_hardware_class=sun4v
creation_platform=SUNW,T5140
creation_processor=sparc
creation_release=5.10
creation_os_name=SunOS
creation_os_version=Generic_142909-16
files_compressed_method=none
content_architectures=sun4v
type=FULL
section_end=identification
section_begin=predeployment
begin 755 predeployment.cpio.Z

Appendix B: sysidcfg file section

Target # cat sysidcfg
system_locale=C
timezone=US/Pacific
terminal=xterms
security_policy=NONE
root_password=HsABA7Dt/0sXX
timeserver=localhost
name_service=NONE
network_interface=PRIMARY {
                hostname=solaris10-up9-zone
                netmask=255.255.255.0
                protocol_ipv6=no
                default_route=192.168.0.1
}
nfs4_domain=dynamic

We need to copy this file before booting the zone:

Target # cp sysidcfg /zones/solaris10-up9-zone/root/etc/



VirtualBox Teleporting

ABSTRACT

In this entry, I will demonstrate how to use the new feature of VirtualBox version 3.1, migration (a.k.a. teleporting), for moving a virtual machine over a network from one VirtualBox host to another while the virtual machine is running.

Introduction to VirtualBox

VirtualBox is a general-purpose full virtualizer for x86 hardware. Targeted at server, desktop, and embedded use, it is now the only professional-quality virtualization solution that is also Open Source Software.

Introduction to teleporting

Teleporting requires that a machine be currently running on one host, which is then called the "source". The host to which the virtual machine will be teleported will then be called the "target". The machine on the target is then configured to wait for the source to contact the target. The machine's running state will then be transferred from the source to the target with minimal downtime. This works regardless of the host operating system that is running on the hosts: you can teleport virtual machines between Solaris and Mac hosts, for example.

Architecture layout:

Prerequisites:

1. The target and source machines should both be running VirtualBox version 3.1 or later.
2. The target machine must be configured with the same amount of memory (machine and video memory) and other hardware settings as the source machine. Otherwise teleporting will fail with an error message.
3. The two virtual machines on the source and the target must share the same storage: they either use the same iSCSI targets, or both hosts have access to the same storage via NFS or SMB/CIFS.
4. The source and target machines cannot have any snapshots.
5. The hosts must have fairly similar CPUs. Teleporting between Intel and AMD CPUs will probably fail with an error message.

Preparing the storage environment

For this example, I will use OpenSolaris x86 as a CIFS server in order to provide shared storage for the source and target machines, but you can use any iSCSI, NFS, or CIFS server for this task.

Install the packages from the OpenSolaris.org repository:

# pfexec pkg install SUNWsmbs SUNWsmbskr

Reboot the system to activate the SMB server in the kernel:

# pfexec reboot

Enable the CIFS service:

# pfexec svcadm enable -r smb/server

If the following warning is issued, you can ignore it:

svcadm: svc:/milestone/network depends on svc:/network/physical, which has multiple instances

Verify the service:

# pfexec svcs smb/server
STATE          STIME    FMRI
online          8:38:22 svc:/network/smb/server:default

The Solaris CIFS SMB service uses WORKGROUP as the default group. If the workgroup needs to be changed, use the following command to change the workgroup name:

# pfexec smbadm join -w workgroup-name

Next, edit the /etc/pam.conf file to enable encrypted passwords to be used for CIFS. Add the following line to the end of the file:

other password required pam_smb_passwd.so.1     nowarn

# pfexec echo "other password required pam_smb_passwd.so.1     nowarn" >> /etc/pam.conf

Each user currently in the /etc/passwd file needs to re-encrypt to be able to use the CIFS service:

# pfexec passwd user-name

Note - After the PAM module is installed, the passwd command automatically generates CIFS-suitable passwords for new users.
You must also run the passwd command to generate CIFS-style passwords for existing users.

Create a mixed-case ZFS file system:

# pfexec zfs create -o casesensitivity=mixed rpool/vboxstorage

Enable SMB sharing for the ZFS file system:

# pfexec zfs set sharesmb=on rpool/vboxstorage

Verify how the file system is shared:

# pfexec sharemgr show -vp

Now you can access the share by connecting to \\solaris-hostname\share-name.

Create a new virtual machine. For the virtual hard disk, select "Create new hard disk", then select Next. For the disk location, enter the network drive that you mapped in the previous section, then press the Next button. Verify the disk settings and then press the Finish button. Continue with the install process; after finishing the install process, shut down the virtual machine in order to avoid any storage locking.

On the target machine:

Map the same network drive. Configure a new virtual machine, but instead of selecting "Create new hard drive", select "Use existing hard drive". In the Virtual Media Manager window, select the Add button and point to the same location as the source machine's hard drive (the network drive). Don't start the virtual machine yet.

To wait for a teleport request to arrive when it is started, use the following VBoxManage command:

VBoxManage modifyvm <targetvmname> --teleporter on --teleporterport <port>

where <targetvmname> is the name of the virtual machine on the target (in this use case, opensolaris), and <port> is a TCP/IP port number to be used on both the source and the target. In this example, I used port 6000.

C:\Program Files\Sun\VirtualBox>VBoxManage modifyvm opensolaris --teleporter on --teleporterport 6000

Next, start the VM on the target. You will see that instead of actually running, it shows a progress dialog, indicating that it is waiting for a teleport request to arrive. You can see that the machine status changed to Teleporting.

On the source machine:

Start the virtual machine. When it is running and you want it to be teleported, issue the following command:

VBoxManage controlvm <sourcevmname> teleport --host <targethost> --port <port>

where <sourcevmname> is the name of the virtual machine on the source (the machine that is currently running), <targethost> is the host or IP name of the target that has the machine that is waiting for the teleport request, and <port> must be the same number as specified in the command on the target (i.e., for this example, 6000).

C:\Program Files\Sun\VirtualBox>VBoxManage controlvm opensolaris teleport --host target_machine_ip --port 6000
VirtualBox Command Line Management Interface Version 3.1.0
(C) 2005-2009 Sun Microsystems, Inc.
All rights reserved.
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%

You can see that the machine status changed to Teleported. The teleporting process took about 5 seconds.

For more information about VirtualBox, see http://www.virtualbox.org.



Solaris Zones migration with ZFS

ABSTRACT

In this entry I will demonstrate how to migrate a Solaris Zone running on a T5220 server to a new T5220 server, using ZFS as the file system for this zone.

Introduction to Solaris Zones

Solaris Zones provide a new isolation primitive for the Solaris OS, which is secure, flexible, scalable, and lightweight. Virtualized OS services look like different Solaris instances. Together with the existing Solaris resource management framework, Solaris Zones form the basis of Solaris Containers.

Introduction to ZFS

ZFS is a new kind of file system that provides simple administration, transactional semantics, end-to-end data integrity, and immense scalability.

Architecture layout:

Prerequisites:

The global zone on the target system must be running the same Solaris release as the original host. To ensure that the zone will run properly, the target system must have the same versions of the following required operating system packages and patches as those installed on the original host:

  Packages that deliver files under an inherit-pkg-dir resource
  Packages where SUNW_PKG_ALLZONES=true

Other packages and patches, such as those for third-party products, can be different.

Note for Solaris 10 10/08: If the new host has later versions of the zone-dependent packages and their associated patches, using zoneadm attach with the -u option updates those packages within the zone to match the new host. The update on attach software looks at the zone that is being migrated and determines which packages must be updated to match the new host. Only those packages are updated. The rest of the packages, and their associated patches, can vary from zone to zone. This option also enables automatic migration between machine classes, such as from sun4u to sun4v.

Create the ZFS pool for the zone:

# zpool create zones c2t5d2
# zpool list
NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
zones   298G    94K   298G     0%  ONLINE  -

Create a ZFS file system for the zone:

# zfs create zones/zone1
# zfs list
NAME          USED  AVAIL  REFER  MOUNTPOINT
zones         130K   293G    18K  /zones
zones/zone1    18K   293G    18K  /zones/zone1

Change the file system permissions:

# chmod 700 /zones/zone1

Configure the zone:

# zonecfg -z zone1
zone1: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:zone1> create -b
zonecfg:zone1> set autoboot=true
zonecfg:zone1> set zonepath=/zones/zone1
zonecfg:zone1> add net
zonecfg:zone1:net> set address=192.168.1.1
zonecfg:zone1:net> set physical=e1000g0
zonecfg:zone1:net> end
zonecfg:zone1> verify
zonecfg:zone1> commit
zonecfg:zone1> exit

Install the new zone:

# zoneadm -z zone1 install

Boot the new zone:

# zoneadm -z zone1 boot

Log in to the zone:

# zlogin -C zone1

Answer all the setup questions.

How to validate a zone migration before the migration is performed

Generate the manifest on a source host named zone1 and pipe the output to a remote command that will immediately validate the target host:

# zoneadm -z zone1 detach -n | ssh targethost zoneadm -z zone1 attach -n -

Start the migration process

Halt the zone to be moved, zone1 in this procedure:

# zoneadm -z zone1 halt

Create a snapshot of this zone in order to save its original state:

# zfs snapshot zones/zone1@snap
# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
zones             4.13G   289G    19K  /zones
zones/zone1       4.13G   289G  4.13G  /zones/zone1
zones/zone1@snap      0      -  4.13G  -
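If the migration later has to be abandoned, this snapshot lets you revert the zone's dataset to its original state with a standard ZFS rollback (run on whichever host currently has the pool imported):

# zfs rollback zones/zone1@snap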
Detach the zone:

# zoneadm -z zone1 detach

Export the ZFS pool using the zpool export command:

# zpool export zones

On the target machine

Connect the storage to the machine and then import the ZFS pool on the target machine:

# zpool import zones
# zpool list
NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
zones   298G  4.13G   294G     1%  ONLINE  -
# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
zones             4.13G   289G    19K  /zones
zones/zone1       4.13G   289G  4.13G  /zones/zone1
zones/zone1@snap  2.94M      -  4.13G  -

On the new host, configure the zone:

# zonecfg -z zone1

You will see the following system message:

zone1: No such zone configured
Use 'create' to begin configuring a new zone.

To create the zone zone1 on the new host, use the zonecfg command with the -a option and the zonepath on the new host:

zonecfg:zone1> create -a /zones/zone1

Commit the configuration and exit:

zonecfg:zone1> commit
zonecfg:zone1> exit

Attach the zone with a validation check:

# zoneadm -z zone1 attach

The system administrator is notified of required actions to be taken if either or both of the following conditions are present:

  Required packages and patches are not present on the new machine.
  The software levels are different between the machines.

Note for Solaris 10 10/08: Attach the zone with a validation check, and update the zone upon attach to match a host running later versions of the dependent packages or having a different machine class:

# zoneadm -z zone1 attach -u

Solaris 10 5/09 and later: Also use the -b option to back out specified patches, either official or IDR, during the attach:

# zoneadm -z zone1 attach -u -b IDR246802-01 -b 123456-08

Note that you can use the -b option independently of the -u option.

Boot the zone:

# zoneadm -z zone1 boot

Log in to the new zone:

# zlogin -C zone1
[Connected to zone 'zone1' console]
Hostname: zone1

The whole process took approximately five minutes.

For more information about Solaris ZFS and Zones, see the Oracle Solaris documentation.



Performance Study / Best Practices for running MySQL on Xen Based Hypervisors

ABSTRACT

This blog entry provides technical insight into a benchmark of the MySQL database in a Xen virtualization environment based on the xVM hypervisor.

Introduction to the xVM hypervisor

The xVM hypervisor can securely execute multiple virtual machines simultaneously, each running its own operating system, on a single physical system. Each virtual machine instance is called a domain. There are two kinds of domains. The control domain is called domain0, or dom0. A guest OS, or unprivileged domain, is called a domainU or domU. Unlike virtualization using zones, each domain runs a full instance of an operating system.

Introduction to the MySQL database

The MySQL database is the world's most popular open source database because of its fast performance, high reliability, ease of use, and dramatic cost savings.

Tests objective:

The main objective is to bring an understanding of how MySQL behaves within a virtualized environment, using a UFS or ZFS file system.

Tests description:

We built a test environment using a Sun X4450. MySQL 5.4 was installed on OpenSolaris 2009.06 because of the OS's built-in integration with the xVM hypervisor. A separate set of performance tests was run with the MySQL data placed on a SAN disk. The xVM guest OS is OpenSolaris 2009.06. When running under xVM, the server resources (CPU, memory) were divided between dom0 and the domU guest OS:

  dom0 - 2 vcpus and 2 GB RAM
  domU - 4 vcpus and 6 GB RAM

We used a paravirtualized domU operating system in order to get the best performance. We chose to analyze the performance behavior of the InnoDB storage engine due to its high popularity, and of two file systems (ZFS and UFS) in order to check which file system performs better for MySQL. SysBench was used as the load-generation tool to test base performance for each configuration. The tool is simple to use, modular, cross-platform, and multi-threaded, and it can give a good feel for the performance of a simple database workload.

Hardware configuration:

Server: Sun X4450, with 2 x 2.9 GHz dual-core CPUs, 8 GB RAM, and 2 x 146 GB internal disks.
Storage: StorageTek 6140, configured RAID 0+1, directly attached to the server.

Software: MySQL 5.4, OpenSolaris 2009.06.

The SysBench script:

sysbench --test=oltp --mysql-table-engine=innodb --oltp-table-size=10000000 --mysql-socket=/tmp/mysql.sock --mysql-user=root prepare
sysbench --num-threads=8 --max-time=900 --max-requests=500000 --test=oltp --mysql-user=root --mysql-host=localhost --mysql-port=3306 --mysql-table-engine=innodb --oltp-test-mode=complex --oltp-table-size=80000000 run

We tested with different numbers of threads: 4, 8, 16, and 32 (via --num-threads).

The benchmark layout

After the creation of OpenSolaris 2009.06 in domU, we attached the SAN storage. Attach the file system to the guest:

xm block-attach para-opensolaris phy:/dev/dsk/c0t600A0B8000267DD400000A8D494DB1A6d0p0 3 w

Verify access to the file system from the guest:

root@para-opensolaris:~# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
     0. c7d3 <DEFAULT cyl 4096 alt 0 hd 128 sec 32>
        /xpvd/xdf@3
     1. c7t0d0 <DEFAULT cyl 3915 alt 0 hd 255 sec 63>
        /xpvd/xdf@51712

Create a pool on the attached disk:

root@para-opensolaris:~# zpool create -f xvmpool c7d3
root@para-opensolaris:~# zpool list
NAME      SIZE    USED   AVAIL    CAP  HEALTH  ALTROOT
rpool     29.8G   10.6G  19.2G    35%  ONLINE  -
xvmpool   117.94G 89.5K  117.94G   0%  ONLINE  -

The first results of running this benchmark on UFS.

The first results of running this benchmark on ZFS.

The results after matching the ZFS record size to the block size and limiting the ZFS ARC size:

zfs create -o recordsize=16k xvmpool/mysql
set zfs:zfs_arc_max = 0x10000000   (in /etc/system)

The results after disabling the ZFS cache flush (we have a battery-backed cache):

set zfs:zfs_nocacheflush = 1   (in /etc/system)

Conclusion

After the ZFS tuning we were able to achieve the same results as UFS; thus, we can benefit from extra ZFS features like snapshots and clones.

For more information about ZFS and OpenSolaris, see the OpenSolaris web site.



Logical Domains Physical-to-Virtual (P2V) Migration

The Logical Domains P2V migration tool automatically converts an existing physical system to a virtual system that runs in a logical domain on a chip multithreading (CMT) system. The source system can be any of the following:

  * Any sun4u SPARC system that runs at least the Solaris 8 Operating System
  * Any sun4v system that runs the Solaris 10 OS, but does not run in a logical domain

In this entry I will demonstrate how to use the Logical Domains P2V migration tool to migrate Solaris running on a V440 server (physical) into a guest domain running on a T5220 server (virtual).

Architecture layout:

Before you can run the Logical Domains P2V Migration Tool, ensure that the following are true:

  The target system runs at least Logical Domains 1.1 on one of the following:
    Solaris 10 10/08 OS
    Solaris 10 5/08 OS with the appropriate Logical Domains 1.1 patches
  Guest domains run at least the Solaris 10 5/08 OS
  The source system runs at least the Solaris 8 OS

In addition to these prerequisites, configure an NFS file system to be shared by both the source and target systems. This file system should be writable by root. However, if a shared file system is not available, use a local file system that is large enough to hold a file system dump of the source system, on both the source and target systems.

Limitations

Version 1.0 of the Logical Domains P2V Migration Tool has the following limitations:

  Only UFS file systems are supported.
  Each guest domain can have only a single virtual switch and virtual disk.
  The flash archiving method silently ignores excluded file systems.

The conversion from a physical system to a virtual system is performed in the following phases:

  Collection phase. Runs on the physical source system. collect creates a file system image of the source system based on the configuration information that it collects about the source system.
  Preparation phase. Runs on the control domain of the target system. prepare creates the logical domain on the target system based on the configuration information collected in the collect phase.
  Conversion phase. Runs on the control domain of the target system. In the convert phase, the created logical domain is converted into a logical domain that runs the Solaris 10 OS by using the standard Solaris upgrade process.

Collection phase

On the target machine (T5220)

Prepare the NFS server that will hold the file system dump of the source system for both the source and target systems. In this use case I will use the target machine (T5220) as the NFS server:

# mkdir /p2v
# share -F nfs -o root=v440 /p2v

Verify the NFS share:

# share
-  /p2v  root=v440  ""

Install the Logical Domains P2V Migration Tool. Go to the Logical Domains download page at http://www.sun.com/servers/coolthreads/ldoms/get.jsp, download the P2V software package, SUNWldmp2v, and use the pkgadd command to install it:

# pkgadd -d . SUNWldmp2v

Create the /etc/ldmp2v.conf file; we will use it later:

# cat /etc/ldmp2v.conf
VSW="primary-vsw0"
VDS="primary-vds0"
VCC="primary-vcc0"
BACKEND_PREFIX="/ldoms/disks/"
BACKEND_TYPE="file"
BACKEND_SPARSE="no"
BOOT_TIMEOUT=10

On the source machine (V440)

Install the Logical Domains P2V Migration Tool:
# pkgadd -d . SUNWldmp2v

Mount the NFS share:

# mkdir /p2v
# mount t5220:/p2v /p2v

Run the collection command:

# /usr/sbin/ldmp2v collect -d /p2v/v440
Collecting system configuration ...
Archiving file systems ...
DUMP: Date of this level 0 dump: August 2, 2009 4:11:56 PM IDT
DUMP: Date of last level 0 dump: the epoch
DUMP: Dumping /dev/rdsk/c1t0d0s0 (mdev440-2:/) to /p2v/v440/ufsdump.0.

The collection phase took 5 minutes for a 4.6 GB dump file.

Preparation phase

On the target machine (T5220), run the preparation command. We will keep the source machine's (V440) MAC address:

# /usr/sbin/ldmp2v prepare -d /p2v/v440 -o keep-mac v440
Creating vdisks ...
Creating file systems ...
Populating file systems ...

The preparation phase took 26 minutes.

We can see that for each physical CPU on the V440 server the LDoms P2V tool creates 4 vcpus on the guest domain, and it assigns the same amount of memory that the physical system has:

# ldm list -l v440
NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
v440             inactive   ------          4     8G

CONTROL
    failure-policy=ignore

DEPENDENCY
    master=

NETWORK
    NAME     SERVICE       DEVICE  MAC                MODE  PVID  VID  MTU
    vnet0    primary-vsw0          00:03:ba:c4:d2:9d        1

DISK
    NAME     VOLUME                  TOUT  DEVICE  SERVER  MPGROUP
    disk0    v440-vol0@primary-vds0

Conversion phase

Before starting the conversion phase, shut down the source server (V440) in order to avoid an IP address conflict.

On the V440 server:

# poweroff

On the JumpStart server:

You can use the Custom JumpStart feature to perform a completely hands-off conversion. This feature requires that you create and configure the appropriate sysidcfg and profile files for the client on the JumpStart server.

The profile should consist of the following lines:

install_type upgrade
root_device  c0d0s0

The sysidcfg file:

name_service=NONE
root_password=uQkoXlMLCsZhI
system_locale=C
timeserver=localhost
timezone=Europe/Amsterdam
terminal=vt100
security_policy=NONE
nfs4_domain=dynamic
network_interface=PRIMARY {netmask=255.255.255.192
                           default_route=none
                           protocol_ipv6=no}

On the target server (T5220):

# ldmp2v convert -j -n vnet0 -d /p2v/v440 v440
Testing original system status ...
LDom v440 started
Waiting for Solaris to come up ...
Using Custom JumpStart
Trying 0.0.0.0...
Connected to 0.
Escape character is '^]'.
Connecting to console "v440" in group "v440" ....
Press ~? for control options ..
SunOS Release 5.10 Version Generic_139555-08 64-bit
Copyright 1983-2009 Sun Microsystems, Inc.  All rights reserved.
Use is subject to license terms.

For more information about the P2V migration tool, see the ldmp2v(1M) man page.



Storage virtualization with COMSTAR and ZFS

COMSTAR is a software framework that enables you to turn any OpenSolaris host into a SCSI target that can be accessed over the network by initiator hosts. COMSTAR breaks down the huge task of handling a SCSI target subsystem into independent functional modules. These modules are then glued together by the SCSI Target Mode Framework (STMF).

COMSTAR features include:

  * Extensive LUN masking and mapping functions
  * Multipathing across different transport protocols
  * Multiple parallel transfers per SCSI command
  * Scalable design
  * Compatibility with generic HBAs

COMSTAR is integrated into the latest OpenSolaris. In this entry I will demonstrate the integration between COMSTAR and ZFS.

Architecture layout:

You can install all the appropriate COMSTAR packages:

# pkg install storage-server

On a newly installed OpenSolaris system, the STMF service is disabled by default. You must complete this task to enable the STMF service.

View the existing state of the service:

# svcs stmf
STATE          STIME    FMRI
disabled       15:58:17 svc:/system/stmf:default

Enable the stmf service:

# svcadm enable stmf

Verify that the service is active:

# svcs stmf
STATE          STIME    FMRI
online         15:59:53 svc:/system/stmf:default

Create a RAID-Z storage pool. The server has six controllers, each with eight disks, and I have built the storage pool to spread I/O evenly and to enable me to build 8 RAID-Z stripes of equal length:

# zpool create -f tank \
raidz c0t0d0 c1t0d0 c4t0d0 c6t0d0 c7t0d0 \
raidz c1t1d0 c4t1d0 c5t1d0 c6t1d0 c7t1d0 \
raidz c0t2d0 c4t2d0 c5t2d0 c6t2d0 c7t2d0 \
raidz c0t3d0 c1t3d0 c5t3d0 c6t3d0 c7t3d0 \
raidz c0t4d0 c1t4d0 c4t4d0 c6t4d0 c7t4d0 \
raidz c0t5d0 c1t5d0 c4t5d0 c5t5d0 c7t5d0 \
raidz c0t6d0 c1t6d0 c4t6d0 c5t6d0 c6t6d0 \
raidz c0t7d0 c1t7d0 c4t7d0 c6t7d0 c7t7d0 \
spare c0t1d0 c1t2d0 c4t3d0 c6t5d0 c7t6d0 c5t7d0

After the pool is created, the zfs utility can be used to create a 50 GB ZFS volume:

# zfs create -V 50g tank/comstar-vol1

Create a logical unit using the volume:

# sbdadm create-lu /dev/zvol/rdsk/tank/comstar-vol1
Created the following logical unit :

GUID                              DATA SIZE    SOURCE
--------------------------------  -----------  ----------------
600144f07bb2ca0000004a4c5eda0001  53687025664  /dev/zvol/rdsk/tank/comstar-vol1

Verify the creation of the logical unit and obtain the Global Unique Identification (GUID) number for the logical unit:

# sbdadm list-lu
Found 1 LU(s)

GUID                              DATA SIZE    SOURCE
--------------------------------  -----------  ----------------
600144f07bb2ca0000004a4c5eda0001  53687025664  /dev/zvol/rdsk/tank/comstar-vol1

This procedure makes a logical unit available to all initiator hosts on a storage network. Add a view for the logical unit:

# stmfadm add-view GUID_number
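With the GUID reported by sbdadm list-lu above, the actual command for this example would be:

# stmfadm add-view 600144f07bb2ca0000004a4c5eda0001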
Model: 375-3294-01        Firmware Version: 04.04.01        FCode/BIOS Version: BIOS: 1.4; fcode: 1.11; EFI: 1.0;        Serial Number: 0402R00-0634175788        Driver Name: qlc        Driver Version: 20080617-2.30        Type: L-port        State: online        Supported Speeds: 1Gb 2Gb 4Gb        Current Speed: 4Gb        Node WWN: 200000e08b91facd        Max NPIV Ports: 63        NPIV port list:Before making changes to the HBA ports, first check the existing portbindings.View what is currently bound to the port drivers.In this example, the current binding is pci1077,2422. # mdb -kLoading modules: [ unix genunix specfs dtrace mac cpu.genericcpu_ms.AuthenticAMD.15 uppc pcplusmp scsi_vhci zfs sd sockfs ip hookneti sctp arp usba qlc fctl   fcp cpc random crypto stmf nfs lofs logindmux ptm ufs sppp nsctl ipc ]> ::devbindings -q qlcffffff04e058f560 pci1077,2422, instance #0 (driver name: qlc)ffffff04e058f2e0 pci1077,2422, instance #1 (driver name: qlc) Quit mdb. > $qRemove the current binding, which in this example is qlc.In this example, the qlc driver is actively bound to pci1077,2422.You must remove the existing binding for qlc before you can add thatbinding to a new driver. Single quotes are required in this syntax. # update_drv -d -i 'pci1077,2422' qlc Cannot unload module: qlcWill be unloaded upon reboot.This message does not indicate an error.The configuration files have been updated but the qlc driver remainsbound to the port until reboot.Establish the new binding to qlt.Single quotes are required in this syntax. # update_drv -a -i 'pci1077,2422' qlt  Warning: Driver (qlt) successfully added to system but failed toattach This message does not indicate an error. The qlc driver remains boundto the port, until reboot.The qlt driver attaches when the system is rebooted.Reboot the system to attach the new driver, and then recheck thebindings. # reboot # mdb -kLoading modules: [ unix genunix specfs dtrace mac cpu.genericcpu_ms.AuthenticAMD.15 uppc pcplusmp scsi_vhci zfs sd sockfs ip hookneti sctp arp usba fctl stmf lofs fcip cpc random crypto nfs logindmuxptm ufs sppp nsctl ipc ]> ::devbindings -q qltffffff04e058f560 pci1077,2422, instance #0 (driver name: qlt)ffffff04e058f2e0 pci1077,2422, instance #1 (driver name: qlt) Quit mdb.  > $q You can see that the port mode is Target # fcinfo hba-port         HBA Port WWN: 210000e08b91facd        Port Mode: Target        Port ID: 2        OS Device Name: /dev/cfg/c16        Manufacturer: QLogic Corp.        Model: 375-3294-01        Firmware Version: 04.04.01        FCode/BIOS Version: BIOS: 1.4; fcode: 1.11; EFI: 1.0;        Serial Number: 0402R00-0634175788        Driver Name: qlc        Driver Version: 20080617-2.30        Type: L-port        State: online        Supported Speeds: 1Gb 2Gb 4Gb        Current Speed: 4Gb        Node WWN: 200000e08b91facd        Max NPIV Ports: 63        NPIV port list: Verify that the target mode framework has access to the HBA ports. 
# stmfadm list-target -vTarget: wwn.210100E08BB1FACDOperational Status: OnlineProvider Name : qltAlias : qlt1,0Sessions : 0Target: wwn.210000E08B91FACDOperational Status: OnlineProvider Name : qltAlias : qlt0,0Sessions : 1Initiator: wwn.210000E08B89F077Alias: -Logged in since: Thu Jul 2 12:02:59 2009 Now for the client setup : On the client machine verify that you can see the new logical unit # cfgadm -alAp_Id                          Type         Receptacle   Occupant     Conditionc0                             scsi-bus     connected    configured   unknownc0::dsk/c0t0d0                 CD-ROM       connected    configured   unknownc1                             scsi-bus     connected    configured   unknownc1::dsk/c1t0d0                 disk         connected    configured   unknownc1::dsk/c1t2d0                 disk         connected    configured   unknownc1::dsk/c1t3d0                 disk         connected    configured   unknownc2                             fc-private   connected    configured   unknownc2::210000e08b91facd           disk         connected    configured   unknownc3                             fc           connected    unconfigured unknownusb0/1                         unknown      empty        unconfigured okusb0/2                         unknown      empty        unconfigured okYou might need to rescan the SAN BUS in order to discover the new logical unit # luxadm -e forcelip /dev/cfg/c2# formatSearching for disks...AVAILABLE DISK SELECTIONS:0. c1t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>/pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/sd@0,01. c1t2d0 <FUJITSU-MAV2073RCSUN72G-0301-68.37GB>/pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/sd@2,02. c1t3d0 <FUJITSU-MAV2073RCSUN72G-0301-68.37GB>/pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/sd@3,03. c2t210000E08B91FACDd0 <SUN-COMSTAR-1.0 cyl 65533 alt 2 hd 16 sec 100>/pci@7c0/pci@0/pci@9/SUNW,qlc@0/fp@0,0/ssd@w210000e08b91facd,0Specify disk (enter its number): You can see the SUN-COMSTAR-1.0 in the disk propertiesNow you can build storage pool on top of it # zpool create comstar-pool c2t210000E08B91FACDd0Verify the pool creation # zpool list NAME SIZE USED AVAIL CAP HEALTH ALTROOTcomstar-pool 49.8G 114K 49.7G 0% ONLINE - After the pool is created, the zfs utility can be used to create a ZFS volume. #  zfs create -V 48g comstar-pool/comstar-vol1 For more information about COMSTAR please check the COMSTAR  project on OpenSolaris
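A note on views: the add-view command used above exposes the logical unit to every initiator on the fabric. stmfadm can also scope a view to a host group. A minimal sketch, reusing the GUID and the initiator WWN from this walkthrough (the group name web-servers is my own):
# stmfadm create-hg web-servers
# stmfadm add-hg-member -g web-servers wwn.210000e08b89f077
# stmfadm add-view -h web-servers -n 0 600144f07bb2ca0000004a4c5eda0001
The -n option pins the LUN number the initiator sees; without it, STMF assigns one automatically.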



IGT Cloud Computing WG Meeting

Yesterday Sun hosted the Israeli Association of Grid Technology (IGT) event "IGT Cloud Computing WG Meeting" at the Sun office in Herzliya. During the event Nati Shalom (CTO, GigaSpaces), Moshe Kaplan (CEO, RockeTier), and Haim Yadid (performance expert, ScalableJ) presented various cloud computing technologies. There were 50 attendees from a wide breadth of technology firms. For more information regarding Sun's cloud see http://www.sun.com/solutions/cloudcomputing/index.jsp.

Meeting agenda:

Auto-Scaling Your Existing Web Application - Nati Shalom, CTO, GigaSpaces
In this session, Nati will cover how to take a standard JEE web application and scale it out or down dynamically, without changes to the application code. Seeing as most web applications are over-provisioned to meet infrequent peak loads, this is a dramatic change, because it enables growing your application as needed, when needed, without paying for unutilized resources. Nati will discuss the challenges involved in dynamic scaling, such as ensuring the integrity and consistency of the application, keeping the load balancer in sync with the servers' changing locations, and maintaining affinity and high availability of session information with the load balancer. If time permits, Nati will show a live demo of a Web 2.0 app scaling dynamically on the Amazon cloud.

How Can Your Very Large Databases Work in the Cloud Computing World? - Moshe Kaplan, RockeTier, performance expert and scale-out architect
Cloud computing is famous for its flexibility, its dynamic nature, and its ability to grow infinitely. However, infinite growth means very large databases with billions of records. This leads us to a paradox: how can weak servers support very large databases, which usually require several CPUs and dedicated hardware? The Internet industry proved it can be done. These days many of the Internet giants, processing billions of events every day, are based on cloud computing architectures such as sharding. What is sharding? What kinds of sharding can you implement? What are the best practices?

Utilizing the Cloud for Performance Validation - Haim Yadid, performance expert, ScalableJ
Creating a loaded environment is crucial for software performance validation. Setting up such a simulated environment usually requires a great deal of hardware, which is then left unused during most of the development cycle. In this short session I will suggest utilizing cloud computing for performance validation and present a case study in which a loaded environment used 12 machines on AWS for the duration of the test. This approach gives much more flexibility and reduces TCO dramatically. We will discuss the limitations of this approach and suggest means to address them.



LDoms with ZFS

Logical Domains offers a powerful and consistent methodology for creating virtualized server environments across the entire CoolThreads server range:

* Create multiple independent virtual machines quickly and easily using the hypervisor built into every CoolThreads system.
* Leverage advanced Solaris technologies such as ZFS cloning and snapshots to speed deployment and dramatically reduce disk capacity requirements.

In this entry I will demonstrate the integration between LDoms and ZFS.

Architecture layout:

Downloading Logical Domains Manager and Solaris Security Toolkit

Download the zip file (LDoms_Manager-1_1.zip) from the Sun Software Download site: http://www.sun.com/ldoms

Unzip the zip file:
# unzip LDoms_Manager-1_1.zip

Please read the README file for any prerequisites. The installation script is part of the SUNWldm package and is in the Install subdirectory:
# cd LDoms_Manager-1_1

Run the install-ldm installation script with no options:
# Install/install-ldm
Select a security profile from this list:
a) Hardened Solaris configuration for LDoms (recommended)
b) Standard Solaris configuration
c) Your custom-defined Solaris security configuration profile
Enter a, b, or c [a]: a

Shut down and reboot your server:
# /usr/sbin/shutdown -y -g0 -i6

Use the ldm list command to verify that the Logical Domains Manager is running:
# /opt/SUNWldm/bin/ldm list
NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
primary          active     -n-c--  SP      32    16256M   0.0%  2d 23h 27m

Creating Default Services

You must create the following virtual default services initially to be able to use them later:

* vdiskserver – virtual disk server
* vswitch – virtual switch service
* vconscon – virtual console concentrator service

Create a virtual disk server (vds) to allow importing virtual disks into a logical domain:
# ldm add-vds primary-vds0 primary

Create a virtual console concentrator (vcc) service for use by the virtual network terminal server daemon (vntsd):
# ldm add-vcc port-range=5000-5100 primary-vcc0 primary

Create a virtual switch service (vsw) to enable networking between virtual network (vnet) devices in logical domains:
# ldm add-vsw net-dev=e1000g0 primary-vsw0 primary

Verify the services have been created by using the list-services subcommand:
# ldm list-services

Set Up the Control Domain

Assign cryptographic resources to the control domain:
# ldm set-mau 1 primary

Assign virtual CPUs to the control domain:
# ldm set-vcpu 4 primary

Assign memory to the control domain:
# ldm set-memory 4G primary

Add a logical domain machine configuration to the system controller (SC):
# ldm add-config initial

Verify that the configuration is ready to be used at the next reboot:
# ldm list-config
factory-default
initial [next poweron]

Reboot the server:
# shutdown -y -g0 -i6

Enable the virtual network terminal server daemon, vntsd:
# svcadm enable vntsd

Create the zpool and the gold-image volume:
# zpool create ldompool c1t2d0 c1t3d0
# zfs create ldompool/goldimage
# zfs create -V 15g ldompool/goldimage/disk_image

Creating and Starting a Guest Domain

Create a logical domain:
# ldm add-domain goldldom

Add CPUs to the guest domain:
# ldm add-vcpu 4 goldldom

Add memory to the guest domain:
# ldm add-memory 2G goldldom

Add a virtual network device to the guest domain:
# ldm add-vnet vnet1 primary-vsw0 goldldom

Specify the device to be exported by the virtual disk server as a virtual disk to the guest domain:
# ldm add-vdsdev /dev/zvol/dsk/ldompool/goldimage/disk_image vol1@primary-vds0

Add a virtual disk to the guest domain:
# ldm add-vdisk vdisk0 vol1@primary-vds0 goldldom

Set the auto-boot and boot-device variables for the guest domain:
# ldm set-variable auto-boot\?=false goldldom
# ldm set-var boot-device=vdisk0 goldldom

Bind resources to the guest domain goldldom and then list the domain to verify that it is bound:
# ldm bind-domain goldldom
# ldm list-domain goldldom
NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
primary          active     -n-cv-  SP      4     4G       0.2%  15m
goldldom         bound      ------  5000    4     2G

Start the guest domain:
# ldm start-domain goldldom

Connect to the console of the guest domain:
# telnet 0 5000
Trying 0.0.0.0...
Connected to 0.
Escape character is '^]'.
Connecting to console "goldldom" in group "goldldom" ....
Press ~? for control options ..
{0} ok

JumpStart the goldldom:
{0} ok boot net - install

We can log in to the new guest and verify that the file system is ZFS:
# zpool list
NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
rpool  14.9G  1.72G  13.2G    11%  ONLINE  -

Restore the goldldom configuration to an "as-manufactured" state with the sys-unconfig command:
# sys-unconfig
This program will unconfigure your system.  It will cause it to revert to a "blank" system - it will not have a name or know about other systems or networks.
This program will also halt the system.
Do you want to continue (y/n) y

Press ~. in order to return to the primary domain.

Stop the guest domain:
# ldm stop goldldom

Unbind the guest domain:
# ldm unbind goldldom

Snapshot the disk image:
# zfs snapshot ldompool/goldimage/disk_image@sysunconfig

Create a new ZFS file system for the new guest:
# zfs create ldompool/domain1

Clone the goldldom disk image:
# zfs clone ldompool/goldimage/disk_image@sysunconfig ldompool/domain1/disk_image
# zfs list
NAME                                        USED  AVAIL  REFER  MOUNTPOINT
ldompool                                   17.0G   117G    21K  /ldompool
ldompool/domain1                             18K   117G    18K  /ldompool/domain1
ldompool/domain1/disk_image                    0   117G  2.01G  -
ldompool/goldimage                         17.0G   117G    18K  /ldompool/goldimage
ldompool/goldimage/disk_image              17.0G   132G  2.01G  -
ldompool/goldimage/disk_image@sysunconfig      0      -  2.01G  -

Creating and Starting the Second Domain

These per-domain steps can be scripted for additional guests; see the sketch at the end of this post.

# ldm add-domain domain1
# ldm add-vcpu 4 domain1
# ldm add-memory 2G domain1
# ldm add-vnet vnet1 primary-vsw0 domain1
# ldm add-vdsdev /dev/zvol/dsk/ldompool/domain1/disk_image vol2@primary-vds0
# ldm add-vdisk vdisk1 vol2@primary-vds0 domain1
# ldm set-var auto-boot\?=false domain1
# ldm set-var boot-device=vdisk1 domain1
# ldm bind-domain domain1
# ldm list-domain domain1
NAME     STATE  FLAGS   CONS  VCPU  MEMORY  UTIL  UPTIME
domain1  bound  ------  5001  8     2G

Start the domain:
# ldm start-domain domain1

Connect to the console:
# telnet 0 5001
{0} ok boot net -s
Copyright 1983-2008 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.
Booting to milestone "milestone/single-user:default".
Configuring devices.
Using RPC Bootparams for network configuration information.
Attempting to configure interface vnet0...
Configured interface vnet0
Requesting System Maintenance Mode
SINGLE USER MODE

# zpool import -f rpool
# zpool export rpool
# reboot

Answer the configuration questions, log in to the new domain, and verify that we have a ZFS file system:
# zpool list
NAME    SIZE   USED  AVAIL   CAP  HEALTH  ALTROOT
rpool  14.9G  1.72G  13.2G   11%  ONLINE  -
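Because each additional guest is just another clone of the same gold-image snapshot, the per-domain steps above can be scripted. A minimal sketch (the domain names, volume numbering, and resource sizes are illustrative choices, not part of the original walkthrough):

#!/bin/sh
# Build guest domains 2-4 from the sys-unconfig'd gold image.
# Assumes the snapshot ldompool/goldimage/disk_image@sysunconfig exists.
for i in 2 3 4; do
  zfs create ldompool/domain$i
  zfs clone ldompool/goldimage/disk_image@sysunconfig ldompool/domain$i/disk_image
  ldm add-domain domain$i
  ldm add-vcpu 4 domain$i
  ldm add-memory 2G domain$i
  ldm add-vnet vnet1 primary-vsw0 domain$i
  ldm add-vdsdev /dev/zvol/dsk/ldompool/domain$i/disk_image vol$i@primary-vds0
  ldm add-vdisk vdisk$i vol$i@primary-vds0 domain$i
  ldm set-var auto-boot\?=false domain$i
  ldm set-var boot-device=vdisk$i domain$i
  ldm bind-domain domain$i
  ldm start-domain domain$i
done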



Brief Technical Overview and Installation of Ganglia on Solaris

Ganglia is a scalable distributed monitoring system for high-performance computing systems such as clusters and grids. It is based on a hierarchical design targeted at federations of clusters. For this setup we will use the following software packages:

1. Ganglia - the core Ganglia package
2. Zlib - zlib compression libraries
3. Libgcc - low-level runtime library
4. Rrdtool - Round Robin Database graphing tool
5. Apache web server with PHP support

You can get packages 1-3 from sunfreeware (depending on your architecture - x86 or SPARC). Unzip and install the packages:

1. gzip -d ganglia-3.0.7-sol10-sparc-local.gz
   pkgadd -d ./ganglia-3.0.7-sol10-sparc-local
2. gzip -d zlib-1.2.3-sol10-sparc-local.gz
   pkgadd -d ./zlib-1.2.3-sol10-sparc-local
3. gzip -d libgcc-3.4.6-sol10-sparc-local.gz
   pkgadd -d ./libgcc-3.4.6-sol10-sparc-local
4. You will need pkgutil from Blastwave in order to install the rrdtool packages:
   /usr/sfw/bin/wget http://blastwave.network.com/csw/unstable/sparc/5.8/pkgutil-1.2.1,REV=2008.11.28-SunOS5.8-sparc-CSW.pkg.gz
   gunzip pkgutil-1.2.1,REV=2008.11.28-SunOS5.8-sparc-CSW.pkg.gz
   pkgadd -d pkgutil-1.2.1,REV=2008.11.28-SunOS5.8-sparc-CSW.pkg
   Now you can install packages with all required dependencies with a single command:
   /opt/csw/bin/pkgutil -i rrdtool
5. You will need to download Apache, PHP, and the core libraries from Cool Stack.
   Core libraries used by other packages:
   bzip2 -d CSKruntime_1.3.1_sparc.pkg.bz2
   pkgadd -d ./CSKruntime_1.3.1_sparc.pkg
   Apache 2.2.9, PHP 5.2.6:
   bzip2 -d CSKamp_1.3.1_sparc.pkg.bz2
   pkgadd -d ./CSKamp_1.3.1_sparc.pkg
   The following packages are available:
   1 CSKapache2 Apache httpd (sparc) 2.2.9
   2 CSKmysql32 MySQL 5.1.25 32bit (sparc) 5.1.25
   3 CSKphp5 PHP 5 (sparc) 5.2.6
   Select package(s) you wish to process (or 'all' to process all packages). (default: all) [?,??,q]: 1,3
   Select options 1 and 3.

Enable the web server service:
svcadm enable svc:/network/http:apache22-csk

Verify that it is working:
svcs svc:/network/http:apache22-csk
STATE          STIME    FMRI
online         17:02:13 svc:/network/http:apache22-csk

Locate the web server DocumentRoot:
grep DocumentRoot /opt/coolstack/apache2/conf/httpd.conf
DocumentRoot "/opt/coolstack/apache2/htdocs"

Copy the Ganglia directory tree:
cp -rp /usr/local/doc/ganglia/web /opt/coolstack/apache2/htdocs/ganglia

Change the rrdtool path in /opt/coolstack/apache2/htdocs/ganglia/conf.php from /usr/bin/rrdtool to /opt/csw/bin/rrdtool.

Start the gmond daemon with the default configuration:
/usr/local/sbin/gmond --default_config > /etc/gmond.conf

Edit /etc/gmond.conf and change name = "unspecified" to name = "grid1" (this is our grid name; see the sketch at the end of this post).

Verify that it has started:
ps -ef | grep gmond
nobody 3774 1 0 16:57:41 ? 0:55 /usr/local/sbin/gmond

In order to debug any problem, try:
/usr/local/sbin/gmond --debug=9

Build the directory for the rrd images:
mkdir -p /var/lib/ganglia/rrds
chown -R nobody /var/lib/ganglia/rrds

Add the following line to /etc/gmetad.conf:
data_source "grid1" localhost

Start the gmetad daemon:
/usr/local/sbin/gmetad

Verify that it is running:
ps -ef | grep gmetad
nobody  4350     1   0 17:10:30 ?           0:24 /usr/local/sbin/gmetad

To debug any problem:
/usr/local/sbin/gmetad --debug=9

Point your browser to: http://server-name/ganglia
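For reference, the cluster block you edit in /etc/gmond.conf looks roughly like this (a sketch of the Ganglia 3.0 block syntax; only the name attribute needs to change for this setup, the other attributes can keep their generated defaults):

cluster {
  name = "grid1"          # must match the data_source name in gmetad.conf
  owner = "unspecified"
  latlong = "unspecified"
  url = "unspecified"
}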



Technical overview GlassFish 3.0 on Amazon cloud

The integrated GigaSpaces GlassFish solution and its components are captured in the following diagram.

SLA-driven deployment environment: The SLA-driven deployment environment is responsible for hosting all services in the network. It essentially does matchmaking between the application requirements and the availability of resources over the network. It is comprised of the following components:

Grid Service Manager (GSM) – responsible for managing the application lifecycle and deployment.

Grid Service Container (GSC) – a lightweight container, essentially a wrapper on top of the Java process, that exposes the JVM to the GSM and provides a means to deploy and undeploy services dynamically.

Processing Unit (PU) – represents the application deployment unit. A Processing Unit is essentially an extension of the Spring application context that packages specific application components in a single package and uses dependency injection to mesh these components together. The Processing Unit is an atomic deployment artifact, and its composition is determined by the scaling and failover granularity of a given application; it is therefore the unit of scale and failover. There are a number of predefined Processing Unit types:

Web Processing Unit – responsible for managing web container instances and enabling them to run within the SLA-driven container environment. With a Web Processing Unit, one can deploy the web container as a group of services and apply SLAs or QoS semantics such as one-per-vm, one-per-machine, etc. In other words, one can easily use the Processing Unit SLA to determine how web containers are provisioned on the network. In our specific case, most of the GlassFish v3 Prelude integration takes place at this level.

Data Grid Processing Unit – a processing unit that wraps the GigaSpaces space instances. By wrapping the space instance, it adds the SLA capabilities available with each processing unit. One common SLA is to ensure that primary instances do not run on the same machine as the backup instances. It also determines the deployment topology (partitioned, replicated) as well as the scaling policy. The data grid includes another instance, not shown in the above diagram, called the Mirror Service, which is responsible for making sure that all updates made on the data grid are passed reliably to the underlying database.

Load Balancer Agent – responsible for listening for web-container availability, adding instances to the load balancer list when a new container is added and removing them when containers are removed. The Load Balancer Agent is currently configured to work with the Apache load balancer but can easily be set up to work with any external load balancer.

How it works: The following section provides a high-level description of how all the above components work together to provide high performance and scaling.

Deployment – The deployment of the application is done through the GigaSpaces processing-unit deployment command. Assigning a specific SLA as part of the deployment lets the GSM know how we wish to distribute the web instances over the network. For example, one could specify in the SLA that there should be only one instance per machine and define the minimum number of instances that need to be running. If needed, one can add specific system requirements such as JVM version, OS type, etc. to the SLA.

The deployment command points to a specific web application archive (WAR). The WAR file needs to include a configuration attribute in its META-INF configuration that instructs the deployer tool to use GlassFish v3 Prelude as the web container for this specific web application. Upon deployment, the GlassFish processing unit is started on the available GSC containers that match the SLA definitions, and the GSC assigns a specific port to that container instance. When GlassFish starts, it loads the WAR file automatically and starts serving HTTP requests to that instance of the web application.

Connecting to the Load Balancer - Auto-scaling – The load balancer agent is assigned with each instance of the GSM. It listens for the availability of new web containers and ensures that the available containers join the load balancer by continuously updating the load-balancer configuration whenever such a change happens. This happens automatically through the GigaSpaces discovery protocol and does not require any human intervention.

Handling failure - Self-healing – If one of the web containers fails, the GSM automatically detects this and starts a new web container on one of the available GSC containers, if one exists. If there are not enough resources available, the GSM waits until such a resource becomes available. In cloud environments, the GSM will initiate a new machine instance in such an event by calling the proper service on the cloud infrastructure.

Session replication – The HttpSession can be automatically backed up by the GigaSpaces In-Memory Data Grid (IMDG). In this case, user applications do not need to change their code. When user data is stored in the HttpSession, that data gets stored in the underlying IMDG; when the HTTP request completes, the data is flushed to the shared data-grid servers.

Scaling the database tier – Beyond session-state caching, the web application can get a reference to the GigaSpaces IMDG and use it to store data in memory in order to reduce contention on the database. The GigaSpaces data grid automatically synchronizes updates with the database. To enable maximum performance, the update to the database is done in most cases asynchronously (write-behind). A built-in Hibernate plug-in handles the mapping between the in-memory data and the relational data model. You can read more on how this model handles failure as well as consistency and aggregated queries here.



GlassFish 3.0 on Amazon cloud

Here is how you can run the demo of GlassFish 3.0 on the Amazon cloud.

Where should I start? The best way to get started is to run a demo application and see for yourself how this integration works. To make it even simpler, we offer the demo on our new cloud offering. This will let you experience, in one click, a full production-ready environment that includes full clustering, dynamic scaling, full high availability, and session replication. To run the demo on the cloud, follow these steps:

1. Download the GlassFish web scaling deployment file from here to your local machine.
2. Go to the mycloud page and get your free access code – this will enable you to get access to the cloud.
3. Select the stock-demo-2.3.0-gf-sun.xml and hit the Deploy button (you first need to save the attached file on your local machine). The system will start provisioning the web application on the cloud. This includes a machine for managing the cloud, a machine for the load balancer, and machines for the web and data-grid containers. After approximately 3 minutes the application will be deployed completely. At this point you should see a "running" link on the load-balancer machine. Click on this link to open your web-client application.
4. Test auto-scaling – click multiple times on the "running" link to open more clients. This increases the load (requests/sec) on the system. As soon as the requests/sec grow beyond a certain threshold, you'll see new machines being provisioned. After approximately two minutes the machine will be running and a new web container will be auto-deployed onto it. This new web container is linked automatically with the load balancer, and the load balancer in return spreads the load to the new machine as well, reducing the load on each of the servers.
5. Test self-healing – you can now kill one of the machines and see how your web client behaves. You should see that even though the machine was killed, the client was hardly affected and the system scaled itself down automatically.

Seeing what's going on behind the scenes: All this may seem like magic. If you want to access the machines and watch the web containers, the data-grid instances, and the machines, as well as real-time statistics, you can click on the Manage button. This opens up a management console that runs on the cloud through the web. With this tool you can view all the components of the system. You can even query the data using the SQL browser and view the data as it enters the system. In addition, you can choose to add more services, or relocate services, with a simple mouse click or a drag-and-drop action. For more information regarding GlassFish see http://www.sun.com/software/products/glassfish_portfolio/



xVM public API

There are two ways of accessing the public API: the simplest is writing a Java client using JMX; alternatively, non-Java client programs can use Web Services for Management (WS-MAN, JSR-262). This example demonstrates direct access to the read-only copy of the domain model, and it can be run against either Sun xVM Server or Sun xVM Ops Center. The example performs the following functions:

* Configures the connection
* Performs the security settings
* Opens the connection (locally or remotely)
* Queries the domain model for all the OperatingSystem objects and displays their host names
* Closes the connection

ServerClient.java

/**
 * Copyright 2006 Sun Microsystems, Inc. All rights reserved.
 * SUN PROPRIETARY/CONFIDENTIAL. Use is subject to license terms.
 */
import com.sun.hss.type.os.OperatingSystem;
import com.sun.hss.type.server.Server;
import com.sun.hss.type.virtserver.VirtServerContainer;
import com.sun.hss.type.virtserver.VirtServerOperatingSystem;
import com.sun.hss.type.xvmserver.XVMApplianceDetails;
import com.sun.hss.type.xvmserver.XVMServer;

import java.util.HashMap;
import java.util.Map;
import java.security.cert.CertificateException;
import java.security.cert.X509Certificate;
import javax.net.ssl.X509TrustManager;
import javax.net.ssl.TrustManager;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocketFactory;
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import javax.management.ObjectName;

import com.sun.xvm.services.guest.GuestServiceMXBean;
import com.sun.xvm.services.xvmserver.XVMServerServiceMXBean;
import com.sun.xvm.services.guest.GuestDetails;

public class ServerClient {

    static private JMXConnector connector = null;
    static private MBeanServerConnection mbsc = null;
    static private String hostname = null;

    /**
     * Simple SocketFactory that uses a trust manager that ALWAYS
     * accepts the certificate provided by the server we try to connect to.
     *
     * This is unsafe and should NOT be used for production code.
     */
    private static SSLSocketFactory getSocketFactory() {
        X509TrustManager tm = new AnyServerX509TrustManager();
        TrustManager[] tms = {tm};
        try {
            SSLContext sslc = SSLContext.getInstance("TLSv1");
            sslc.init(null, tms, null);
            return sslc.getSocketFactory();
        } catch (Exception ex) {
            return null;
        }
    }

    /**
     * Small trust manager that ALWAYS accepts the certificate provided
     * by the server we try to connect to.
     *
     * This is unsafe and should NOT be used for production code.
     */
    public static class AnyServerX509TrustManager implements X509TrustManager {
        // Documented in X509TrustManager
        public X509Certificate[] getAcceptedIssuers() {
            // since client authentication is not supported by this
            // trust manager, there's no certificate authority trusted
            // for authenticating peers
            return new X509Certificate[0];
        }

        // Documented in X509TrustManager
        public void checkClientTrusted(X509Certificate[] certs, String authType)
                throws CertificateException {
            // this trust manager is dedicated to server authentication
            throw new CertificateException("not supported");
        }

        // Documented in X509TrustManager
        public void checkServerTrusted(X509Certificate[] certs, String authType)
                throws CertificateException {
            // any certificate sent by the server is automatically accepted
            return;
        }
    }

    /**
     * Create a WS-MAN connection using the given credentials.
     *
     * @param host
     * @param user
     * @param pass
     */
    private static void setupConnection(String host, String user, String pass)
            throws Exception {
        try {
            int port = 443;
            String urlPath = "/wsman/ea/jmxws";
            Map<String, Object> env = new HashMap<String, Object>();

            // credentials for basic authentication with the server
            String[] creds = new String[2];
            creds[0] = user;
            creds[1] = pass;
            env.put(JMXConnector.CREDENTIALS, creds);

            // provide a specific socket factory to avoid the need to set up
            // a truststore
            env.put("com.sun.xml.ws.transport.https.client.SSLSocketFactory",
                    getSocketFactory());

            // Create the JMX agent URL over https
            JMXServiceURL url = new JMXServiceURL("ws-secure", host, port, urlPath);
            // System.out.println("WSMAN client opening a connection with url " + url.toString());

            // Connect the JMXConnector and get the MBeanServerConnection
            connector = JMXConnectorFactory.connect(url, env);
            mbsc = connector.getMBeanServerConnection();
        } catch (Exception ex) {
            System.out.println("Got an exception while trying to open a WS-MAN connection : " + ex.toString());
            throw ex;
        }
    }

    public static void main(String[] args) {
        if ((args.length < 2) || (args.length > 3)) {
            System.err.println("Usage: user password [target]");
            System.exit(1);
        }
        String userName = args[0];
        String userPass = args[1];
        hostname = "localhost";
        if (args.length == 3) {
            hostname = args[2];
        }
        try {
            // Open the WS-MAN connection and get the MBeanServerConnection
            setupConnection(hostname, userName, userPass);

            // get details on the xVM servers
            serverService();
        } catch (Exception ex) {
            System.out.println("WS-MAN client error : " + ex.toString());
            ex.printStackTrace();
            System.exit(1);
        } finally {
            // close the connection if necessary
            if (connector != null) {
                try {
                    connector.close();
                } catch (Exception dc) {
                }
            }
        }
    }

    private static void serverService() throws Exception {
        try {
            XVMServerServiceMXBean xssmxb =
                    ServerClientServices.getXVMServerService(mbsc, false);

            // get the list of xVM servers
            ObjectName[] servers = xssmxb.getXVMApplianceDetailsObjectNames(null, null);
            if ((servers == null) || (servers.length == 0)) {
                System.out.println("No xVM server detected on " + hostname);
                return;
            }

            GuestServiceMXBean guestmxb =
                    ServerClientServices.getGuestService(mbsc, false);

            // get details on these xVM servers
            for (int i = 0; i < servers.length; i++) {
                XVMApplianceDetails details = xssmxb.getXVMApplianceDetails(servers[i]);
                if (details != null) {
                    OperatingSystem os = details.getOperatingsystem();
                    Server svr = details.getServer();
                    VirtServerContainer vsc = details.getVirtservercontainer();
                    XVMServer xsvr = details.getXvmserver();
                    if (xsvr != null) {
                        System.out.println("xVM Server name = " + xsvr.getApplianceName());
                    }
                }

                // get the guests on this xVM server
                ObjectName[] guests = guestmxb.getGuestObjectNames(servers[i], null, null);
                if ((guests == null) || (guests.length == 0)) {
                    System.out.println("No guest on this xVM server");
                    continue;
                }
                java.util.Map<String, java.util.Set<String>> map = null;
                GuestDetails[] guestDetails = guestmxb.getGuestDetails(guests, map);
                if (guestDetails != null) {
                    for (int k = 0; k < guestDetails.length; k++) {
                        VirtServerOperatingSystem virt =
                                guestDetails[k].getVirtServerOperatingSystem();
                        System.out.println("guest hostname is " + k + ": " + virt.getHostname());
                    }
                }
            }
        } catch (Exception ex) {
            System.err.println("Got Exception while testing the xVM server service : " + ex.toString());
            throw ex;
        }
    }
}

ServerClientServices.java

import javax.management.JMX;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
// CacheManagerMXBean and the other service MXBeans come from the xVM client API

public class ServerClientServices {

    /**
     * getCacheManagerService
     *
     * @param mbsc
     * @return CacheManagerMXBean
     * @throws java.lang.Exception
     */
    public static CacheManagerMXBean getCacheManagerService(MBeanServerConnection mbsc,
                                                            boolean verbose)
            throws Exception {
        CacheManagerMXBean cmmxb = null;
        try {
            // build the object name used to access the service
            ObjectName cmson = new ObjectName(CacheManagerMXBean.DOMAIN +
                                              ":type=" + CacheManagerMXBean.TYPE);

            // verify that the service is currently deployed
            if (!mbsc.isRegistered(cmson)) {
                System.out.println("Cache Manager service is not registered : aborting");
                throw new Exception("MXBean for Cache Manager service not registered");
            }

            // create a proxy to access the service
            cmmxb = JMX.newMXBeanProxy(mbsc, cmson, CacheManagerMXBean.class, false);
            if (verbose) {
                System.out.println("Proxy for Cache Manager service : OK");
            }
            return cmmxb;
        } catch (Exception ex) {
            System.err.println("Got Exception while creating the Cache Manager service proxy : " + ex.toString());
            ex.printStackTrace();
            throw ex;
        }
    }

    // The getXVMServerService and getGuestService helpers used by ServerClient
    // follow the same proxy-creation pattern and are omitted from this excerpt.
}
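To try the client, compile both classes against the xVM client libraries and pass the credentials on the command line. A minimal sketch; the jar name, user, and host below are illustrative, and the real classpath depends on your xVM installation:

javac -cp xvm-client.jar ServerClient.java ServerClientServices.java
java -cp .:xvm-client.jar ServerClient admin secret xvm-host.example.com

The third argument is optional and defaults to localhost, as the main method above shows.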



Solaris iSCSI Server

This document describes how to build an iSCSI server based on the Solaris platform on a Sun X4500 server.

On the target (server)

The server has six controllers, each with eight disks, and I have built the storage pool to spread I/O evenly and to enable me to build eight RAID-Z stripes of equal length:

zpool create -f tank \
raidz c0t0d0 c1t0d0 c4t0d0 c6t0d0 c7t0d0 \
raidz c1t1d0 c4t1d0 c5t1d0 c6t1d0 c7t1d0 \
raidz c0t2d0 c4t2d0 c5t2d0 c6t2d0 c7t2d0 \
raidz c0t3d0 c1t3d0 c5t3d0 c6t3d0 c7t3d0 \
raidz c0t4d0 c1t4d0 c4t4d0 c6t4d0 c7t4d0 \
raidz c0t5d0 c1t5d0 c4t5d0 c5t5d0 c7t5d0 \
raidz c0t6d0 c1t6d0 c4t6d0 c5t6d0 c6t6d0 \
raidz c0t7d0 c1t7d0 c4t7d0 c6t7d0 c7t7d0 \
spare c0t1d0 c1t2d0 c4t3d0 c6t5d0 c7t6d0 c5t7d0

After the pool is created, the zfs utility can be used to create a 50 GB ZFS volume:

zfs create -V 50g tank/iscsivol000

Enable the iSCSI target service:

svcadm enable iscsitgt

Verify that the service is enabled:

svcs -a | grep iscsitgt

To view the list of commands, iscsitadm can be run without any options:

iscsitadm
Usage: iscsitadm -?,-V,--help
Usage: iscsitadm create [-?] [-?] [
Usage: iscsitadm list [-?] [-?] [
Usage: iscsitadm modify [-?] [-?] [
Usage: iscsitadm delete [-?] [-?] [
Usage: iscsitadm show [-?] [-?] [
For more information, please see iscsitadm(1M).

To begin using the iSCSI target, a base directory needs to be created. This directory is used to persistently store the target and initiator configuration that is added through the iscsitadm utility:

iscsitadm modify admin -d /etc/iscsi

Once the volumes are created, they need to be exported to an initiator:

iscsitadm create target -b /dev/zvol/rdsk/tank/iscsivol000 target-label

Once the targets are created, iscsitadm's list command and target subcommand can be used to display the targets and their properties:

iscsitadm list target -v

On the initiator (client)

Install the Microsoft iSCSI initiator from http://www.microsoft.com/downloads/details.aspx?FamilyID=12cb3c1a-15d6-4585-b385-befd1319f825&displaylang=en
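If the initiator is another Solaris host rather than Windows, the operating system's built-in iscsiadm initiator can be pointed at the target instead. A minimal sketch, assuming the X4500 answers at 192.168.1.10 (an illustrative address) and the initiator packages are installed:

iscsiadm add discovery-address 192.168.1.10:3260
iscsiadm modify discovery --sendtargets enable
devfsadm -i iscsi
format

The devfsadm step creates device nodes for the discovered LUNs so that format can label them.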


