Sunday Jun 14, 2009

3rd annual Java Technology Day June 22nd, 2009

Lots of cool stuff: JavaFX, virtualization, cloud computing, MySQL


I will have two sessions at the event:


1. Virtualization from the Desktop to the Enterprise


2. Sun in front of the cloud.


I am looking forward to seeing you.




Tuesday Apr 21, 2009

IGT Cloud Computing WG Meeting

Yesterday Sun hosted the Israeli Association of Grid Technology (IGT) event "IGT Cloud Computing WG Meeting" at the Sun office in Herzeliya. During the event Nati Shalom (CTO, GigaSpaces), Moshe Kaplan (CEO, RockeTier) and Haim Yadid (performance expert, ScalableJ) presented various cloud computing technologies. There were 50 attendees from a wide breadth of technology firms.


For more information regarding using Sun's cloud see http://www.sun.com/solutions/cloudcomputing/index.jsp .


Meeting agenda:

Auto-Scaling Your Existing Web Application
Nati Shalom, CTO, GigaSpaces

 


In this session, Nati will cover how to take a standard JEE web application and scale it out or down dynamically, without changes to the application code. Since most web applications are over-provisioned to meet infrequent peak loads, this is a dramatic change: it enables growing your application as needed, when needed, without paying for unutilized resources. Nati will discuss the challenges involved in dynamic scaling, such as ensuring the integrity and consistency of the application, keeping the load balancer in sync with the servers' changing locations, and maintaining affinity and high availability of session information with the load balancer. If time permits, Nati will show a live demo of a Web 2.0 app scaling dynamically on the Amazon cloud.

 


How can your very large databases work in the cloud computing world?
Moshe Kaplan, RockeTier, performance expert and scale-out architect
Cloud computing is famous for its flexibility, dynamic nature and ability to grow infinitely. However, infinite growth means very large databases with billions of records. This leads us to a paradox: "How can weak servers support very large databases, which usually require several CPUs and dedicated hardware?"
The Internet industry has proved it can be done. These days many of the Internet giants, processing billions of events every day, are based on cloud computing architectures such as sharding. What is sharding? What kinds of sharding can you implement? What are the best practices?

Utilizing the cloud for Performance Validation
Haim Yadid, Performance Expert, ScalableJ

Creating a loaded environment is crucial for software performance validation. Running such a simulated environment usually requires a great deal of hardware, which is then left unused during most of the development cycle. In this short session I will suggest utilizing cloud computing for performance validation. I will present a case study where the loaded environment used 12 machines on AWS for the duration of the test. This approach gives much more flexibility and reduces TCO dramatically. We will discuss the limitations of this approach and suggest ways to address them.


 

Sunday Apr 19, 2009

LDoms with ZFS

Logical Domains offers a powerful and consistent methodology for creating virtualized server environments across the entire CoolThreads server range:

   * Create multiple independent virtual machines quickly and easily
     using the hypervisor built into every CoolThreads system.
   * Leverage advanced Solaris technologies such as ZFS cloning and
     snapshots to speed deployment and dramatically reduce disk
     capacity requirements.

In this entry I will demonstrate the integration between LDoms and ZFS.

Architecture layout





Downloading Logical Domains Manager and Solaris Security Toolkit

Download the Software

Download the zip file (LDoms_Manager-1_1.zip) from the Sun Software Download site. You can find the software at this web site:

http://www.sun.com/ldoms

 Unzip the zip file.
# unzip LDoms_Manager-1_1.zip
Please read the README file for any prerequisites.
The installation script is part of the SUNWldm package and is in the Install subdirectory.


# cd LDoms_Manager-1_1


Run the install-ldm installation script with no options.
# Install/install-ldm

Select a security profile from this list:

a) Hardened Solaris configuration for LDoms (recommended)
b) Standard Solaris configuration
c) Your custom-defined Solaris security configuration profile

Enter a, b, or c [a]: a


Shut down and reboot your server
# /usr/sbin/shutdown -y -g0 -i6

Use the ldm list command to verify that the Logical Domains Manager is running
# /opt/SUNWldm/bin/ldm list


NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
primary          active     -n-c--  SP      32    16256M   0.0%  2d 23h 27m

Creating Default Services

You must create the following virtual default services initially to be able to use them later:
vdiskserver – virtual disk server
vswitch – virtual switch service
vconscon – virtual console concentrator service

Create a virtual disk server (vds) to allow importing virtual disks into a logical domain.
# ldm add-vds primary-vds0 primary

Create a virtual console concentrator (vcc) service for use by the virtual network terminal server daemon (vntsd)
# ldm add-vcc port-range=5000-5100 primary-vcc0 primary

Create a virtual switch service
(vsw) to enable networking between virtual network
(vnet) devices in logical domains
# ldm add-vsw net-dev=e1000g0 primary-vsw0 primary

Verify the services have been created by using the list-services subcommand.


# ldm list-services

Set Up the Control Domain

Assign cryptographic resources to the control domain.
# ldm set-mau 1 primary

Assign virtual CPUs to the control domain.
# ldm set-vcpu 4 primary

Assign memory to the control domain.
# ldm set-memory 4G primary
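
Optionally, you can review the control domain's resource assignments before saving the configuration:
# ldm list-bindings primary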

Add a logical domain machine configuration to the system controller (SC).
# ldm add-config initial

Verify that the configuration is ready to be used at the next reboot
# ldm list-config

factory-default
initial [next poweron]

Reboot the server
# shutdown -y -g0 -i6

Enable the virtual network terminal server daemon, vntsd
# svcadm enable vntsd
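
Verify that the daemon is online:
# svcs vntsd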

Create the zpool

# zpool create ldompool c1t2d0 c1t3d0

# zfs create ldompool/goldimage

# zfs create -V 15g ldompool/goldimage/disk_image
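
Optionally, confirm the datasets and the 15 GB volume were created:
# zfs list -r ldompool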



Creating and Starting a Guest Domain

Create a logical domain.
# ldm add-domain goldldom

Add CPUs to the guest domain.
# ldm add-vcpu 4 goldldom

Add memory to the guest domain
# ldm add-memory 2G goldldom

Add a virtual network device to the guest domain.
# ldm add-vnet vnet1 primary-vsw0 goldldom

Specify the device to be exported by the virtual disk server as a virtual disk to the guest domain
# ldm add-vdsdev /dev/zvol/dsk/ldompool/goldimage/disk_image vol1@primary-vds0

Add a virtual disk to the guest domain.
# ldm add-vdisk vdisk0 vol1@primary-vds0 goldldom

Set auto-boot and boot-device variables for the guest domain
# ldm set-variable auto-boot\?=false goldldom
# ldm set-var boot-device=vdisk0 goldldom


Bind resources to the guest domain goldldom and then list the domain to verify that it is bound.
# ldm bind-domain goldldom
# ldm list-domain goldldom


NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
primary          active     -n-cv-  SP      4     4G       0.2%  15m
goldldom         bound      ------  5000    4     2G

Start the guest domain
# ldm start-domain goldldom
Connect to the console of a guest domain
# telnet 0 5000
Trying 0.0.0.0...
Connected to 0.
Escape character is '^]'.
Connecting to console "goldldom" in group "goldldom" ....
Press ~? for control options ..

{0} ok

Jump-Start the goldldom

{0} ok boot net - install
We can log in to the new guest and verify that the root file system is ZFS:

# zpool list
NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
rpool  14.9G  1.72G  13.2G    11%  ONLINE  -
Restore the goldldom configuration to an "as-manufactured" state with the sys-unconfig command


# sys-unconfig
This program will unconfigure your system.  It will cause it
to revert to a "blank" system - it will not have a name or know
about other systems or networks.
This program will also halt the system.
Do you want to continue (y/n) y

Press ~. in order to return to the primary domain

Stop the guest domain
# ldm stop goldldom
Unbind the guest domain

# ldm unbind  goldldom
Snapshot the disk image
# zfs snapshot ldompool/goldimage/disk_image@sysunconfig

Create a new ZFS file system for the new guest
# zfs create ldompool/domain1

Clone the goldldom disk image
# zfs clone ldompool/goldimage/disk_image@sysunconfig ldompool/domain1/disk_image

# zfs list
NAME                                        USED  AVAIL  REFER  MOUNTPOINT
ldompool                                   17.0G   117G    21K  /ldompool
ldompool/domain1                             18K   117G    18K  /ldompool/domain1
ldompool/domain1/disk_image                    0   117G  2.01G  -
ldompool/goldimage                         17.0G   117G    18K  /ldompool/goldimage
ldompool/goldimage/disk_image              17.0G   132G  2.01G  -
ldompool/goldimage/disk_image@sysunconfig      0      -  2.01G  -

Creating and Starting the Second Domain


# ldm add-domain domain1
# ldm add-vcpu 4 domain1
# ldm add-memory 2G domain1
# ldm add-vnet vnet1 primary-vsw0 domain1
# ldm add-vdsdev /dev/zvol/dsk/ldompool/domain1/disk_image vol2@primary-vds0
# ldm add-vdisk vdisk1 vol2@primary-vds0 domain1
# ldm set-var auto-boot\?=false domain1
# ldm set-var boot-device=vdisk1 domain1

# ldm bind-domain domain1
# ldm list-domain domain1
NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
domain1          bound      ------  5001    8     2G

Start the domain
# ldm start-domain domain1

Connect to the console
# telnet 0 5001
{0} ok boot net -s

Copyright 1983-2008 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.
Booting to milestone "milestone/single-user:default".
Configuring devices.
Using RPC Bootparams for network configuration information.
Attempting to configure interface vnet0...
Configured interface vnet0
Requesting System Maintenance Mode
SINGLE USER MODE

# zpool import -f rpool
# zpool export rpool
# reboot


Answer the configuration questions

Log in to the new domain and verify that the root file system is ZFS:

# zpool list
NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
rpool  14.9G  1.72G  13.2G    11%  ONLINE  -

Monday Mar 16, 2009

Brief Technical Overview and Installation of Ganglia on Solaris

Ganglia is a scalable distributed monitoring system for high-performance computing systems such as clusters and Grids. It is based on a hierarchical design targeted at federations of clusters.


For this setup we will use the following software packages:


1. Ganglia - the core Ganglia package


2. Zlib - zlib compression libraries


3. Libgcc - low-level runtime library


4. RRDtool - Round Robin Database graphing tool


5. Apache web server with PHP support


You can get packages (1-3) from sunfreeware (depending on your architecture - x86 or SPARC).


Unzip and Install the packages



1. gzip -d ganglia-3.0.7-sol10-sparc-local.gz

pkgadd -d ./ganglia-3.0.7-sol10-sparc-local

2. gzip -d zlib-1.2.3-sol10-sparc-local.gz

pkgadd -d ./zlib-1.2.3-sol10-sparc-local

3. gzip -d libgcc-3.4.6-sol10-sparc-local.gz

pkgadd -d ./libgcc-3.4.6-sol10-sparc-local


4. You will need pkgutil from Blastwave in order to install the rrdtool software packages



/usr/sfw/bin/wget http://blastwave.network.com/csw/unstable/sparc/5.8/pkgutil-1.2.1,REV=2008.11.28-SunOS5.8-sparc-CSW.pkg.gz


 
gunzip pkgutil-1.2.1,REV=2008.11.28-SunOS5.8-sparc-CSW.pkg.gz


 
pkgadd -d pkgutil-1.2.1,REV=2008.11.28-SunOS5.8-sparc-CSW.pkg

 



Now you can install packages with all required dependencies with a single command:



/opt/csw/bin/pkgutil -i rrdtool
5. You will need to download Apache, PHP and the core libraries from CoolStack

Core libraries used by other packages



bzip2 -d CSKruntime_1.3.1_sparc.pkg.bz2

pkgadd -d ./CSKruntime_1.3.1_sparc.pkg



Apache 2.2.9,  PHP 5.2.6



bzip2 -d CSKamp_1.3.1_sparc.pkg.bz2

pkgadd -d ./CSKamp_1.3.1_sparc.pkg

The following packages are available:

1 CSKapache2 Apache httpd

(sparc) 2.2.9

2 CSKmysql32 MySQL 5.1.25 32bit

(sparc) 5.1.25

3 CSKphp5 PHP 5

(sparc) 5.2.6

Select package(s) you wish to process (or 'all' to process

all packages). (default: all) [?,??,q]:1,3


Select options 1 and 3.


Enable the web server service



svcadm enable svc:/network/http:apache22-csk

Verify it is working



svcs svc:/network/http:apache22-csk

STATE          STIME    FMRI

online         17:02:13 svc:/network/http:apache22-csk


Locate the web server DocumentRoot



grep DocumentRoot /opt/coolstack/apache2/conf/httpd.conf


DocumentRoot "/opt/coolstack/apache2/htdocs"

Copy the Ganglia directory tree



cp -rp /usr/local/doc/ganglia/web  /opt/coolstack/apache2/htdocs/ganglia

Change the rrdtool path in /opt/coolstack/apache2/htdocs/ganglia/conf.php from /usr/bin/rrdtool to /opt/csw/bin/rrdtool.
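
One way to make the change in place (a sketch; back up conf.php first) is with a Perl one-liner:

perl -pi -e 's|/usr/bin/rrdtool|/opt/csw/bin/rrdtool|' /opt/coolstack/apache2/htdocs/ganglia/conf.php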




 




 


Start the gmond daemon with the default configuration



/usr/local/sbin/gmond --default_config > /etc/gmond.conf

Edit /etc/gmond.conf and change name = "unspecified" to name = "grid1" (this is our grid name).
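
After the edit, the cluster block in /etc/gmond.conf should look roughly like this (the other fields are the shipped defaults and may differ in your file):

cluster {
  name = "grid1"
  owner = "unspecified"
  latlong = "unspecified"
  url = "unspecified"
}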


Verify that it has started : 



ps -ef | grep gmond
nobody 3774 1 0 16:57:41 ? 0:55 /usr/local/gmond

In order to debug any problem, try:



/usr/local/sbin/gmond --debug=9

Build the directory for the rrd images



mkdir -p /var/lib/ganglia/rrds
chown -R nobody  /var/lib/ganglia/rrds
Add the following line to /etc/gmetad.conf:

data_source "grid1"  localhost
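
The data_source directive also accepts an optional polling interval (in seconds) and explicit gmond address:port pairs, for example (illustrative values):

data_source "grid1" 15 localhost:8649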

Start the gmetad daemon



/usr/local/sbin/gmetad

Verify it:


 ps -ef | grep gmetad

nobody  4350     1   0 17:10:30 ?           0:24 /usr/local/sbin/gmetad
 


To debug any problem



/usr/local/sbin/gmetad --debug=9

Point your browser to: http://server-name/ganglia







Wednesday Mar 04, 2009

Technical overview: GlassFish 3.0 on Amazon cloud

The integrated GigaSpaces GlassFish solution and its components are captured in the following diagram:


 


   SLA-driven deployment environment:


The SLA-driven deployment environment is responsible for hosting all services in the network. It basically does matchmaking between the application requirements and the availability of resources over the network. It comprises the following components:




    • Grid Service Manager (GSM) – responsible for managing the application lifecycle and deployment.

    • Grid Service Container (GSC) – a lightweight container, essentially a wrapper on top of the Java process, that exposes the JVM to the GSM and provides a means to deploy and undeploy services dynamically.

    • Processing Unit (PU) – represents the application deployment unit. A Processing Unit is essentially an extension of the Spring application context that packages specific application components in a single package and uses dependency injection to mesh these components together. The Processing Unit is an atomic deployment artifact, and its composition is determined by the scaling and failover granularity of a given application. It is, therefore, the unit of scale and failover. There are a number of pre-defined Processing Unit types:






      • Web Processing Unit – responsible for managing web container instances and enabling them to run within the SLA-driven container environment. With a Web Processing Unit, one can deploy the web container as a group of services and apply SLAs or QoS semantics such as one-per-VM, one-per-machine, etc. In other words, one can easily use the Processing Unit SLA to determine how web containers are provisioned on the network. In our specific case most of the GlassFish v3 Prelude integration takes place at this level.

      • Data Grid Processing Unit – a processing unit that wraps the GigaSpaces space instances. By wrapping the space instances it adds the SLA capabilities available with each processing unit. One of the common SLAs is to ensure that primary instances do not run on the same machine as the backup instances. It also determines the deployment topology (partitioned, replicated) as well as the scaling policy, etc. The data grid includes another instance, not shown in the above diagram, called the Mirror Service. The Mirror Service is responsible for making sure that all updates made on the data grid are passed reliably to the underlying database.




 



  • Load Balancer Agent – responsible for listening for web container availability, adding instances to the load balancer list when a new container is added and removing them when a container goes away. The Load Balancer Agent is currently configured to work with the Apache load balancer but can easily be set up to work with any external load balancer.


How it works:


The following section provides a high-level description of how all the above components work together to provide high performance and scaling.


 



  • Deployment - The deployment of the application is done through the GigaSpaces processing-unit deployment command. Assigning a specific SLA as part of the deployment lets the GSM know how we wish to distribute the web instances over the network. For example, one could specify in the SLA that there should be only one instance per machine and define the minimum number of instances that need to be running. If needed, one can add specific system requirements such as JVM version, OS type, etc. to the SLA. The deployment command points to a specific web application archive (WAR). The WAR file needs to include a configuration attribute in its META-INF configuration that instructs the deployer tool to use GlassFish v3 Prelude as the web container for this specific web application. Upon deployment the GlassFish processing unit is started on the available GSC containers that match the SLA definitions. The GSC assigns a specific port to that container instance. When GlassFish starts it loads the WAR file automatically and starts serving HTTP requests for that instance of the web application.

  • Connecting to the load balancer - auto-scaling - The load balancer agent is assigned to each instance of the GSM. It listens for the availability of new web containers and ensures that the available containers join the load balancer by continuously updating the load balancer configuration whenever such a change happens. This happens automatically through the GigaSpaces discovery protocol and does not require any human intervention.

  • Handling failure - self-healing - If one of the web containers fails, the GSM automatically detects that and starts a new web container on one of the available GSC containers, if one exists. If there are not enough resources available, the GSM waits until such a resource becomes available. In cloud environments, the GSM initiates a new machine instance in such an event by calling the proper service on the cloud infrastructure.

  • Session replication - The HttpSession can be automatically backed up by the GigaSpaces In-Memory Data Grid (IMDG). In this case user applications do not need to change their code. When user data is stored in the HttpSession, that data gets stored into the underlying IMDG. When the HTTP request is completed, that data is flushed into the shared data grid servers.

  • Scaling the database tier - Beyond session state caching, the web application can get a reference to the GigaSpaces IMDG and use it to store data in memory in order to reduce contention on the database. The GigaSpaces data grid automatically synchronizes updates with the database. To enable maximum performance, the update to the database is in most cases done asynchronously (write-behind). A built-in Hibernate plug-in handles the mapping between the in-memory data and the relational data model. You can read more on how this model handles failure, consistency and aggregated queries here.

Tuesday Mar 03, 2009

GlassFish 3.0 on Amazon cloud

Here is how you can run the demo of GlassFish 3.0 on the Amazon cloud.


Where should I start? The best way to get started is to run a demo application and see for yourself how this integration works. To make it even simpler we offer the demo on our new cloud offering. This lets you experience, in one click, a full production-ready environment that includes full clustering, dynamic scaling, full high availability and session replication. To run the demo on the cloud follow these steps:


1. Download the GlassFish web scaling deployment file from here to your local machine.


2. Go to the mycloud page and get your free access code – this will give you access to the cloud.


 


 


3. Select stock-demo-2.3.0-gf-sun.xml and then hit the Deploy button (you first need to save the attached file on your local machine). The system will start provisioning the web application on the cloud. This includes a machine for managing the cloud, a machine for the load balancer and machines for the web and data-grid containers. After approximately 3 minutes the application will be deployed completely. At this point you should see a “running” link on the load-balancer machine. Click on this link to open your web client application.




4. Test auto-scaling – click the “running” link multiple times to open more clients. This increases the load (requests/sec) on the system. As soon as the requests/sec grow beyond a certain threshold you’ll see new machines being provisioned. After approximately two minutes the machine will be running and a new web container will be auto-deployed onto that machine. This new web container will be linked automatically with the load balancer, and the load balancer in turn will spread the load to this new machine as well. This reduces the load on each of the servers.



5. Test self-healing – you can now kill one of the machines and see how your web client behaves. You should see that even though the machine was killed the client was hardly affected and the system recovered automatically.


Seeing what’s going on behind the scenes:


All this may seem like magic to you. If you want to access the machines and watch the web containers, the data-grid instances and the machines, as well as the real-time statistics, you can click on the Manage button. This opens up a management console that runs on the cloud and is accessed through the web. With this tool you can view all the components of our system. You can even query the data using the SQL browser and view the data as it enters the system. In addition, you can choose to add more services or relocate services with a simple mouse click or a drag-and-drop action.


 


For more information regarding using GlassFish see http://www.sun.com/software/products/glassfish_portfolio/


 

OpenSolaris on Amazon EC2


Yesterday Sun hosted the Israeli Association of Grid Technology (IGT) event "Amazon AWS Hands-on Workshop" at the Sun office in Herzeliya. During the event Simone Brunozzi, Amazon Web Services Evangelist, demonstrated Amazon's EC2 and S3 using the AWS console. There were 40 attendees from a wide breadth of technology firms.

For more information regarding using OpenSolaris on Amazon EC2 see http://www.sun.com/third-party/global/amazon/index.jsp.


Wednesday Jan 28, 2009

xVM public API

There are two ways of accessing the public API: the simplest is writing a Java client using JMX; alternatively, non-Java client programs can use Web Services for Management (WS-MAN, JSR-262).

 



This example demonstrates direct access to the read-only copy of the domain model. It can be run against either Sun xVM Server or Sun xVM Ops Center.


This example performs the following functions:



  • Configures the connection

  • Performs security settings

  • Opens the connection (locally or remotely)

  • Queries the domain model for all the OperatingSystem objects and displays their host names

  • Closes the connection


 



ServerClient.java

/**
 * Copyright 2006 Sun Microsystems, Inc. All rights reserved.
 * SUN PROPRIETARY/CONFIDENTIAL. Use is subject to license terms.
 */
import com.sun.hss.type.os.OperatingSystem;
import com.sun.hss.type.server.Server;
import com.sun.hss.type.virtserver.VirtServerContainer;
import com.sun.hss.type.virtserver.VirtServerOperatingSystem;
import com.sun.hss.type.xvmserver.XVMApplianceDetails;
import com.sun.hss.type.xvmserver.XVMServer;
import java.util.HashMap;
import java.util.Map;

import java.security.cert.CertificateException;
import java.security.cert.X509Certificate;

import javax.net.ssl.X509TrustManager;
import javax.net.ssl.TrustManager;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocketFactory;

import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import javax.management.ObjectName;
import com.sun.xvm.services.guest.GuestServiceMXBean;
import com.sun.xvm.services.xvmserver.XVMServerServiceMXBean;
import com.sun.xvm.services.guest.GuestDetails;

public class ServerClient {

static private JMXConnector connector = null;
static private MBeanServerConnection mbsc = null;
static private String hostname = null;

/**
 * Simple SocketFactory that uses a trust manager that ALWAYS
 * accepts the certificate provided by the server we try to connect to.
 *
 * This is unsafe and should NOT be used for production code.
 */
private static SSLSocketFactory getSocketFactory() {
X509TrustManager tm = new AnyServerX509TrustManager();
TrustManager[] tms = {tm};

try {
SSLContext sslc = SSLContext.getInstance("TLSv1");
sslc.init(null, tms, null);

SSLSocketFactory factory = sslc.getSocketFactory();

return factory;
} catch (Exception ex) {

return null;
}
}

/**
 * Small trust manager that ALWAYS accepts the certificate provided
 * by the server we try to connect to.
 *
 * This is unsafe and should NOT be used for production code.
 */
public static class AnyServerX509TrustManager implements X509TrustManager {
// Documented in X509TrustManager

public X509Certificate[] getAcceptedIssuers() {
// since client authentication is not supported by this
// trust manager, there's no certicate authority trusted
// for authenticating peers
return new X509Certificate[0];
}

// Documented in X509TrustManager
public void checkClientTrusted(X509Certificate[] certs, String authType)
throws CertificateException {
// this trust manager is dedicated to server authentication
throw new CertificateException("not supported");
}

// Documented in X509TrustManager
public void checkServerTrusted(X509Certificate[] certs, String authType)
throws CertificateException {
// any certificate sent by the server is automatically accepted
return;
}
}

/**
 * Create a WSMAN connection using the given credentials
 *
 * @param host
 * @param user
 * @param pass
 */
private static void setupConnection(String host, String user, String pass)
throws Exception {
try {
int port = 443;

String urlPath = "/wsman/ea/jmxws";

Map env = new HashMap();

// credentials for basic authentication with the server
String[] creds = new String[2];
creds[0] = user;
creds[1] = pass;

env.put(JMXConnector.CREDENTIALS, creds);

// provide a specific socket factory to avoid the need to setup
// a truststore
env.put("com.sun.xml.ws.transport.https.client.SSLSocketFactory",
getSocketFactory());

// Create JMX Agent URL over https
JMXServiceURL url = new JMXServiceURL("ws-secure", host, port, urlPath);

// System.out.println("WSMAN client opening a connection with url " + url.toString());

// Connect the JMXConnector
connector = JMXConnectorFactory.connect(url, env);

// Get the MBeanServerConnection
mbsc = connector.getMBeanServerConnection();

} catch (Exception ex) {
System.out.println("Got an exception while trying to open a WSMAN connection : " + ex.toString());
throw ex;
}
}

public static void main(String[] args) {
if ((args.length == 0) || (args.length > 3)) {
System.err.println("Usage: user password [target]");
System.exit(1);
}

String userName = args[0];
String userPass = args[1];

hostname = "localhost";

if (args.length == 3) {
hostname = args[2];
}

try {
// Open the WSMAN connection and get the MBeanServerConnection
setupConnection(hostname, userName, userPass);

// get details on these xVM servers
serverService();

} catch (Exception ex) {

System.out.println("WSMAN client error : " + ex.toString());
ex.printStackTrace();
System.exit(1);

} finally {
// close connection if necessary
if (connector != null) {
try {
connector.close();
} catch (Exception dc) {
}

}
}
}

private static void serverService() throws Exception {

try {
XVMServerServiceMXBean xssmxb = ServerClientServices.getXVMServerService(mbsc, false);

// get the list of xVM servers
ObjectName[] servers = xssmxb.getXVMApplianceDetailsObjectNames(null, null);

if ((servers == null) || (servers.length == 0)) {
System.out.println("No xVM server detected on " + hostname);
return;
}

GuestServiceMXBean guestmxb = ServerClientServices.getGuestService(mbsc, false);

// get details on these xVM servers
for (int i = 0; i < servers.length; i++) {

XVMApplianceDetails details = xssmxb.getXVMApplianceDetails(servers[i]);

if (details != null) {
OperatingSystem os = details.getOperatingsystem();

Server svr = details.getServer();

VirtServerContainer vsc = details.getVirtservercontainer();

XVMServer xsvr = details.getXvmserver();

if (xsvr != null) {
System.out.println("xVM Server name = " + xsvr.getApplianceName());
}

}

// get guests on this xVM server
ObjectName[] guests = guestmxb.getGuestObjectNames(servers[i], null, null);

// skip this server if it has no guests
if ((guests == null) || (guests.length == 0)) {
System.out.println("No guest on this xVM server");
continue;
}

java.util.Map<java.lang.String, java.util.Set<java.lang.String>> map = null;
GuestDetails guestDetails[] = guestmxb.getGuestDetails(guests, map);
if (guestDetails != null) {
for (int k = 0; k < guestDetails.length; k++) {
VirtServerOperatingSystem virt = guestDetails[k].getVirtServerOperatingSystem();
System.out.println("guest hostname is " + k + ": " + virt.getHostname());

}
}
}
} catch (Exception ex) {
System.err.println("Got Exception while testing the xVM server service : " + ex.toString());
throw ex;
}

}
}


ServerClientServices.java

public class ServerClientServices
{
   /**************************************************************************
    * getCacheManagerService
    *
    * @param mbsc
    * @return CacheManagerMXBean
    * @throws java.lang.Exception
    **************************************************************************/
   public static CacheManagerMXBean getCacheManagerService(MBeanServerConnection mbsc, boolean verbose)
          throws Exception
   {
       CacheManagerMXBean cmmxb = null;

       try {
           // build the objectname to access the service
           ObjectName cmson = new ObjectName(CacheManagerMXBean.DOMAIN +
                                             ":type=" + CacheManagerMXBean.TYPE);

           // verify that the service is currently deployed
           if (!mbsc.isRegistered(cmson)) {
               System.out.println("Cache Manager service is not registered : aborting");
               throw new Exception("MXBean for Cache Manager service not registered");
           }

           // create a proxy to access the service
           cmmxb = JMX.newMXBeanProxy(mbsc,
                                      cmson,
                                      CacheManagerMXBean.class,
                                      false);

           if (verbose) {
               System.out.println("Proxy for Cache Manager service : OK");
           }
           return cmmxb;

       } catch (Exception ex) {
           System.err.println("Got Exception while creating the Cache Manager service proxy : " + ex.toString());
           ex.printStackTrace();
           throw ex;
       }
   }
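
For reference, here is a hypothetical way to compile and run the client once the remaining helper methods (getXVMServerService and getGuestService) are in place; the xvm-client.jar classpath entry, the credentials and the host name below are placeholders for the actual xVM client libraries, your user and your management server:

javac -cp xvm-client.jar ServerClient.java ServerClientServices.java
java -cp .:xvm-client.jar ServerClient admin secret xvm-host.example.com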

Monday Jan 12, 2009

Solaris iSCSI Server

This document describes how to build an iSCSI server based on the Solaris platform on a Sun X4500 server.



On the target (server)


The server has six controllers, each with eight disks, and I have built the storage pool to spread I/O evenly and to enable me to build eight RAID-Z stripes of equal length.


zpool create -f tank \
raidz c0t0d0 c1t0d0 c4t0d0 c6t0d0 c7t0d0 \
raidz c1t1d0 c4t1d0 c5t1d0 c6t1d0 c7t1d0 \
raidz c0t2d0 c4t2d0 c5t2d0 c6t2d0 c7t2d0 \
raidz c0t3d0 c1t3d0 c5t3d0 c6t3d0 c7t3d0 \
raidz c0t4d0 c1t4d0 c4t4d0 c6t4d0 c7t4d0 \
raidz c0t5d0 c1t5d0 c4t5d0 c5t5d0 c7t5d0 \
raidz c0t6d0 c1t6d0 c4t6d0 c5t6d0 c6t6d0 \
raidz c0t7d0 c1t7d0 c4t7d0 c6t7d0 c7t7d0 \
spare c0t1d0 c1t2d0 c4t3d0 c6t5d0 c7t6d0 c5t7d0

After the pool is created, the zfs utility can be used to create a 50 GB ZFS volume.


zfs create -V 50g tank/iscsivol000

Enable the iSCSI target service


svcadm enable iscsitgt

Verify that the service is enabled.


svcs -a | grep iscsitgt


To view the list of commands, iscsitadm can be run without any options:


iscsitadm

Usage: iscsitadm -?,-V,--help
Usage: iscsitadm create [-?] <OBJECT> [-?] [<OPERAND>]
Usage: iscsitadm list [-?] <OBJECT> [-?] [<OPERAND>]
Usage: iscsitadm modify [-?] <OBJECT> [-?] [<OPERAND>]
Usage: iscsitadm delete [-?] <OBJECT> [-?] [<OPERAND>]
Usage: iscsitadm show [-?] <OBJECT> [-?] [<OPERAND>]

For more information, please see iscsitadm(1M)



To begin using the iSCSI target, a base directory needs to be created.


This directory is used to persistently store the target and initiator configuration that is added through the iscsitadm utility.


iscsitadm modify admin -d /etc/iscsi
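
To confirm the base directory was set, you can display the admin properties:

iscsitadm show admin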




Once the volumes are created, they need to be exported to an initiator


iscsitadm create target -b /dev/zvol/rdsk/tank/iscsivol000 target-label


Once the targets are created, iscsitadm's "list" command and "target" subcommand can be used to display the targets and their properties:


iscsitadm list target -v

On the initiator (client)


Install the Microsoft iSCSI initiator (client) from http://www.microsoft.com/downloads/details.aspx?FamilyID=12cb3c1a-15d6-4585-b385-befd1319f825&displaylang=en

Wednesday Jan 07, 2009

IGT2008 video and presentation

IGT2008 video and presentations are now available online:

http://video.new-app.com/customers/grid/IGT2008/main.html

Wednesday Dec 03, 2008

IGT2008 - The World Summit of Cloud Computing

Yesterday I went to the IGT 2008 event. In addition to exhibiting at the event, I delivered an opening demo presentation and a hands-on xVM Server workshop.


About

This blog covers cloud computing, big data and virtualization technologies
