Tuesday Oct 27, 2015

Partition Targeting and Virtual Targets in WebLogic Server 12.2.1

If you are not familiar with WebLogic Server multi-tenancy support, you should read Tim Quinn's blog: Domain Partitions for Multi-tenancy in WebLogic Server.  It gives a nice overview of the partition feature in WLS 12.2.1 and covers concepts that are used in this article.

Partition Targeting

As you learned in Tim's blog, a domain partition (partition for short) is an administrative and runtime slice of a WebLogic domain that contains a collection of applications and resources. Those applications and resources need to run somewhere in your domain: on a cluster, on a specific managed server, or on the admin server itself.

In 12.1.3 and before, you target applications and resources individually -- typically directly to a managed server or cluster.  In a partition this works a bit differently:

  • In a partition resources and applications are grouped into resource groups, and it is the resource group that is targeted. When a target is set on a resource group it applies to all of the resources and applications inside of the resource group (you don't target them individually).
  • Resource groups inside partitions are targeted to virtual targets – not directly to managed servers or clusters.

Virtual Targets

So what are virtual targets? If you are familiar with virtual hosts in WLS (or the general concept of virtual hosts in web servers), then you can think of virtual targets as similar to virtual hosts. A virtual target accomplishes three things:

  1. It provides a physical target (cluster or server). The virtual target is deployed to the physical target, and any resources deployed to the virtual target are deployed to the physical target that the virtual target wraps.
  2. It provides a separate HTTP server. So any web application deployed to the virtual target runs in its HTTP server (and not the default HTTP server).
  3. It provides a mapping of incoming requests (in the form of a URL) to a specific virtual target and therefore to a specific partition.

Let's look at each of these a bit closer.

Physical Targets

You can specify a managed server or a cluster as a physical target for a virtual target (using the setTargets or addTarget methods on VirtualTargetMBean). Note that even though the VirtualTargetMBean methods support multiple physical targets, validation is currently in place to prevent setting more than one physical target on a virtual target.

The physical target accomplishes two things:

  1. It is the target that the virtual target is deployed to (and therefore where the virtual HTTP server will run)
  2. It is the target that is used when resources and apps are deployed to the virtual target

HTTP Server

Each Virtual Target has its own HTTP server that is lazily initialized on the managed server (or cluster of managed servers) where the virtual target is running (targeted). Any web application or resource that is deployed to the virtual target will run in this HTTP server. This avoids context path clashes with applications running in other virtual targets (or in the default HTTP Server).
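To make the isolation concrete, here is a small, purely illustrative Python sketch (not WLS code; the class and names are invented for this example) of why giving each virtual target its own HTTP server avoids context path clashes:

```python
# Toy model: each virtual target's HTTP server keeps its own map of
# context paths, so the same path can exist in two virtual targets.
class ToyHttpServer:
    def __init__(self):
        self.apps = {}  # context path -> application name

    def deploy(self, context_path, app):
        # A clash is only possible within a single HTTP server.
        if context_path in self.apps:
            raise ValueError('context path clash: ' + context_path)
        self.apps[context_path] = app

vt_red, vt_blue = ToyHttpServer(), ToyHttpServer()
vt_red.deploy('/shop', 'red-shop-app')
vt_blue.deploy('/shop', 'blue-shop-app')  # no clash: separate server
print(sorted(vt_red.apps) == sorted(vt_blue.apps))  # True
```

With a single shared server the second deploy of "/shop" would fail; with per-virtual-target servers both applications coexist.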

URL mapping

When a request enters a WLS instance two key things need to occur:

  1. The partition that the URL "belongs to" needs to be identified
  2. For HTTP requests, the VirtualTarget needs to be located so that the proper HTTP server can be used.

This means that the virtual target needs information to match against incoming URLs to determine whether they "match" the virtual target.

A Virtual Target has three attributes that can be set to accomplish this matching:

  1. Virtual hostnames: one or more virtual hostnames
  2. uriPrefix: a single URI prefix path
  3. port: a port number

Virtual Hostnames

A Virtual Target can have zero or more virtual hostnames. These hostnames are matched against the hostname used by the client (with HTTP this is provided by the Host header introduced in HTTP 1.1). The matching is a literal string comparison, but you can specify more than one hostname. This is handy for aliasing all possible hostnames that might be used in requests (myserver, myserver.us.com, etc.). Also, the virtual hostname is not required, so if you want to use uriPrefix only (or port number only) that's fine.

URI Prefix

In addition to matching on the hostname, you can match on the URI of the incoming request, so that all requests that start with a specific URI path match that virtual target. For example "/partition1".

Port Number

Port number based routing is provided primarily for backwards compatibility with older clients where changing the URL is not an option. You can specify an explicit port number on a virtual target, or a port offset. If a port offset is used then the offset is applied to the default channel of the server, or to a specific base channel if you specify a base channel name on the Virtual Target.

If you specify a port number on a Virtual Target, then WLS will automatically create channels for the virtual target at runtime. You don't need to create these channels yourself. Any requests that come in on that channel (port number) will go to the virtual target. In this regard the port number trumps both the URI prefix and the virtual hostname.
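The port selection described above can be sketched in plain Python (this is an illustrative model, not WLS code; the names and the 7001 default are assumptions for the example):

```python
# Sketch of port-based routing setup: an explicit port wins outright;
# otherwise an offset is applied to a base channel's listen port.
DEFAULT_CHANNEL_PORT = 7001  # the server's default channel port

def virtual_target_port(explicit_port=None, port_offset=None,
                        base_channel_port=DEFAULT_CHANNEL_PORT):
    if explicit_port is not None:
        return explicit_port
    # The offset applies to the default channel, or to a named base
    # channel if one is configured on the virtual target.
    if port_offset is not None:
        return base_channel_port + port_offset
    return None  # no port-based routing configured

print(virtual_target_port(port_offset=276))     # 7277
print(virtual_target_port(explicit_port=7277))  # 7277
```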


Here are some examples to illustrate how the matching works. This assumes the server's default port number is 7001.

Hostnames   uriPrefix   Port Num   Matching URLs                       Non-matching URLs                   Notes
red         -           -          http://red:7001/myapp               http://red.us.com:7001/myapp        The host name must be an exact string match.
colors      /red        -          http://colors:7001/red/myapp        http://red.com/red/myapp            Both hostname and uriPrefix must match.
-           -           7277       http://anyhost.com:7277/myapp       http://anyhost.com:7001/myapp       Any request coming in on port 7277 matches; otherwise no match.
-           /red        -          http://anyhost.com:7001/red/myapp   http://colors.com:7001/blue/myapp   No hostname specified; the hostname is essentially a wildcard.
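The matching rules in the table can be sketched in plain Python (an illustrative model only, not the actual WLS routing code; the function and dictionary keys are invented for this example):

```python
# Toy virtual-target matcher: unset attributes act as wildcards,
# except that an explicit port takes precedence over everything else.
from urllib.parse import urlparse

def matches(vt, url, default_port=7001):
    """vt is a dict with optional keys: 'hostnames' (list),
    'uri_prefix' (str), 'port' (int)."""
    u = urlparse(url)
    port = u.port or default_port
    if vt.get('port') is not None:
        # Port-based routing: the port alone decides the match.
        return port == vt['port']
    if vt.get('hostnames') and u.hostname not in vt['hostnames']:
        return False  # literal string comparison of hostnames
    if vt.get('uri_prefix') and not u.path.startswith(vt['uri_prefix']):
        return False
    return True

red_vt = {'hostnames': ['red']}
print(matches(red_vt, 'http://red:7001/myapp'))         # True
print(matches(red_vt, 'http://red.us.com:7001/myapp'))  # False
```

Running the table's other rows through this toy matcher gives the same results shown in the Matching/Non-matching columns.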


Pick a scheme, stay with it, and don't get fancy with strange combinations. Typically you'll want to use one of these patterns:

  1. Differentiate by virtual hostname only: red.com, blue.com, green.com
  2. Differentiate by uriPrefix only: colors.com:7001/red, colors.com:7001/blue, colors.com:7001/green
  3. Differentiate by port number only: colors.com:7277, colors.com:7278, colors.com:7279
    1. Only differentiate by port number when you have to for backwards compatibility with existing clients.

Also, keep in mind that virtual targets do not have anything to do with how a request gets to a specific server/port. So if you use virtual hostnames you have to handle this yourself, typically by creating the appropriate hostname entries in DNS or by using a load balancer.

If you are not familiar with the general concept of HTTP virtual servers (aka virtual hosts) it is probably worth reading up on that topic.

Partition Targeting

As mentioned in the introduction, resources and applications in a partition are grouped into resource groups, and it is the resource groups that are targeted. There are a couple of important concepts:

  1. Partition's available targets (PartitionMBean availableTargets): this is a list of virtual targets that can be used within the partition to target resource groups. Typically a WLS admin will create a number of virtual targets in the domain, and then assign a subset of them to a specific partition. Resource groups within the partition can only be targeted to one of these available targets.
  2. Partition's default target (PartitionMBean defaultTargets): a partition can also have zero or more default targets. Resource groups in the partition can be explicitly targeted (by using the methods on ResourceGroupMBean to add or set targets). But if no explicit targets are set on a resource group, then by default it picks up the partition's default targets.
    1. Note that there is a boolean attribute on ResourceGroupMBean: useDefaultTargets. If this is true (and by default it is) then the resource group will use the partition's default targets for targeting if no explicit target is set. If this is false then the resource group will not use the partition's defaults and you must set the targets on the resource group explicitly. Also, if you explicitly set targets on a resource group then this flag is flipped to "false" with the expectation that you no longer want to pick up the partition's defaults.
  3. Partition's effective targets: the (sub)set of available targets that are in use at any given time by resource groups in the partition.
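The default-target fallback described above can be sketched in a few lines of Python (an illustrative model, not WLS code; the function and parameter names are invented):

```python
# How a resource group's effective targets are resolved from explicit
# targets, the useDefaultTargets flag, and the partition defaults.
def resolve_targets(rg_targets, use_default_targets, partition_defaults):
    if rg_targets:
        return list(rg_targets)        # explicit targets win
    if use_default_targets:
        return list(partition_defaults)  # fall back to the defaults
    return []                          # untargeted: nothing deploys

# No explicit targets: the partition's defaults are picked up.
print(resolve_targets([], True, ['AcmeVT']))            # ['AcmeVT']
# Explicit targets win (setting them also flips the flag to False).
print(resolve_targets(['OtherVT'], False, ['AcmeVT']))  # ['OtherVT']
```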

So a common sequence for setting up partition targeting is:

  1. WLS admin creates some virtual targets at the domain level for use by partitions
  2. WLS admin creates partitions, and sets the available targets on the partitions to the set of virtual targets they want each partition to use
  3. WLS (or Partition) admin creates resource groups in the partition and sets the resource group targets based on the available targets for the partition (or use the partition default)

Note that in the simplest case you just assign one virtual target to a partition (as an available target) and make it the partition's default target. Then all resource groups in the partition will use that one target.


Some things to watch out for:

  1. Hogging virtual targets: if you define a virtual target with no hostname and a uriPrefix of "/", then that virtual target would hog all requests coming into the domain.  You never want that. Things will break. WLS configuration validation tries to prevent the creation of such virtual targets, but it can't catch all cases. So be mindful that you want the virtual target to only match requests specifically intended for it.
  2. Restrictions: in 12.2.1 there are some targeting restrictions. Some examples:
    1. A virtual target can not be shared by more than one partition
    2. A virtual target can have only one (server or cluster) set on it as a target
    3. A resource group can be targeted to more than one virtual target
      1. Unless the resource group contains one or more of: JMSServer, MessagingBridge, PathService, JMSBridgeDestination, FileStore, JDBCStore, JMSSystemResource

End-to-End Example

Here is a simple example using WLST that creates a partition named "Acme" with a single resource group and virtual target. An application is deployed to that resource group. The virtual target has "myserver" (the admin server) as its physical target and uses a uriPrefix of "/acme" -- so to access the application you must prefix the URL with "/acme". Finally, the partition is started, since partitions are not running by default.

import os,sys,urllib,time

# Change this to the location of a simple web application.
APP_PATH = '/tmp/webapp.war'
# Change these to your admin login credentials and connection URL
ADMIN_USER = 'weblogic'
ADMIN_PASSWORD = 'welcome1'
T3_URL = 't3://localhost:7001'
# Change this to be the name of the server you want to deploy the app to
SERVER = 'myserver'

print "Connecting to " + T3_URL
connect(ADMIN_USER, ADMIN_PASSWORD, T3_URL)

edit()
startEdit()

tgt=getMBean('/Servers/' + SERVER)

# Create "Acme" virtual target
# We set a uriPrefix only (no virtual host names)
vt=cmo.createVirtualTarget('AcmeVT')
vt.setUriPrefix('/acme')
vt.addTarget(tgt)
# The VT contains a virtual HTTP server. We can configure it
# via vt.getWebServer() if needed.

# Create Acme partition and ResourceGroup
part=cmo.createPartition('Acme')
part.addAvailableTarget(vt)
part.addDefaultTarget(vt)
part.createResourceGroup('SimpleRG')

save()
activate()

# Deploy the application to SimpleRG in partition Acme
progress=deploy('webapp', APP_PATH, partition='Acme', resourceGroup='SimpleRG')
while not (progress.isCompleted() or progress.isFailed()) :
    time.sleep(1)

# You must start the partition to get the stuff in it running
# This is a convenience command to start the partition and
# wait for it to come up
startPartitionWait(part)

# Now hit the webapp at (for example) http://localhost:7001/acme/webapp/webapp.jsp

Monday Mar 25, 2013

Troubleshooting GlassFish Update Center

After installing GlassFish and running either the bin/pkg or bin/updatetool commands for the first time you may see a message like this:

 The software needed for this command (updatetool) is not installed.

and you will be asked if you would like to install the software or not. GlassFish distributions do not always contain the full Update Center Toolkit, instead it is downloaded on demand. If you answer "yes" then the software needed to support the command (pkg or updatetool) will be installed on your system. This is commonly referred to as Update Center bootstrapping.

Sometimes during this bootstrap process you may see a message like:

 Could not download application packages. This could be because:
   - a proxy server is needed to access the internet. Please ensure that
     the system proxy server settings in your Internet Options control panel
     (under Connections:LAN Settings) are correct, or set the HTTP_PROXY
     environment variable to the full URL of the proxy server.
   - the package server or network connection is slow.
     If you are getting time out errors you can try setting the
     environment variables and try again. For example to increase
     the timeouts to 300 seconds set them to 300
   - the package server is down or otherwise inaccessible or it is
     generating invalid data. Please contact the provider of the package

This is basically saying that there was a problem communicating with the GlassFish package repositories during the bootstrap process. This could be caused by an HTTP proxy setting issue, or a bandwidth issue (causing timeouts) or there could be a problem with the GlassFish package repositories themselves (usually not the case, but it happens).

If you get this message, here are some things to try:

Do you need to set an HTTP Proxy?

In your company, or home, network, do you need to set an HTTP proxy? Depending on the OS platform the Update Center Toolkit will try to automatically pick up your HTTP proxy settings. But sometimes this does not work. If you require an HTTP proxy in your network environment then try setting the HTTP_PROXY environment variable explicitly, and then re-run the command (pkg or updatetool). For example:

Unix (bash):

export http_proxy=http://www-myproxy.com:80


Windows:

set HTTP_PROXY=http://www-myproxy.com:80

Is your network connection to the US slow?

The GlassFish package repositories are currently hosted in the US and not mirrored. If your bandwidth is low, or your latency to the US is high, then the bootstrap process can time out. To work around this, try setting the following timeouts:

Unix (bash):

export PKG_CLIENT_CONNECT_TIMEOUT=300
export PKG_CLIENT_READ_TIMEOUT=300

Windows:

set PKG_CLIENT_CONNECT_TIMEOUT=300
set PKG_CLIENT_READ_TIMEOUT=300

Try pkg first, then pkg install updatetool.

Another option is to bootstrap the (smaller) 'pkg' command first, and then use the pkg command to install updatetool (instead of bootstrapping updatetool). For example:

pkg list

Answer 'y' to bootstrap the pkg command. Then:

pkg install updatetool

to install updatetool using the pkg command.

Friday May 11, 2012

Installing MacPorts Behind a Proxy

I needed to build a project on my MacBook that required GNU msgfmt. I could have built it from source, but instead I decided to install MacPorts in case I needed other open source type packages in the future.

Installing MacPorts is fairly straightforward, but the first thing you must do after installing the base image is synchronize with the MacPorts repository by running:

$ sudo port -d sync

That failed for me because MacPorts uses rsync to access its repositories, and I was sitting behind a firewall that requires the use of an HTTP proxy to get to the internet. Setting RSYNC_PROXY didn't help since (apparently) our proxy does not support rsync.

Fortunately there is an alternative. You can use Subversion to sync up with the repository content, and Subversion works over HTTP. After making sure my HTTP proxy for svn was set (by checking my ~/.subversion/servers file), I followed the directions described in How to sync your ports tree using Subversion (over HTTP) with the following modifications:

  1. Before checking out the initial copy of the ports tree I changed permission on /opt/local/var/macports/sources/trunk/dports to be owned by me (jdipol) and not root.
  2. I did the initial "svn co" as me and not root. This left the subversion workspace owned by me and not root.

Why did I do this? Because "port sync" runs svn to make sure the ports tree is up to date, and it runs svn as the user that owns the subversion workspace. I wanted that to be jdipol so it always picks up my subversion settings from ~jdipol/.subversion -- not from root's environment. So now even though I run "port sync" as root it still runs "svn update" as me.

And don't forget: you must install Apple's Xcode even if you don't plan on doing any development, because MacPorts requires "make" to process some of the ports. So make sure you install Xcode, including the command line tools:

  1. Install Xcode from the AppStore -- it's free
  2. Start Xcode
  3. Select Preferences->Downloads and install Command Line Tools

Once you do that, then you are ready to run:

$ sudo port -d sync

To get msgfmt all I needed to do was:

$ sudo port install gettext

If you want to see all the packages available to you, run

$ port list

Thursday Mar 01, 2012

GlassFish 3.1.2: Secure By Default Changes

Secure by default is the characteristic of software where its default installation results in a secure configuration. Often there is a trade off between ease of use and the degree of default security.

In GlassFish 3.1.2 we have improved the secure by default behavior, and we've tried to do so without impacting the ease of use that GlassFish is known for (at least not too much). We had two main goals in 3.1.2:

  1. More actively encourage the user to set an admin password at installation time.
  2. Require an admin password whenever remote administration (aka "secure admin") is enabled.

To achieve this you'll notice the following changes when you use 3.1.2:

  1. The installer now prompts for an admin password even in the default installation mode. You are still allowed to choose no password because remote administration is not enabled out of the box.
  2. Any time remote administration is enabled (by running the enable-secure-admin command for example), you will be required to have an admin password set. Basically GlassFish will do what it can to prevent you from enabling remote administration while not having an admin password.
  3. For the Oracle GlassFish Server commercial zip distributions (where there is no installer) you will be prompted for an admin password the first time you start the default domain (DAS). And just like with the installer, you are still allowed to choose no password because remote administration is not enabled out of the box. The open source / community zips continue to behave as they did in 3.1.1 (no prompting).

One issue our QA organization ran into when running automated tests against the Oracle GlassFish Server commercial zip bundles involved scripts that automatically installed (unzipped) GlassFish and then started the default domain. These scripts started failing because the server was prompting for an admin password at startup.

The solution is to use the change-admin-password command to set an admin password before starting the domain. As part of the 3.1.2 changes we enhanced change-admin-password so that it could be run without the domain (DAS) running if you use the "--domain_name" option.  Here is an example of how to set the admin password on a domain before starting the domain the first time (command output removed for brevity):

$ unzip ogs-3.1.2-web.zip
$ cd glassfish3/glassfish
$ touch /tmp/password.txt
$ chmod 600 /tmp/password.txt
$ echo "AS_ADMIN_PASSWORD=" > /tmp/password.txt
$ echo "AS_ADMIN_NEWPASSWORD=newadminpassword" >> /tmp/password.txt
$ bin/asadmin --user admin --passwordfile /tmp/password.txt change-admin-password \
    --domain_name domain1
$ rm /tmp/password.txt
$ bin/asadmin start-domain

This does the following:

  • Installs glassfish by unzipping the zip
  • Creates a file (/tmp/password.txt) to supply passwords to the asadmin command. We make sure the file is readable only by the user running asadmin and remove the file when we are done for security purposes. The file has two lines that look like:

     AS_ADMIN_PASSWORD=
     AS_ADMIN_NEWPASSWORD=newadminpassword

  • Uses the asadmin change-admin-password command to set the admin password on the domain. Now the domain can be started.

For more information about security changes in GlassFish 3.1.2 see Tim Quinn's blog entry, and as always the GlassFish Security Guide is recommended reading.

Sunday Feb 27, 2011

GlassFish 3.1: Clusters Without SSH

GlassFish 3.1 has the ability to use SSH to centrally administer some basic instance lifecycle operations (such as creating and starting an instance). But in some cases you may not want to deal with SSH, and would rather just create and start those instances directly yourself. GlassFish 3.1 supports that too. This blog describes how to set up a two instance cluster across three machines without using SSH.

As part of this procedure you will be enabling secure admin on the GlassFish Domain Admin Server. You can read more about secure admin at Tim's blog.

1) Install GlassFish 3.1

First download and install GlassFish 3.1 on three machines. One machine will run the Domain Admin Server (DAS) and the other two machines will each run one GlassFish instance. Let's call these systems dashost, host1 and host2.

Some things to keep in mind:

  1. Install JDK 6 Update 24 or later on each system. You absolutely will have problems if you run GlassFish with anything earlier than update 22.
  2. If you run the installer make sure to choose this JDK to use. If you use the zip distribution of GlassFish you can configure GlassFish to use a particular JDK by adding a line like this to glassfish3/glassfish/config/asenv.conf (asenv.bat for windows):  AS_JAVA="/opt/jdk/jdk1.6.0_24" (make sure to specify the correct path for your system).

2) Start the DAS, enable secure admin and create the cluster

On the system dashost start the domain and verify your JDK version:

dashost$ asadmin start-domain
dashost$ asadmin version --verbose
Version = GlassFish Server Open Source Edition 3.1 (build 43), JRE version 1.6.0_24
Command version executed successfully.

Verify that the JRE version is 1.6.0_24 or newer. If it isn't, make sure asenv.conf has AS_JAVA set to the correct path and restart the domain.

Next, enable secure admin and verify it is working:

dashost$ asadmin enable-secure-admin
dashost$ asadmin stop-domain
dashost$ asadmin start-domain 
dashost$ asadmin version --verbose
. . .
0070: 38 67 51 0D F8 AF 3E 6F   C7 58 02 DB 3B 70 22 66  8gQ...>o.X..;p"f

Do you trust the above certificate [y|N] --> y
Version = GlassFish Server Open Source Edition 3.1 (build 43), JRE version 1.6.0_24
Command version executed successfully.

Note that you are prompted to accept a certificate. This happens the first time you connect to the DAS from a host after enabling secure admin. Answer 'y' and you should see the version information reported.

If you get an error about the server not accepting a secure connection, or if the command hangs then odds are you are not running with JDK 1.6 update 24 or newer.

Next, create the cluster that the instances will be in:

dashost$ asadmin create-cluster cluster1

3) Create instances on the remote systems

Now log into your remote systems and create the instances. We will once again verify the JDK version first. It should look something like this:

host1$ asadmin version --local --verbose
Using locally retrieved version string from version class.
Version = GlassFish Server Open Source Edition 3.1-b43 (build 43)
asadmin Java Runtime Environment version: 1.6.0_24
Command version executed successfully.

The --local option says to report the version of the JDK from the local GlassFish installation that asadmin is running out of and not the version information from the DAS. Now run the version command one more time against the DAS to ensure secure admin is working when we connect from host1:

host1$ asadmin --host dashost version --verbose
. . .
0070: 38 67 51 0D F8 AF 3E 6F   C7 58 02 DB 3B 70 22 66  8gQ...>o.X..;p"f

Do you trust the above certificate [y|N] --> y
Version = GlassFish Server Open Source Edition 3.1 (build 43), JRE version 1.6.0_24
Command version executed successfully.

Once again you may need to accept the certificate if you are connecting from the host for the first time. Now we're ready to create and start the instance:

host1$ asadmin --host dashost create-local-instance --cluster cluster1 instance1
. . .
Command create-local-instance executed successfully.
host1$ asadmin start-local-instance instance1
. . .
Admin Port: 24848
Command start-local-instance executed successfully.

Now repeat this on host2, and your cluster is up and running.

4) Inspect your setup

Now that the cluster is up and running you can take a look at a few things:

dashost$ asadmin list-clusters
cluster1 running
dashost$ asadmin list-instances --long
NAME       HOST   PORT   PID    CLUSTER   STATE         
instance1  host1  24848  17744  cluster1   running
instance2  host2  24848  21768  cluster1   running      
Command list-instances executed successfully.
dashost$ asadmin list-nodes --long
localhost-domain1   CONFIG   localhost     /export/glassfish3
host1               CONFIG   host1         /export/glassfish3    instance1
host2               CONFIG   host2         /export/glassfish3    instance2

Note that in addition to the instances being created, a node was created for each remote GlassFish installation where create-local-instance was run from. This is a feature of create-local-instance -- it will auto-create a node for you when the first instance on the system is created.

If you'd rather name the nodes yourself (instead of having create-local-instance use the host name as the node name) you can run create-node-config nodename to create an empty node, and then pass this node to create-local-instance using the --node option. Create-local-instance will then populate the node with the correct information.

The third node listed, localhost-domain1, is a built-in node that is used when creating instances local to the DAS (on the same host and using the same GlassFish installation as the DAS).

5) Starting the Instances

Since the instances are remote from the DAS, and SSH is not being used, the DAS can't start the instances. That means the commands start-instance and start-cluster will not work on these instances. Instead you must always use start-local-instance directly on the instance system to start the instance.

A second option is to install the instance as a native OS service, and use the OS service framework to start and stop the instance. See the asadmin create-service command. For more information see Byron's blog on OS services and GlassFish.

Tuesday Nov 23, 2010

GlassFish 3.1: Using SSH to Manage Instance Lifecycles

GlassFish 3.1 introduces the use of SSH to manage the lifecycle of remote instances. This document gives an overview of how to use SSH with GlassFish and describes some alternatives if SSH is not practical in your environment.

1) SSH use in GlassFish 3.1

The Domain Admin Server (DAS) for GlassFish must be able to communicate with remote instances to perform various administrative operations. For most operations this communication occurs directly between the DAS and the running instance. For example when you deploy an application to an instance the DAS connects to the instance and deploys the application to it. But what about starting a remote instance? Or creating it in the first place? In these cases there is no running instance for the DAS to talk to.

In v2 these operations were handled by another process that had to be installed and running on the remote system -- the GlassFish node agent. To start a remote instance the DAS contacted the node agent, and the node agent started the instance.

In 3.1 there is no node agent. Instead the DAS uses SSH to perform certain operations on the remote system. The use of SSH is optional, and it is only needed for specific commands when those commands are used to manage remote instances (instances on systems other than the one the DAS is running on -- instances local to the DAS are supported without SSH). This table lists the commands that have a dependency on SSH when the instance is remote, and lists a command that can be used as an alternative when SSH is not available. The alternate commands are local commands that must be run directly on the system where the instance resides.

Command          Alternate Local Command
create-instance  create-local-instance
delete-instance  delete-local-instance
start-instance   start-local-instance. Or use the native service support in GlassFish (create-service).
start-cluster    start-local-instance for each instance. Or use the native service support in GlassFish (create-service).

So that's the first decision you need to make when setting up a GlassFish cluster. Do you want to leverage SSH to manage the instance lifecycle? Or do you want to avoid the dependency and perform those operations directly on the local machines yourself?

The benefit of using SSH is that, once it is set up, it makes your life easier since you can perform the above operations centrally via the DAS. The downside is that it requires some effort to get SSH set up (especially for Windows users).

Note: In GlassFish a "remote" command is a command that talks to the DAS, and the DAS performs the operation. A "local" command is a command where the asadmin client performs the operation directly (and therefore the command must be run from the GlassFish installation you want to operate on). Some GlassFish "local" commands, like create-local-instance, are actually hybrids, since they need to contact the DAS to inform it of the new instance. As you might expect, the remote commands listed in the table above use SSH to execute the local version of the command directly on the remote instance.

2) What Is SSH?

If you are a Unix user you are likely already familiar with SSH, but Windows users may not be.

SSH consists of two parts: a client and a server.

The most common client is the ssh(1) command. This lets you securely log into a remote system and gives you a remote shell. Other client commands are scp and sftp which let you perform file transfers. The server is sshd or secshd and it provides the SSH service.

The GlassFish DAS uses its own Java based SSH client for communicating with the instance systems (which must be running the SSH service). But the native SSH client is still needed for things like key generation and helping to verify connectivity. In a typical GlassFish deployment the DAS acts as the SSH client, while the instance systems must be running the SSH service.

3) Getting SSH

Almost all Unix-based systems include SSH support. On the instance systems you need to make sure the SSH service is running (on the Mac you'll need to turn on Remote Login in the Sharing System Preferences).

On Windows you have more work to do. You'll need to choose an SSH provider and install it on all instance systems and the DAS. GlassFish has been tested with Cygwin and MKS:

Product                     Version Tested   Notes
Cygwin                      1.7.6            Installing Windows Cygwin sshd
MKS Toolkit for Developers

Once you install your SSH provider, make sure that the provider's "bin" directory is in your (Windows) PATH so that you have easy access to the SSH client commands (ssh, sftp, ssh-keygen, etc).

Once you have SSH installed and the SSH service running, verify you can login from the DAS host to the instance systems using the ssh(1) client: ssh username@instance.hostname.com. It should prompt for your password (unless you already have key authentication set up). If you can't ssh into the remote system then you'll have to sort that out before using SSH with GlassFish.

Note: Yamini K B has a nice blog entry (Using GlassFish v3.1 SSH Provisioning Commands) that does a good job summarizing what needs to be done to get SSH working on different operating systems.

4) Mixing Different OS Types

The DAS and all GlassFish instances in a cluster must be running on the same OS type and version. Running a cluster of mixed OS types is not supported. In practice mixing different flavors of Unix usually works OK (but is not officially supported), but mixing Windows and Unix is asking for trouble.

5) SSH Authentication

When the SSH client connects to the SSH service it needs to authenticate the user. The most basic form of authentication is password authentication. SSH also supports other forms of authentication; in particular it supports public key authentication.

GlassFish 3.1 supports 3 forms of SSH authentication. They each have trade-offs:

  Username / Password
    Pro: Easy to configure in SSH.
    Con: A bit more work to configure GlassFish to use this in a secure fashion.

  Public Key (unencrypted)
    Pro: Easy to configure GlassFish to use.
    Con: Can be trickier to get configured correctly in SSH.

  Public Key (encrypted)
    Pro: More secure than an unencrypted public key.
    Con: More work to configure both SSH and GlassFish to use this.

To use Username/Password securely with GlassFish you will need to use a GlassFish feature called "password aliases" so that your SSH password is not stored in the clear. This isn't too bad, but does require a little extra fiddling to create and use the password alias.
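The alias workflow looks roughly like this. A sketch only: the alias name ssh-pw-alias, password file name, host, and user below are all hypothetical, and you should check the GlassFish 3.1 SSH docs for the exact password file key to use:

```shell
# Create a password alias; asadmin prompts you for the SSH password
# so it is never stored in the clear (alias name is illustrative):
asadmin create-password-alias ssh-pw-alias

# Reference the alias from a password file rather than embedding the
# password itself. The file (sshpasswords.txt here) would contain a
# line of the form:
#   AS_ADMIN_SSHPASSWORD=${ALIAS=ssh-pw-alias}

# Then pass the file when creating the SSH node:
asadmin --passwordfile sshpasswords.txt create-node-ssh \
    --nodehost adc2101159 --sshuser dipol node1
```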

The unencrypted public key approach requires that you generate an SSH key pair and then copy the public key to the remote systems. GlassFish provides an asadmin command, setup-ssh, to help you do this. Since there is no password you don't need to mess with password aliases, and the DAS looks in the default places to locate the key to authenticate with -- so you often don't even need to provide that info. But folks sometimes run into trouble getting key authentication working with SSH. It usually works great, but if it doesn't it can be tricky to figure out why.

In the encrypted key file approach you encrypt your private key file with a passphrase that must be provided to use the private key. This means you must configure GlassFish to know about the passphrase -- which means using password aliases again. So this method combines the complexity of the first two approaches. You probably won't choose this approach unless it is the preferred SSH authentication scheme at your site.

If you already have SSH deployed at your site you may have the preferred authentication scheme already defined for you.

At this point you should decide which form of authentication you want to use.

6) What user?

One basic decision you will need to make is which SSH user you want GlassFish to use when it connects to remote systems. Some considerations:

  1. For key authentication the user that the DAS process is running as must be able to read the SSH user's private key file (typically in ~/.ssh).
  2. Remote instances will be started by, and therefore run as, the SSH user.
  3. By default the DAS will assume the SSH user is the same as the user running the DAS process.
So it ends up being easiest if the SSH user is the same as the user that starts the DAS. This doesn't have to be the case -- it just makes things simpler.

7) Setting Up SSH Key Authentication

GlassFish provides a command, setup-ssh, that sets up SSH key authentication for you. You run this command directly on the DAS system. Here is an example run. In this example adc2101159 is the name of the remote host you will be running an instance on, and we assume the user running the command (dipol) is the same user that will start the DAS and the same user that we want GlassFish to use when connecting to remote systems over SSH:

  $ asadmin setup-ssh adc2101159
  SSH key not found for user dipol
  Would you like to generate an SSH key pair (without a key passphrase) for dipol to access [adc2101159]? [y/n]: y
  Enter SSH password for dipol@adc2101159
  Created directory /export/home/dipol/.ssh
  /usr/bin/ssh-keygen successfully generated the identification /export/home/dipol/.ssh/id_rsa
  Copied keyfile /export/home/dipol/.ssh/id_rsa.pub to dipol@adc2101159
  Successfully connected to dipol@adc2101159 using keyfile /export/home/dipol/.ssh/id_rsa
  Command setup-ssh executed successfully.

As you can see setup-ssh does a couple things:

  1. It checks to see if you have an SSH key already generated. If you don't it gives you the option to generate one.
  2. After generating the key it copies it to the remote host. To do this it needs your SSH password to make the connection to copy the key.
  3. It then connects to the remote system to verify that key authentication is working

After key authentication is set up you can re-run setup-ssh any time and it will detect the key and verify the connection with it:

  $ asadmin setup-ssh adc2101159
  Successfully connected to dipol@adc2101159 using keyfile /export/home/dipol/.ssh/id_rsa
  SSH public key authentication is already configured for dipol@adc2101159
  Command setup-ssh executed successfully.

If setup-ssh fails you need to decide if you want to debug why key authentication is failing, or give up and use password authentication. A couple things to check if key authentication fails:

  1. Make sure that the file system permissions on the remote system are not too permissive. The SSH user's home directory, ~/.ssh directory, and authorized_keys file must be writable only by the SSH user. If they are writable by others then the SSH service may not trust the authorized_keys file and may reject public key authentication.
  2. On MKS we have run into a case where fixing #1 required:
    1. Copying the .ssh\\id_rsa key file over to the remote system manually and importing it with the MKS configuration GUI (Passwordless button under Secure Service tab) or
    2. Using the MKS configuration GUI to turn off "enforce strict mode".
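For item #1, a small check you can run on a Unix remote system (a sketch: assumes GNU stat with the -c option, and the function name is illustrative):

```shell
# Warn about SSH-related files that are writable by group or other.
# Assumes GNU stat ("stat -c %a" prints the octal permission bits).
check_ssh_perms() {
    for f in "$@"; do
        mode=$(stat -c '%a' "$f") || continue
        # 022 (octal) masks the group and other write bits
        if [ $(( 0$mode & 022 )) -ne 0 ]; then
            echo "WARNING: $f is writable by group/other (mode $mode)"
        else
            echo "OK: $f (mode $mode)"
        fi
    done
}

# Typical check on the remote instance system:
# check_ssh_perms "$HOME" "$HOME/.ssh" "$HOME/.ssh/authorized_keys"
```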

8) Installing GlassFish

GlassFish must be installed on each system that will host an instance. You have two choices for how to do this:

  1. Install manually on each system
  2. Install manually on the DAS system, then use the install-node command to create a GlassFish install image based on the DAS install and then install that image on the remote instance systems.

Option 1 is best if you want complete control over the installation and install time settings. Option 2 works well if all of your systems are set up the same and you want to duplicate the DAS GlassFish installation.

Here is an example of an install-node run. This is run on the DAS system:

  $ asadmin install-node --installdir /export/tmp/glassfish3 adc2101159
  Created installation zip /export2/glassfish3/glassfish.zip
  Copying /export2/glassfish3/glassfish.zip (83527743 bytes) to adc2101159:/export/tmp/glassfish3 
  Installing /export2/glassfish3/glassfish.zip into adc2101159:/export/tmp/glassfish3
  Removing adc2101159:/export/tmp/glassfish3/glassfish.zip
  Fixing file permissions of all files under adc2101159:/export/tmp/glassfish3/bin
  Command install-node executed successfully.

Note: Currently install-node works best with SSH key authentication.

9) Nodes and Instances

In GlassFish 3.1 a Node is a configuration object in the DAS that represents a GlassFish installation on a host. There are two types of nodes:

  1. SSH nodes
  2. CONFIG nodes

Both node types contain some basic information about the node like:

  • The hostname of the remote host
  • The location of the GlassFish installation on the remote host

An SSH node also contains information needed to connect to the host via SSH such as the SSH username to use, the SSH port number, etc.

The DAS ships with one node already created named "localhost-<domainname>". This is a CONFIG node that can be used when creating instances local to the DAS. See the next section for more info on creating instances local to the DAS. CONFIG nodes are also used with remote instances when SSH is not going to be used. See section 11 for more info on that. But if you want to manage a remote instance via SSH then you need to first create an SSH node to represent a GlassFish installation on the remote system.

You use the create-node-ssh command to create SSH nodes. If you have SSH key authentication already set up (section 7) and the DAS is running as the SSH user (section 6), then things are really easy:

  $ asadmin create-node-ssh --nodehost adc2101159 --installdir /export/glassfish3 node1
  Command create-node-ssh executed successfully.

You can use list-nodes to see what nodes are defined:

  $ asadmin list-nodes --long
  localhost   CONFIG  localhost   /export2/glassfish3   
  node1       SSH     adc2101159  /export/glassfish3
  Command list-nodes executed successfully.

If you are using SSH password authentication then you need to do a little more work to securely set the SSH password on the SSH node. 3.1 SSH Authentication has some more details on running create-node-ssh with other SSH authentication schemes.

create-node-ssh will validate the parameters you pass it. For example it makes sure it can connect to the remote host using SSH, and it verifies that the installdir you pass it is a GlassFish install. Therefore you should have SSH set up and GlassFish installed on the remote instances before you start creating SSH nodes.

Once you have the SSH node created, you can then create an instance on the node:

  $ asadmin create-instance --node node1 instance1
  Command _create-instance-filesystem executed successfully.
  Port Assignments for server instance instance1:
  The instance, instance1, was created on host adc2101159
  Command create-instance executed successfully.

And start it:

  $ ./asadmin start-instance instance1
  Waiting for instance1 to start ......
  Successfully started the instance: instance1
  instance Location: /export/glassfish3/glassfish/nodes/node1/instance1
  Log File: /export/glassfish3/glassfish/nodes/node1/instance1/logs/server.log
  Admin Port: 24848
  Command start-local-instance executed successfully.
  The instance, instance1, was started on host adc2101159
  Command start-instance executed successfully.

If at any time you want to test the connection to an SSH node you can do it using the asadmin ping-node-ssh command:

  $ ./asadmin ping-node-ssh node1
  Successfully made SSH connection to node node1 (adc2101159)
  Command ping-node-ssh executed successfully.

10) Local Instances

To create instances on the same system as the DAS (typically using the same GlassFish installation as the DAS) you do not need SSH and you don't need to worry about nodes at all. Run this command from the same installation as the DAS:

  $ ./asadmin create-local-instance localinstance1
  Port Assignments for server instance localinstance1:
  The instance, localinstance1, was created on host localhost
  Command create-local-instance executed successfully.

This creates an instance local to the DAS using the built-in localhost node.

11) Doing Remote Instances Without SSH 

If you do not want to use SSH for managing remote instances you can still create and manage them using the local version of asadmin commands as mentioned in section 1. Also, due to an "auto-node creation" feature you don't have to explicitly create the node beforehand. For example if you have this setup:

  • GlassFish installation on two systems: systemA and systemB
  • DAS running on systemA

Then you can create an instance on systemB by doing the following:

Log into systemB and run the following using the GlassFish installation on systemB:

  $ asadmin --host systemA create-local-instance myinstance1
  Rendezvoused with DAS on systemA:4848.
  Port Assignments for server instance myinstance1:
  Command create-local-instance executed successfully.

Note: Since create-local-instance connects to the DAS from a remote system, the DAS must be running with secure admin enabled. See the asadmin enable-secure-admin command -- you will need to restart the DAS after enabling secure admin.

This command does the following:

  1. Creates a CONFIG node in the DAS configuration for systemB if such a node does not already exist. The node name is the hostname of systemB.
  2. Creates a server instance object in the DAS configuration for instance myinstance1
  3. Creates the instance files in the filesystem on systemB for instance myinstance1

You can then start the instance while still logged into systemB:

  $ asadmin start-local-instance myinstance1
  Waiting for myinstance1 to start ......
  Successfully started the instance: myinstance1
  instance Location: /export/glassfish3/glassfish/nodes/adc2101159/myinstance1
  Log File: /export/glassfish3/glassfish/nodes/adc2101159/myinstance1/logs/server.log
  Admin Port: 24849
  Command start-local-instance executed successfully. 

As mentioned in section 1 if you are managing remote instances without SSH then you will not be able to use start-cluster to start instances in a cluster. You will need to use start-local-instance on each instance or use GlassFish's native service support to start the instances.

Wednesday Jan 06, 2010

Windows TCP Receive Window Auto-Tuning

During the final testing of GlassFish Enterprise Server v3 the GlassFish QA team reported this Update Center bug:

11185 Installer hangs at 41% intermittently

We knew that at this phase of the install the update center was bootstrapping (downloading and installing) itself from the network repository into the installation image. After some initial analysis we determined a few more things: it occurred only on Windows Vista and Windows 7, it was a hard hang, it was intermittent, and we could only reproduce it inside SWAN (Sun's internal network). Further analysis showed that the client was blocked on a read from an HTTP connection to the repository, while the repository thought everything was fine (and in fact showed no open connection from the client at all!?). Tom, using Wireshark, observed the TCP window size fluctuating significantly before the hang.

We were stumped -- so we Googled it. And the culprit was: TCP Receive Window Auto-Tuning

This feature is enabled by default on Vista, and attempts to automatically tune some TCP parameters including the Receive Window size to improve network performance. Unfortunately this can interact poorly with some pieces of network gear or firewalls leading to slowness or hangs. In our case disabling Auto-Tuning cleared up the problem.

For more details including how to disable auto-tuning check out the Microsoft blog posting on the subject or this article from www.vistax64.com.

Wednesday May 13, 2009

Compiling with wxWidgets on OpenSolaris

wxWidgets has been integrated into OpenSolaris. Here are a couple tips for building against these libraries that worked for me (I am running on OpenSolaris 2008.11).

  1. Install the wxWidgets package:

    $ pfexec pkg install SUNWwxwidgets
    $ wx-config --version
  2. Install the Sun Studio Compiler:

    $ pfexec pkg install sunstudioexpress@0.2008.11
    Note that the 2009.3 version of the compiler is also available in the repository, but I had better luck with the 2008.11 version.
  3. Build your application:

          $ /opt/SunStudioExpress/bin/CC `wx-config --cxxflags --libs` yourapp.cpp -o yourapp

Friday Jan 30, 2009

Screencast Tips

I recently produced a screencast for Update Center 2, and I've had some folks ask me how I did it. There are a zillion tools and techniques for producing screencasts. This blog describes one approach. I in no way claim it's the best approach -- and in fact I'm certain it is not. But it worked OK for me.

I did this on a Mac laptop running Mac OS 10.5. Windows users will likely need to look elsewhere for guidance.

I've broken down the job of creating a screencast into four steps:

  1. Recording raw content
  2. Editing content into a video
  3. Converting video into final screencast format
  4. Publishing screencast

Recording Raw Content

To record the raw content off of the screen I used iShowU HD from shinywhitebox. Benefits of this software:

  • It's inexpensive: $30
  • It's easy to use
  • It seems to do enough for basic screencasts

The main drawback is it's pretty basic. I imagine there are more sophisticated solutions out there. For example I needed to use some additional software to convert my Quicktime video into Flash Video for publishing to the web.

I really did not evaluate any other solution. You may want to before blindly following me.

Here are the iShowU HD settings I used for screen recording:

Output Video Size: 720x540, 4:3
Output Video Codec: Apple Animation, High Quality
Output Video Framerate: 4 fps
Output Audio: AAC, 44.1 kHz, 2 channel
Capture Area: Same as video size (720x540)
Mouse Mode: Fixed / Stretch


  1. To set these I first selected "iMovie '08 Standard (4:3) Large", then changed the framerate
  2. 720x540 seemed to be a reasonable compromise. Large enough to be legible but not giant
  3. Throughout this process I made sure to always keep my video size the same so I never scaled the original content
  4. Note the low framerate. If your software uses snazzy animations you may want to bump this up a bit.
  5.  Audio quality is better than needed, but I down-converted it later
  6. I set my capture area to be the upper left corner of the screen. So the area of the screen being recorded is a 720x540 box in the upper left of the screen -- except when the mouse moves out of that box. See next item.
  7. The mouse mode means the capture area is fixed, but if the mouse leaves the capture area the video zooms out to keep the mouse in frame. More on this later.

General Recording Tips:

  1. Practice a few times before recording.
  2. I found the built-in mic on the Mac to be of pretty good quality, so I just used that. If you have a high quality external mic you may want to give it a try.
  3. Make sure you are in a quiet area where you won't be interrupted.
  4. Be aware of the location of your mouse. When it's "in frame" you will have a tight shot of the upper left corner of the desktop. If you move out of frame the video will zoom out to keep the mouse in frame. You don't want to oscillate back and forth. Don't wiggle the mouse nervously (talking with your hands!)
  5. Keep the mouse pointer near the upper left when you want to show detail. To show your entire app move the mouse to the lower right corner of your app. To show the entire desktop move your mouse to the lower right corner of the desktop.
  6. If you are using a terminal increase the font size until it just fits in 720x540. This will be legible.
  7. iShowU HD will preview what it will record when it is up. Use that to get a feel for the framing and zooming.
  8. I put post-its on the edge of my screen to mark the 720x540 boundary and remind me what was in frame.

Once you are ready to record, click Rec and go! Do your best to get it done in a single take. You want to minimize the amount of editing required. Consider breaking down your screencast to 5 minute clips (I violated this with my 15 minute take).

When you're done recording you will have a mov file in Movies/iShowU HD/.

Editing Content

I used iMovie '08 to assemble the content into a video. Import your video using File->Import Movies. I performed very minor edits -- basically splicing in an intro and conclusion that I had to "re-shoot". iMovie '08 is pretty weak -- rumor has it '09 is a significant improvement.

I then exported the video using "Share->Export Quicktime" with the following settings (click the "Options..." button in the export dialog)

Export Movie to QuickTime Movie
Video Settings
Compression Type H.264
Frame Rate 4
Key Frames Automatic
Compressor Quality High
Encoding Best quality
Data Rate Automatic
Audio Settings
Format Linear PCM
Channels Mono
Rate 22.050 kHz
Quality Normal
Sample Size 1

After this you will have a new .mov file with your screencast.

Convert Video to Final Format

The problem with video formats is that it is still, after all these years, difficult to find one format that plays universally on all platforms. To get around this I converted to Flash Video format. To do this conversion I used ffmpegX to convert my QuickTime movie to Flash Video (.flv).

Here are the ffmpegX settings I used:

Video Codec: Flash Video (.FLV)
Size: 720x540
Auto Size: 4:3
Frame Rate: 4 fps
Audio Codec: MP3
Bitrate: 64 kbit/s
Sampling: 22050 Hz
Channels: Mono
High Quality: Two-pass encoding

Publishing Screencast

So, now you have a .flv file -- what the heck do you do with it?  Well, first you want to preview it, then you want to publish it to the world.

To preview it, place the file someplace accessible via an http URL, then put the following code in an HTML page:

<object type="application/x-shockwave-flash" width="720" height="556" wmode="transparent">
    <param name="movie"
        value="http://yourserver.sun.com/path/to/video/screencast.flv&autoStart=false" />
    <param name="wmode" value="transparent" />
</object>
This uses the Flash FLV player from mediacast.sun.com to play the video in your web page.

For final publication Sun employees can upload the flv file to http://mediacast.sun.com. Once you do that you can click the "Share" area on the video to see various options for sharing.

Update Center 2 Overview Screencast

I've just published an introductory screencast on Update Center 2. If you're interested in what Update Center 2 is, then check it out:

For further information see our project wiki at http://wiki.updatecenter.java.net

Thursday Nov 06, 2008

1680x1050 video resolution on XVR-100

I recently got a new Sun 22" Widescreen LCD monitor to use on my trusty desktop: a SunBlade 1500 with an XVR-100 graphics card running Solaris 10u4.

Initially I could not get the system to run at the 1680x1050 native resolution of the monitor due to Solaris bug 6362624. Apparently this is fixed in patch 118717 although I ended up using the work-around from the bug report (add an entry to /etc/openwin/server/etc/OWconfig and run fbconfig). This worked great and I'm now running at 1680x1050.

Friday Aug 15, 2008

Dynamic Libraries, RPATH, and Mac OS

Note: This posting was originally written based on Mac OS 10.4. See the Comments section for some updates since then.

One of my responsibilities on the Update Center 2.0 project is to perform builds of Python and wxPython for all of our supported platforms. One unique aspect of our environment is that we need these builds to be relocatable as far as the filesystem is concerned in order to support our multi-install requirement. That is: plop the build anywhere on your system and it must work.

A key to making relocatable software work is relative paths. All references to files within the relocatable install image must be relative (or at least start out relative and only made absolute dynamically at runtime). This includes any dependencies that binaries in the install image might have on dynamic libraries in the install image.

You can always use LD_LIBRARY_PATH (or DYLD_LIBRARY_PATH on the Mac) to force the runtime linker to locate the right dynamic libraries -- but that's cheating, and should only be used as a hack of last resort. Better to construct the binaries correctly so that they can locate their dependencies without the need to alter the user's environment.

I'm pretty familiar with dynamic libraries on Solaris and Linux. On those platforms you can embed an RPATH into a binary that is searched by the runtime linker to locate libraries. Plus both Solaris and Linux support the $ORIGIN token so you can make these paths relative to the install location of the binary.

On Solaris you can see these settings by using dump -Lv:

$ dump -Lv wx/_core_.so


[INDEX] Tag         Value
[1]     NEEDED          libCrun.so.1
[2]     NEEDED          libwx_GTK2u_richtext-2.8.so.0.4.0
[3]     NEEDED          libwx_GTK2u_aui-2.8.so.0.4.0
[4]     NEEDED          libwx_GTK2u_xrc-2.8.so.0.4.0
[5]     NEEDED          libwx_GTK2u_qa-2.8.so.0.4.0
[6]     NEEDED          libwx_GTK2u_html-2.8.so.0.4.0
[7]     NEEDED          libwx_GTK2u_adv-2.8.so.0.4.0
[8]     NEEDED          libwx_GTK2u_core-2.8.so.0.4.0
[9]     NEEDED          libwx_baseu_xml-2.8.so.0.4.0
[10]    NEEDED          libwx_baseu_net-2.8.so.0.4.0
[11]    NEEDED          libwx_baseu-2.8.so.0.4.0
[12]    INIT            0xc8ef4
[13]    FINI            0xc9000
[14]    RUNPATH         $ORIGIN/../wxWidgets/lib
[15]    RPATH           $ORIGIN/../wxWidgets/lib
[16]    HASH            0x94

Here the _core_.so binary has dependencies on a number of wx libraries. RPATH is set to look for libraries relative to the install location of _core_.so, and therefore the linker can find these libraries without needing to set LD_LIBRARY_PATH. You specify the value for RPATH at link time using the -R option.

Linux is similar. In this case you use objdump -p to inspect the binaries:

$ objdump -p wx/_core_.so

_core_.so:     file format elf32-i386
. . .

Dynamic Section:
  NEEDED      libwx_gtk2u_richtext-2.8.so.0
  NEEDED      libwx_gtk2u_aui-2.8.so.0
  NEEDED      libwx_gtk2u_xrc-2.8.so.0
  NEEDED      libwx_gtk2u_qa-2.8.so.0
  NEEDED      libwx_gtk2u_html-2.8.so.0
  NEEDED      libwx_gtk2u_adv-2.8.so.0
  NEEDED      libwx_gtk2u_core-2.8.so.0
  NEEDED      libwx_baseu_xml-2.8.so.0
  NEEDED      libwx_baseu_net-2.8.so.0
  NEEDED      libwx_baseu-2.8.so.0
  NEEDED      libstdc++.so.6
  NEEDED      libm.so.6
  NEEDED      libgcc_s.so.1
  NEEDED      libpthread.so.0
  NEEDED      libc.so.6
  RPATH       $ORIGIN/../wxWidgets/lib
  INIT        0x1d384

. . .

But what about the Mac? That's new territory for me. Here is what I learned -- I welcome comments since I likely know just enough to be dangerous.

On the Mac a dynamic library (dylib) has an "install name". The install name is a path baked into the dynamic library that says where to find the library at runtime. When you link against the dylib this path is saved in your binary so that your binary can find the dylib at runtime. Seems a bit backwards to me -- but that's how it works.

You can see the install name of a dylib by using otool. For example:

$ otool -D libwx_macu-

You can also use otool to inspect binaries and list their dependencies:

$ otool -L wx/_core_.so
        /usr/lib/libstdc++.6.dylib (compatibility version 7.0.0, current version 7.4.0)
        /Users/dipol/wxPython/dist-darwin/wx-2.8/wxWidgets/lib/libwx_macu-2.8.0.dylib
                                  (compatibility version 5.0.0, current version 5.0.0)
        . . .

So wx/_core_.so depends on libwx_macu-2.8.0.dylib, which is fine. But notice that evil absolute path. That won't work in our "install anywhere" world. So how do we fix this? What's the Mac OS equivalent of $ORIGIN and RPATH?

Well, there isn't any. But there is something we can do instead. After the build is complete I use the install_name_tool utility to fix up the dylib install names and dependencies in our binaries. Fortunately the Mac also supports a magic token: @loader_path that can be used in a fashion similar to $ORIGIN on Solaris/Linux.

So when building wxPython on the Mac I first do a complete build. This results in the absolute paths being used as mentioned above. Then as part of my "make install" step I fix up these paths to be relative using the install_name_tool utility.

For example, this command changes the install name of libwx_macu- to be relative to the location of the binary using it:

$ install_name_tool -id "@loader_path/../wxWidgets/lib/libwx_macu-"

And this changes the dependency in a binary to use a relative path to locate the library (relative to the install location of the binary).

$ install_name_tool -change "/Users/dipol/wxPython/dist-darwin/wx-2.8/wxWidgets/lib/libwx_macu-2.8.0.dylib"
            "@loader_path/../wxWidgets/lib/libwx_macu-2.8.0.dylib" wx/_core_.so

This mechanism is still not as flexible as the Solaris/Linux approach, since binaries can't specify a search path. It's also a bit more difficult to automate since you must determine the current install name in order to replace it with the new install name. But it does work.
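The rewriting step can be scripted. Here is a dry-run sketch that just prints the install_name_tool command it would run; the old prefix and file names are taken from the example above, and in a real build you would feed it dependency paths parsed from otool -L output:

```shell
# Print (rather than run) the install_name_tool invocation that would
# rewrite a dependency under OLD_LIB_DIR to a @loader_path-relative path.
OLD_LIB_DIR="/Users/dipol/wxPython/dist-darwin/wx-2.8/wxWidgets/lib"
NEW_LIB_DIR='@loader_path/../wxWidgets/lib'

relocate_dep() {
    dep="$1"; binary="$2"
    case "$dep" in
        "$OLD_LIB_DIR"/*)
            # ${dep##*/} strips the directory, leaving the library file name
            echo install_name_tool -change "$dep" "$NEW_LIB_DIR/${dep##*/}" "$binary"
            ;;
    esac
}

relocate_dep "$OLD_LIB_DIR/libwx_macu-2.8.0.dylib" wx/_core_.so
```

Dropping the echo would perform the change for real; dependencies outside OLD_LIB_DIR (system libraries like /usr/lib/libstdc++.6.dylib) are deliberately left untouched.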

Note that for your project you may be able to simplify this. For example, if you build your dynamic libraries with the correct install names first, then your binaries will pick up the correct install names at link time and you shouldn't need to change the dependencies post-build.


Joe Di Pol-Oracle

