Tuesday Jan 20, 2009

Hardware-accelerated remote 3D Windows desktops using VirtualGL and VirtualBox

Sun Shared Visualization software gives users the ability to run 3D OpenGL applications on servers with graphics and take advantage of hardware acceleration. To date, one of the biggest limitations has been that, while we could let multiple users "share" the resources on a Linux or Solaris server, taking advantage of as many cores and GPUs as we could put on a system, on Windows we were stuck with getting acceleration for only one user: the "owner" of the desktop.

Enter the marriage of VirtualGL and VirtualBox. With VirtualBox, of course, one can run several VMs on a system. With the latest VirtualBox, 3D hardware acceleration is enabled. So what if you start a VirtualBox Windows VM remotely on a Linux or Solaris server using VirtualGL from the Shared Visualization software? You get remote access to a Windows desktop that has hardware acceleration for any OpenGL application that you run.

This is what you do:
1. Download the Shared Visualization 1.1.1 software from the Sun Download Center.
2. Install both the VirtualGL and TurboVNC components.
3. Get the latest VirtualBox.
4. Suppose you have a Linux or Solaris x64 server with one or more nVIDIA GPUs, named "3Dserver". Connect to it:
/opt/VirtualGL/bin/vglconnect 3Dserver
5. Launch a TurboVNC server on 3Dserver.
6. Create a Windows VirtualBox VM, remembering to enable 3D hardware acceleration in the General preferences.
Save this as a .vdi file.
7. From ANY remote client, start a TurboVNC viewer connected to the TurboVNC server:
/opt/TurboVNC/bin/vncviewer 3Dserver:N
8. Start the Windows VirtualBox VM using VirtualGL
/opt/VirtualGL/bin/vglrun VirtualBox -startvm {your VM's name or ID}
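The steps above can be collected into a small launch script. This is just a sketch, not part of the product: the paths are the default Shared Visualization install locations, while the server name "3Dserver", the VM name "win3d", the display number, and the assumption that vglconnect forwards a remote command the way ssh does are all placeholders and guesses to adapt to your own setup (you can equally type the vncserver/vglrun commands into the shell that vglconnect opens).

```shell
#!/bin/sh
# Sketch of steps 4-8 above. Server name, VM name, and display number
# are placeholders; paths are the default install locations.

SERVER=${SERVER:-3Dserver}   # Linux/Solaris server with nVIDIA GPU(s)
VM=${VM:-win3d}              # VirtualBox VM with 3D acceleration enabled
DISP=${DISP:-1}              # TurboVNC display number (":N")

start_vm_session() {
    # Connect to the server, start a TurboVNC session there, and launch
    # the Windows VM into it under VirtualGL for GPU-accelerated OpenGL.
    /opt/VirtualGL/bin/vglconnect "$SERVER" \
        "/opt/TurboVNC/bin/vncserver :$DISP && \
         DISPLAY=:$DISP /opt/VirtualGL/bin/vglrun VirtualBox -startvm $VM"
}

view_session() {
    # From any client: attach a TurboVNC viewer to the session.
    /opt/TurboVNC/bin/vncviewer "$SERVER:$DISP"
}

# Guarded so that sourcing this file only defines the functions.
if [ "${RUN_DEMO:-0}" = "1" ]; then
    start_vm_session
    view_session
fi
```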

Et voilà! You have a remote Windows desktop with hardware acceleration for 3D applications, and you can have more than one!

Friday Dec 19, 2008

Announcing the availability of Shared Visualization 1.1.1

We've just released a new version of the Sun Shared Visualization software. It is available for download from the Sun Download Center. Just go to www.sun.com/download and select "Shared Visualization" under the High Performance Computing category.

Sun Shared Visualization software includes VirtualGL and TurboVNC and integrates them with Sun Grid Engine software to provide the capability to run Solaris and Linux OpenGL applications on a central resource with high-performance graphics, while interacting and viewing from a wide variety of clients, including Sun Ray thin clients.
It also provides the ability for remote users to collaborate in viewing and running the 3D application.

Some of the features of this release are:
-- additional platform support:
OS/X 10.5 "Leopard"

-- fixes for application-specific issues
Catia V4 (SPARC)

-- improved performance and decreased bandwidth usage for TurboVNC

-- improved performance of VirtualGL on Solaris platforms

Tuesday Jun 10, 2008

TeraGrid '08, Las Vegas, NV

The annual TeraGrid Users' meeting is in Las Vegas this week. Since SuperComputing was in Reno last November and now the TeraGrid meeting is here, it appears that either Nevada is the hub of the HPC universe, or people who build big computers like to gamble.

There were tutorial sessions yesterday. I attended the Data Management and Visualization sessions. The overall theme seemed to be that the biggest problem associated with using TeraGrid resources is moving large data sets around. The Data Management session in the morning outlined ways to move data to a TG compute resource. "Your mileage may vary" was the big theme. It sounds like you CAN transfer data at 600 MB/sec if you do it just the right way with full knowledge of the file system architecture on both sides, BUT you are more likely to get 1 MB/sec. This sounds like a real opportunity for better software to make this easier and faster. "WAN Lustre" and TeraGrid-aware scp were proposed.

Remote visualization using TG resources was one answer to the data-moving problem: just don't move it, use it where it is. Kelly Gaither from TACC talked about how to use Maverick, which uses Shared Visualization software (VirtualGL, TurboVNC, and Sun Grid Engine) for remote visualization and collaboration, and Joe Insley from Argonne National Laboratory talked about using remote ParaView on their graphics cluster. One thing I took away from this session was that parallel visualization applications (like ParaView) are really hard for the casual scientific user to use. Also, the mechanics of using the visualization applications remotely seemed complex. At least for TACC, we need to do a better job of educating users on how the improvements in Shared Viz 1.1 make this easier. Paul Navratil talked about using the Ranger cluster for visualization. They have users who use ParaView with software rendering (Mesa). Performance should improve greatly when the hardware-accelerated graphics cluster is attached to Ranger!

This morning's keynote address was by Daniel Reed, currently at Microsoft. It was a thought-provoking talk. The main themes: how to deal with the data tsunami; available systems shape the research agenda (a corollary of the Sapir-Whorf hypothesis that language influences the habitual thought of its speakers); bulk computing is almost free, but software and power are not; moving data is still hard; people are incredibly expensive; and robust software remains extremely labor-intensive.
Faster processors enable more new software features which result in slower programs which create demand for even faster processors.
Dr. Reed highlighted the impact that multi-core processors are already having, and the problems they will cause in creating software that can take advantage of them to gain performance.
MPI still dominates, but the level of abstraction needs to be raised. He asserted that purpose of the national investment in computer hardware should be to enable science and not to turn researchers into computer scientists.
The other wave of the future that he focused on is "cloud computing". Microsoft's "cloud" is growing at an exponential rate. Their new data center in Chicago is just under the 200 MW power envelope. He said the physical plant and power are the real cost; hardware is almost free. Currently, funding agencies pay for hardware acquisition, while the institution pays for physical plant and power. This means that the funding model needs to change.

Monday Apr 14, 2008

CORRECTION - the right URL for Shared Visualization 1.1 download

It has come to my attention that the URL in my previous post doesn't actually get you to the place to download the new Shared Visualization 1.1.

It is REALLY at http://www.sun.com/download/products.xml?id=465f0f01

OR go to www.sun.com/download and select "Shared Visualization" under the High Performance Computing category.

Friday Apr 11, 2008

SSV Namespace Collision

Having products named Sun Shared Visualization and Sun Scalable Visualization evidently can lead to some confusion for those who like to refer to products by three letter acronyms.

Sun Scalable Visualization solutions can be either one system with multiple GPUs (like an x4600 with two nVIDIA Quadro Plex VCS Model IV units, which provide 4 GPUs) or a cluster of systems, each with multiple GPUs.

Sun Scalable Visualization software addresses the "running the application across multiple GPUs in possibly multiple boxes" issue. It does this in a number of different ways:
1. workstation application runs with no modification using Chromium to display across multiple screens.
2. workstation application might be modified using Chromium APIs to run faster by splitting data up among different render nodes.

Scalable Viz also includes the applications ParaView and an OpenSceneGraph viewer, which take advantage of multiple GPUs. As with Chromium, a Scalable Viz solution is sold all set up with configuration files and scripts for the specific solution, so that it will run these applications out of the box.

Sun Shared Visualization software is used to separate the display client from the application and render server.


Baseline - I run a Computational Fluid Dynamics application (henceforth known as CFD app) on my Linux workstation with one nVIDIA Quadro FX5600 and 8 GB of memory.

Shared Viz - I run CFD app from my Apple Macbook Pro using VirtualGL and use my Linux workstation as an application server. I might be in the office, in a conference room, at home, or at Starbucks.

Scalable Viz - I run CFD app on my Scalable Viz solution (x4600 with 128GB of memory and two nVIDIA QuadroPlex M4's). I can run MUCH bigger models with much higher resolution (up to eight displays). Application and render server are the same machine.

Scalable Viz (separate application and render server)- OR, I run CFD app on my Scalable Viz solution which is a Chromium cluster of x4600 server and four Ultra24's each with one Quadro FX5600.

Shared plus Scalable Viz - from my Macbook Pro, I can run a large model (much bigger than my Linux workstation can) on my Scalable Viz solution.
The engineer in the next office can run from his Windows PC to access the same Scalable Viz server. SGE will make him wait if I am using up all the resources at that time, or let him go ahead if resources are available.
The engineer in the next building can also run CFD app on his Sun Ray on this same Scalable Viz server using Shared Viz.

None of these users need to know or care whether the application or render server are the same or different machines.

Friday Apr 04, 2008

Solaris support for High Performance Visualization in Scalable Viz 1.1

We've just released the Scalable Visualization 1.1 Solutions with support for Solaris 10 and SUSE 10, as well as Red Hat 4 and 5.
Lots of other exciting updates as well - support for new Sun platforms like the Intel-based quad-core SunFire x4150 and x4450 servers,
and 10GbE interconnect as well as InfiniBand, and support for the latest nVIDIA graphics products including the Quadro Plex Model S4 with 4 GPUs in a 1U box.

The Scalable Visualization Solution is a completely integrated (tested and tuned) hardware and software stack. Pick your favorite high-performance systems and graphics devices, and choose how many of each, and your favorite OS (which should be Solaris, of course). We've just installed one of our first Scalable Viz 1.1 installations at a customer site in the UK - an octagonal display configuration with 10 GbE interconnect between the nodes. This should be a really fun site to visit!

Announcing the release of Shared Visualization 1.1 software

Sun Shared Visualization 1.1 software is, at long last, ready for download on the Sun Download Center.
Some people wear ties everyday or don't cut their hair, until a milestone is met, but I just stopped writing in my blog until I could say that Sun Shared Visualization 1.1 is ready for download at http://www.sun.com/download/products.xml?id=465f0f01

So, why am I so excited about Shared Visualization 1.1? We got a lot of feedback from our original release of Shared Viz that it was really cool that one could now run high performance 3D graphics applications from Sun Rays and laptops, BUT it was a pain to install and configure. The combination of trying to maintain security while achieving performance meant that a LOT of complex configuration had to be done on the server and using it from the client wasn't completely straightforward. Our main goal for Shared Viz 1.1 was to make installation, configuration, and start-up simple and bullet-proof. I think we've got it! We've finally got it!

Shared Viz 1.1 is based on VirtualGL 2.1.

Wednesday Oct 31, 2007

IEEE Viz Days 2 and 3

Day 2 I went to a workshop on Knowledge-Assisted Visualization. So what the heck does that mean? Mostly it meant using additional a priori information to add to the visualization (which seems kind of obvious) but there was one interesting paper about using statistics of the data to remove the expected (as in the statistical sense) information so the visualization only contains the interesting information. The results looked pretty good. It does seem to me that what is really of interest is visualization-assisted knowledge, but that is a different topic.

Day 3 started off with a keynote address by Rick Stevens of Argonne National Labs on "Visualization Challenges at the Intersection of PetaScale Computing and Biological Sciences". It was an amazing - to me, eye-opening - talk. I've been only peripherally observing the amazing advances in exploring genomes. But I had no idea of the complexity of understanding the masses of data that have been uncovered. It is interesting that so much computer power has been applied to getting this information, but not so much to helping to understand it. Stevens showed a great animation that Harvard put together for their biology students ("The Inner Life of the Cell") that was truly inspiring. I googled it and it can be found at http://www.studiodaily.com/main/searchlist/6850.html.

In the afternoon I went to another "Meet the Scientists" panel. The theme was too much data, inadequate visualization tools, help, help, help. It is really clear that more needs to be invested in visualization infrastructure to keep up with the massive investments in computing that are creating all this data that needs to be analyzed. Scientists want to be able to do interactive visualization to understand and analyze the terabytes (soon petabytes) of data they are continually acquiring. There seems to be a slight mismatch between many in the vis community thinking that the visualization is the end result, and the scientific community needing visualization to be part of the analysis process.

Sunday Oct 28, 2007

IEEE Viz 2007 - Day 1

Today was the first day of this year's IEEE Conference on Visualization in Sacramento. I went to a really interesting workshop sponsored by the NSF on "Enabling Science through Visual Exploration." It was the report out from a workshop organized by Kelly Gaither at TACC and the NSF Office of Cyberinfrastructure with a group of scientists from different domains describing what their challenges were and what they needed to get from visualization (and what they mostly are NOT getting now).
The first speaker was Chris Gilpin from UT Southwestern Medical Center. He's a cell biologist trying to do visualization from electron microscopy data. He has lots of data, no viz software expertise, and only 100 Mbit/sec network access. It was so cool that he was talking about the need for access to an easy-to-use, large-scale visualization resource, and then he talked about using Maverick (our first Shared Viz site, an E25K with 16 high-performance graphics cards) for visualization! It was like opening up a newspaper and reading that your child just won an award (that you didn't even know about).

The theme of these talks was mostly:
1. There is too much data to move around and too much data to visualize on a desktop system.
2. Visualization is really needed to help in the analysis and understanding of data.
3. but scientists are not getting what they need. Existing tools are not easy enough to use and don't provide enough value.

The good news is that there are lots of interesting problems to solve and there is an increasing awareness of the role of visualization in getting from data to understanding.

Friday Oct 12, 2007

Turning off the lights when you leave the room

Or how do you get people to conserve a free resource?

I've set up a graphics server on SWAN that anyone at Sun can use to try out running a 3D application with hardware acceleration from any Sun Ray (or other kind of client) on SWAN. (email me for instructions).

I want to set up a similar system on the public internet for anyone to try, but I have a problem.

The resource, of course, is limited. We use Sun Grid Engine to manage graphics resources and keep too many people from running on a graphics card at one time. On a wide-area network, we use a variant of VNC called TurboVNC: the vncserver session gets assigned a graphics resource, the user runs various graphics applications in it, and then connects from his/her client with vncviewer to see and interact with the applications.

All fine; with enough clever scripting, really easy to use. The only problem is that the user has to quit the vncserver session when done in order to free up the graphics card for a subsequent user. There might be multiple users viewing the same session, so you cannot automatically get rid of it when a viewer exits. We put up a window with a message that says "exit this window when you are done with your vnc session", but it seems like a lot of the time that doesn't happen: people just quit the viewer and happily leave the vncserver tying up the graphics card.

I need to solve this problem before putting up the public graphics application server demo.
Right now, I end up cleaning up after users myself, making an ad hoc determination that they are unwanted leftovers. Since it is just a demo, that's not terrible but it is not very satisfactory. I can make a policy that jobs can only run so long, but that seems heavy-handed.
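One mechanical form of that time-limit policy would be a cron job that reaps vncserver sessions past an age limit. A sketch only, not something we ship: the Xvnc process name, the `etimes` column of ps, and the 8-hour limit are all assumptions to adjust for a given site.

```shell
#!/bin/sh
# Sketch of a stale-session reaper, run periodically from cron.
# The Xvnc process name, the ps "etimes" column, and the 8-hour
# limit are assumptions to adjust for a given site.

MAX_HOURS=${MAX_HOURS:-8}

should_reap() {
    # Pure policy check: reap a session running at least MAX_HOURS.
    [ "$1" -ge "$MAX_HOURS" ]
}

reap_stale_sessions() {
    # etimes = elapsed seconds since each process started
    ps -eo pid,etimes,comm | awk '$3 == "Xvnc" { print $1, $2 }' |
    while read -r pid secs; do
        hours=$((secs / 3600))
        if should_reap "$hours"; then
            echo "reaping Xvnc pid $pid (running ${hours}h)"
            kill "$pid"
        fi
    done
}

# Guarded so that sourcing this file only defines the functions.
if [ "${RUN_REAPER:-0}" = "1" ]; then
    reap_stale_sessions
fi
```

The policy check is split out as its own function so the threshold logic can be tested (and argued about) separately from the process handling.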

What to do?

Tuesday Oct 02, 2007

Pretty easy 3D on a Sun Ray or other client


I did (with lots of help from my local Script Expert) come up with a script for showing ProE running on a graphics server on SWAN that anyone can run from a Sun Ray or other client (workstation, laptop, etc.)
One does have to install TurboVNC, though, to overcome latency and bandwidth issues. (If you are on one of the Bay Area Sun campuses this isn't necessary.)

Email me if you want to try this out and I'll send you the script.

Friday Sep 28, 2007

Visiting the Allosphere

Last week, I visited the Allosphere facility in the California Nanosystems Institute at UC Santa Barbara.
This is a 10 m diameter spherical projection screen with 3D spatially-localized audio and stereo projectors. There is a bridge inside where up to 15 users can stand and experience fully immersive virtual reality.

This is a picture looking into the Allosphere (with the technical director and director of the Allosphere facility). That is the bridge that the viewers will stand on, while being surrounded by their data.

This is the outside of the sphere. There are two hundred of these speakers positioned around the sphere. There are carefully calibrated delay lines to get spatialized audio synchronized with the 3D visuals.

The sphere sits inside a 3-story high anechoic chamber. It is REALLY difficult to get a decent picture of something this big, this close-up. Also it is dark.
Right now, they have one quadrant (top front) of the sphere lit-up with stereo projectors. I saw a really interesting demo program of flying through the brain from fMRI data. When they have the full sphere, looking at geophysical data in this facility should be really awesome. Talk about Journeys to the Center of the Earth!

Thursday Sep 20, 2007

Think Globally, Act Locally

Well, I was a little too "California-centric" when I said it was trivially easy to use 3D on a Sun Ray within Sun. My Menlo Park Shared Viz server worked great for me from the McLean office, but now that I think about it, Sun Ray makes sure that I use an mpk Sun Ray server wherever I go.

My first "customer" was in China, and network latency (big) and bandwidth (little) got in the way of an optimal experience.

For local area networks (think sfbay), my trivially easy approach works fine. For high latency, wide-area networks, some more effort on the part of the user is involved. We use TurboVNC (our version of VNC with optimized JPEG codec) to alleviate X latency problems. Instead of just running the 3D application on the server with graphics acceleration, and seeing the result on the Sun Ray client, one would also run vncserver on the graphics server and a vncviewer on the Sun Ray session. If I can't come up with a simple run script for this, I'll provide instructions for the extra steps.
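Until that run script exists, the extra wide-area steps can be sketched as two halves, since they run on different machines. Everything here is illustrative: "gfxserver", display ":2", and "proe" as the application are placeholders, and the paths are the default install locations.

```shell
#!/bin/sh
# Sketch of the wide-area workflow. The two functions run on different
# machines; "gfxserver", display ":2", and "proe" are placeholders.

on_graphics_server() {
    # Start a TurboVNC session, then launch the 3D application into it
    # under VirtualGL so rendering happens on the server's GPU.
    /opt/TurboVNC/bin/vncserver :2
    DISPLAY=:2 /opt/VirtualGL/bin/vglrun proe
}

on_client() {
    # From the Sun Ray session or laptop: attach the viewer. TurboVNC's
    # JPEG codec is what keeps this usable over a high-latency link.
    /opt/TurboVNC/bin/vncviewer gfxserver:2
}
```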

Monday Sep 17, 2007

Trying out 3D on a Sun Ray

Inside Sun, it is trivially easy to try this out since Sun Rays are everywhere.
Just email me and I'll send you a script and guest password to log into a system we are using as a test 3D graphics server for the ProE application.

Outside Sun, it is not trivial but still easy if you have a Sun Ray. You can download the Shared Visualization software from the Sun Download Center at http://www.sun.com/download/products.xml?id=465f0f01
and install it on your own system with 3D graphics to turn it into a graphics server.

If you don't have a Sun Ray, you can use your laptop or desktop system as a client. There is a little client piece of software to install on the clients as well, that comes with the Shared Visualization software download. Shared Viz software is based on the VirtualGL open source project. It adds management of shared graphics resources to VirtualGL's transparent remoting of OpenGL applications. You can also download VirtualGL from http://www.virtualgl.org/
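For anyone who wants to try plain VirtualGL before adding the Shared Viz resource management, the basic transparent-remoting pattern looks roughly like this. It's a sketch under assumptions: "gfxserver" is a placeholder for your graphics server, glxgears stands in for a real OpenGL application, and passing a remote command through vglconnect (as with ssh) may need adjusting; you can instead type the vglrun command into the shell that vglconnect opens.

```shell
#!/bin/sh
# Sketch of the basic VirtualGL pattern that Shared Viz builds on.
# "gfxserver" is a placeholder; "glxgears" stands in for any OpenGL app.

run_remote_3d() {
    app=${1:-glxgears}
    # vglconnect opens an ssh session set up for VirtualGL's image
    # transport; vglrun on the far end intercepts the application's GLX
    # calls so rendering happens on gfxserver's GPU while the rendered
    # images are compressed and sent back to this client.
    /opt/VirtualGL/bin/vglconnect gfxserver "/opt/VirtualGL/bin/vglrun $app"
}
```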

Friday Sep 14, 2007

Shared Vision

We are developing software for visualization solutions. Why?

Visualization provides essential tools for UNDERSTANDING the masses of data that people are grinding out with all those teraflops of computing and storing on all those petabytes of storage in the "red shift" industries. Visualization is the process of converting large amounts of complex, multi-dimensional data into images so people can more quickly and easily SEE patterns and anomalies in the data.

We are trying to solve two problems - build systems that can handle these massive amounts of data with sufficient performance AND provide easy access to any user anywhere to such systems. Sort of breaking the workstation chains. Not quite as dramatic as changing the name of SUNW.

The purpose of this weblog is to:

1. Let people know about the tools we've developed and where to find and how to use them.

2. Talk about (and also find out about more) some of the really cool things people are doing with this technology.

For example - did you know you can run 3D applications like ProE and Project Wonderland (with graphics hw acceleration) from any Sun Ray?

Another example is the Allosphere at the California Nanosystems Institute at UC Santa Barbara.
It is (and I am not kidding!) a 30 foot diameter sphere that is a projection screen with a bridge inside that can hold 15 people. Imagine being completely immersed in high-resolution stereo 3D graphics and surrounded by 200 Surround Sound speakers. I am going to see the facility next Wednesday. (I SO want to build the visualization system for this!)


Linda Fellingham

