By user12609114 on Jan 17, 2010
The summit is free to attend, but registration is required. Please check out the website for more information.
I look forward to seeing you there!
With the release of Sun Ray Software 5, we have released a soft client that runs on Windows XP, Vista, and 7 and connects to Sun Ray Servers. The software and documentation can be obtained from this link. Installing the client is fairly straightforward and follows the normal Windows setup.exe steps.
There are configuration changes to the Sun Ray Server Software that must take place in order for the soft client to connect and that will be the focus of this post.
The first screenshot below shows what the client looks like when it is pointed at a Sun Ray Server that does not support the soft client: either a version prior to 4.2, or a 4.2 server that does not have the soft client functionality enabled.
Configure the Sun Ray Server to allow connections from the soft client
To get started, point a web browser at the Sun Ray server: http://<yourhost>:1660. You will then need to provide the administrative credentials to log into the site. Once in, take the following steps:
Once your settings are saved, you will be presented with a dialog stating that you need to restart the service. Click on the link to switch to the servers tab.
Once on the servers tab
The system will go through a dialog about the server restarting.
Finally, we need to go back to our soft client. It should now be connected to your Sun Ray Server.
Note that my Sun Ray server is configured to give out the Solaris desktop. The soft client is capable of displaying any kiosk session that the Sun Ray server is configured to display: Windows connector, VDI, etc.
One of the most powerful features of the Sun Ray solution is the ability to display Windows desktops. The Sun Ray Windows Connector allows us to connect to Windows Terminal Servers or a single XP instance. This how-to is a quick setup to get Windows running on Sun Ray for a POC. The Windows Connector software can be found here.
Extract the software:
# unzip srwc_2.2_solaris.zip
Install the connector:
Create a group for the connector proxy to run as
# groupadd utsrwc
Run the installer
Accept the license
Enter the name of the group we just created - utsrwc
Run the automatic configuration script
# /opt/SUNWuttsc/sbin/uttscadm -c
Restart the Sun Ray Server Software
# /opt/SUNWut/sbin/utrestart -c
Note: Now is when we would apply patches. Since this is a new release, there are not any yet.
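The install steps above can be recapped in a single shell sketch. This is only a sketch of the sequence, using the standard SRWC 2.2 paths; the guard at the top makes it skip cleanly on hosts where the connector packages are not present.

```shell
#!/bin/sh
# Recap of the Sun Ray Windows Connector setup steps above, wrapped in a
# function so it can be skipped cleanly on hosts without the SRWC packages.
setup_srwc() {
    if [ ! -x /opt/SUNWuttsc/sbin/uttscadm ]; then
        echo "SRWC not present on this host; skipping"
        return 0
    fi
    # Create the group the connector proxy runs as
    groupadd utsrwc
    # (Run the SRWC installer from the unzipped srwc_2.2 directory,
    #  accept the license, and enter the group name utsrwc when prompted.)
    /opt/SUNWuttsc/sbin/uttscadm -c      # automatic configuration script
    /opt/SUNWut/sbin/utrestart -c        # cold-restart Sun Ray services
}

setup_srwc
```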
Configure Kiosk Mode:
We will use the Sun Ray server's web interface to configure it to present Windows desktops. Log into your web admin port: http://<name of sun ray server>:1660. The username is admin and the password is the one you gave it during setup.
We need to create our kiosk mode as the windows connector.
At this point you will have a kiosk mode defined, and you will need to tell the server when to use it. This is accomplished by using the System Policy to turn Kiosk Mode on.
Changing the system policy requires a restart of the Sun Ray Services
A quick and dirty guide to getting a Sun Ray server up and running for a proof of concept. Please note that while this will get things up and running, there are many items outside the scope of this document that need to be taken into consideration for a full production enterprise deployment. The documentation has been moved to a wiki format, which can be found here. Don't be bashful: click around, there is a ton of great, straightforward information on the site.
Get the goods
The first step is to have an available server to install on. You will need a Solaris 10, Red Hat, or SUSE server. The directions here are for installing on Solaris 10. Next you will need to download the software. You also need to check this patch list and get the latest patches.
You need to use one of the provisioning methods in this article to set up how your Sun Rays will find the Sun Ray server.
SRSS 4.2 requires Solaris 10 5/09 (u7); verify you have the correct version.
I was working on a VDI 3.0 install today and I got an error that led me on a very interesting journey. As I was attempting to install the VirtualBox component on Solaris 10 10/08, I got an error stating that my swap needed to be equal to or greater than my memory.
When I do a Solaris install I normally select custom and create 2 partitions: one for root and one for swap. I know the old adage that swap should be twice RAM, but in these days with so much RAM in systems, I don't normally follow it. Faced with learning how to resize swap or reinstalling, I took the lazy way out and reinstalled Solaris 10. This time I gave the swap partition 10GB.
Thinking I would be merrily moving along my way, I ran the VirtualBox installer script again, and bam, the same error. The error said I had 4GB of swap and 8GB of memory. Stop the presses: I know I said 10GB. df -h clearly showed 10GB. Obviously the script was calculating swap differently than I was. I cracked the script open and found that it was running the command swap -l. Lo and behold, I only had 4GB of swap, per that command.
Turns out that there are 2 types of swap in Solaris, "system virtual memory" and "disk paging space". Seems as if the GUI installer allows you to adjust one but not the other. To be fair, the settings might be there and I have just never noticed them. I was tempted at first to just comment out the check in the script and keep moving, but I decided it must be there for a reason. After a bit of research, it looks like there is a known issue: while it is a remote possibility, some customers ran into it in the EA releases of VDI 3.0, which is why the check was added. In other words, the proper thing to do was to fix the swap space sizing. Looks like I was going to have to figure out how to adjust that swap size.
After some research I found this link, which shows how to increase the size of your swap and your dump device on a ZFS file system. Due to a known issue it is recommended that you delete the swap device first and then recreate it. Here are the steps:
# swap -d /dev/zvol/dsk/rpool/swap
# zfs set volsize=2G rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap
I tried to delete the swap partition and got an error that it did not exist. While investigating this, I realized I had no rpool. Then it dawned on me: I had a UFS installation! I have been doing a lot of work with OpenSolaris recently, and it installs ZFS as the root file system by default. I was positive Solaris 10 10/08 has bootable ZFS. I was also positive I had not missed a question in the GUI installer. After doing a bit more research I found this great link, which identifies that in order to choose ZFS as your root file system in Solaris 10 10/08 you need to use the text-based installer, not the GUI. Looks like another install was forthcoming!
Third time's the charm? I booted to the install DVD again and this time I chose the text installer. I was asked the normal questions of the GUI installer, but also what type of file system I wanted. I selected ZFS, and I was asked to set the size of my swap and dump. Both went to 8GB, which equals the amount of RAM I have in the box.
After Solaris installed, I ran the swap command again:
# swap -l
swapfile                  dev    swaplo    blocks      free
/dev/zvol/dsk/rpool/swap  181,2       8  16777208  16777208
Note the output is in 512-byte blocks, so you have to divide by 2 to get KB. I have 8GB of swap and am good to go!
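To double-check the arithmetic, the conversion from 512-byte blocks to gigabytes can be sketched in a few lines (just an illustration of the math, not part of the install):

```python
# swap -l reports sizes in 512-byte blocks, so divide by 2 for KB
# (or multiply by 512 for bytes).
blocks = 16777208                  # "blocks" column from the swap -l output above

kb = blocks / 2                    # 2 blocks per KB
gb = blocks * 512 / 1024 ** 3      # bytes divided by 1024^3

print(f"{kb:.0f} KB is about {gb:.2f} GB")   # just under 8 GB, matching RAM
```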
Index/Listing of the How To's I have published on my blog
With the release of VirtualBox 2.1, we no longer need to use these virtual adapters. The installer for 2.1 will remove the virtual driver that previous versions used and will allow direct access to the network adapters in Windows. This is a great update and all should rejoice!
Well, what if someone actually built a demo assuming that virtual network interface was there? Today I got a call from an SE on our IDM team because they upgraded to 2.1 and their IDM demo broke. After a bit of debugging, I discovered the demo ships as a prepackaged VirtualBox VM. You then access the services running in the VM from your host operating system. The demo assumes your host has the IP 192.168.100.100 and that the IDM VM has an IP of 192.168.100.101. Seems as though the IDM demo guys cleverly used the virtual network interface that installed with previous versions of VirtualBox to give the Windows host an IP. While this was a very clever trick, and solved the problem of having to deal with getting the correct IP on the Windows host, it obviously no longer works with VirtualBox 2.1 because the virtual interface is gone!
My initial solution to the problem was to hard-code the IP on the Ethernet adapter in Windows. While not very effective for other uses of the system, it would allow the demo to work. The catch here is that Windows network adapters only plumb up when they detect a link. Not very professional to walk into a customer and say you need to jack into their network (no IP needed, just the link light) to run your demo. Obviously this was not going to fly.
After some more research we found a Microsoft Knowledge Base article that describes how to add a loopback interface that will plumb up even without a network cable, thus allowing the demo to run.
If you have VMs that are hard-coded to a fixed IP and you want to add a fixed IP to your Windows host, so that you can communicate with your VMs regardless of whether your network adapter has link status, this Microsoft Knowledge Base article is for you!
Technorati Tags: VirtualBox
The following items need to be up and running before we can proceed with the SRSS 4.1 connector for View. If you are starting from scratch there are a lot of steps to get through. Most likely you will be asked to deploy in front of a working View environment and can skip most of the prep work.
Install of ESX (Directions)
Install of Virtual Center (Directions) - note 32 bit windows is required and do not install on the AD Server
Install of View Connector Server (Directions) - note 32 bit windows is required and do not install on the AD Server
Install of XP with View Agent (Directions)
Configuration settings in View:
We need to make a couple of configuration changes to View. I recommend getting things working without SSL first, and then coming back and turning on SSL if your environment requires it.
First, let's change View to accept non-SSL connections. Log into your View administrative website. Go to the configuration tab. Edit your global settings to turn require SSL off. When you make the change, View will state that it needs to be restarted. Hold off for now.
On the pop-up screen, un-tick require SSL
View by default tries to tunnel the connection. We need to change it to direct connect. In the View administrator, on the configuration tab you need to select your server and click on edit.
On the pop up screen. Click on direct connect.
At this point we need to restart the View service. You will find it in the Windows Service manager as VMWare View Connection Server.
Sun Ray Connector for VMware Virtual Desktop Manager(SRVDM):
Now that we have a working View environment and a working SRSS environment we can get to the steps to tie the 2 together. First we need to download SRVDM to our Sun Ray Server. The bits can be found here.
# unzip srvdm_1.0.zip
# cd srvdm_1.0
# pkgadd -d Packages/Solaris_10+/i386/
Accept the defaults and you should get a message that the install finished correctly.
We will use the web interface for the Sun Ray server to configure the Sun Ray server to present windows desktops.
Log into your web admin port http://<name of sun ray server>:1660
The username is admin and the password is the one you gave it during set up.
Click on the advanced tab:
Then on the Kiosk Sub tab:
If you are setting up your Kiosk mode for the first time you will see a message about no Kiosk Mode settings. Click the edit button on the right. If you have kiosk mode setup already jump to the next step:
Change the session drop down to VMWare Virtual Desktop Manager Session.
We are going to start our tests without SSL turned on. In the arguments field add
-http -s <servername> and click on OK
At this point you will have a kiosk mode defined, and you will need to tell the server when to use it. This is accomplished by using the System Policy to turn Kiosk Mode on for card users and non-card users. Click on the System Policy sub-tab under the Advanced tab, click the enable checkbox for Kiosk Mode under both non-card users and card users, and then click the save button.
Select your server and click on cold restart.
If you need to enable SSL, the steps can be found here. Remember to re-check the use SSL setting that we shut off above, and restart the View Connection service. Also remember to go back into the kiosk config, take out the -http argument, and restart the Sun Ray Server.
This entry assumes that you have a working non-SSL SRVDM View environment. If you don't, check out this entry on how to get one.
The SSL certificate that comes with the default install of View is not a valid one. You will get hostname-mismatch errors if you use the VMWare clients, and you will not be able to connect through the Sun Ray client. In order to get the Sun Ray connector for VMWare View to connect, we need to either move a valid certificate into place or create a self-signed one. The steps below can be found in the View documentation.
First lets create a self signed certificate. If you have a signed certificate already skip this step. On your VMWare View server start a command prompt and switch to the following directory:
C:\Program Files\VMware\VMware View\Server\jre\bin>
Once there, execute the following command:
keytool -genkey -keyalg "RSA" -keystore keys.p12 -storetype pkcs12 -validity 360
You will be asked a series of questions which will be used to create your certificate. Make sure you remember the password you set! Also, the first question, which asks for your first and last name, is somewhat misleading: it becomes the certificate's common name and needs to be the name of the server.
We need to move the certificate we created, keys.p12, from C:\Program Files\VMware\VMware View\Server\jre\bin to C:\Program Files\VMware\View Manager\Server\sslgateway\conf.
Next we need to create the file C:\Program Files\VMware\View Manager\Server\sslgateway\conf\locked.properties and insert the following 2 lines into it:
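The 2 lines name the keystore file and its password. Per the VMware View documentation they look like this (secret is a placeholder for the password you chose when creating the keystore):

```properties
keyfile=keys.p12
keypass=secret
```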
Where secret is the password you used to create the certificate above.
Restart the VMWare View Connection Server.
In the View admin site, in the event log you should see a line about using the keys.p12 file.
Now when you go back to your View site, through the web interface, you should be able to connect without getting name errors. Note you will still get an error about a self signed cert, but that is the only one you should get now.
Install the certificate on Sun Ray Servers:
The readme that comes with the SRVDM provides a command for importing the certificate into SRVDM. That is all well and good, if we have the certificate! When you went to the View admin site, you needed to add a security exception because it is a self-signed certificate. (If you have a CA-signed certificate, Firefox will store the certificate for you automatically.) In either case, the following steps using Firefox can be used to get the certificate.
We can use Firefox to export the certificate. The challenge is that since we are using a self-signed certificate, you can only do it while you are adding the security exemption. In Firefox, go to Preferences. Click on the Advanced tab, then Encryption, then View Certificates.
You should see your certificate, but notice the export button is grayed out.
We need to click on delete and start the process over to get our cert. Once the certificate is deleted, return to the View admin site. You will get the cert error again; click on Add Exception, then click on Get Certificate, and before clicking Confirm Security Exception, click on the View button.
Next, click on the Details tab and then Export.
Name the cert and save it someplace appropriate. Close out the windows and confirm the security exemption to get back into the View website.
Now that we have the cert in hand, we can import it into our Sun Ray servers. First you need to copy (scp) the cert we just saved to the Sun Ray server. Once there, run the following command, changing <VDM certificate> to the file name you gave the cert during the export above. Also make sure to note the password you use.
# keytool -import -file <VDM certificate> -trustcacerts -v -keystore /etc/opt/SUNWkio/sessions/vdm/keystore
Next we need to edit /etc/opt/SUNWkio/sessions/vdm/vdm and insert the password.
Line 17 contains the word javaKeyStorePass; add the password we set in the step above to the file there.
NOTE! There is a typo that will prevent things from working. You must correct the typo with the following 2 commands:
# sed 's/trustStore=$javaKeyStorePass /trustStorePassword=$javaKeyStorePass /' /etc/opt/SUNWkio/sessions/vdm/vdm > /tmp/vdm
# cp /tmp/vdm /etc/opt/SUNWkio/sessions/vdm/vdm
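If you want to sanity-check what that sed substitution does before touching the live vdm file, you can run it against a sample line. The -Djavax.net.ssl. prefix below is only my guess at the shape of the real line 17; the part that matters is that trustStore= becomes trustStorePassword=:

```shell
# Demonstrate the typo fix on a sample line instead of the live vdm file.
# Single quotes keep $javaKeyStorePass literal; sed treats a mid-pattern $
# as a literal character, so the substitution matches the text as written.
sample='-Djavax.net.ssl.trustStore=$javaKeyStorePass '
fixed=$(echo "$sample" | sed 's/trustStore=$javaKeyStorePass /trustStorePassword=$javaKeyStorePass /')
echo "$fixed"
```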
We need to restart the kiosk sessions on the Sun Ray server. Since this is a POC server and we have made lots of changes, I suggest doing a cold restart.
# /opt/SUNWut/sbin/utrestart -c
When the Sun Rays come back up, you should receive the View login and be good to go.
If things are not working for you, one of my colleagues wrote a great blog entry about how to debug things which can be found here.
My same colleague also wrote an entry about how to get the certificate working in VDM versions prior to View, which can be found here. Note the typo directions above are from this entry.
This post is a follow-up to a previous post I wrote on how to get bridged networking working in VirtualBox on Windows. The great news is that with the release of VirtualBox 2.1, these steps are no longer necessary! VirtualBox can now communicate directly with the network interfaces on the host. All you need to do now is go to the network settings of your virtual machine and select an interface! Details are highlighted in the picture below.
It is with a great deal of melancholy that I sit down to write this entry. It is that time of year when we say goodbye to the old and ring in the new. One of the systems I was in charge of at my previous employer officially goes offline at midnight tonight.
I used to work for Sprint. Over the years I held many, many titles. Seems as though we re-orged every other day. At one point I was responsible for working with Sprint's customers to move their applications into our co-location facilities. One day I received a call from an exec, because there was a problem. Sprint had an application hosted by an outside company that they needed to move in house ASAP. There had been a project to do this that was WAY off track, and since I had been so successful moving our customers apps into the data center, maybe I could get this one moved.
Always up to a challenge I of course said yes. I then asked what the app was. It was a thing called shortmail and webmail. Shortmail allowed you to send text messages to a phone. Umm yeah in 2008 this is not a very novel idea, but in 2000, I had never heard of it. Webmail was a web based email application.
Diving into the project, I realized things were very, very bad. Not only did we have to get the applications working, it was discovered that the new 3G network, which was supposed to be ready, wasn't. We had to take over getting the 3G network running as well, so that our application would work. The date that everything had to work was set based on marketing, not based on reality. Our CEO was going to ring the bell at the stock market and announce the new network. The project was what is called a death march in the software industry. To make a very long story short, I was able to put together a core team of six people who literally worked 2 1/2 man-years in one year, each, and we pulled it off! We quite frankly left many bodies along the way. There were many people who rolled on and off the project because they could not handle it. But the core team managed to stay together and pull the project off. I still feel to this day that it was an amazing feat.
I made some of the best friends I have on this project. How could we not become friends? We were literally spending 20 hours a day together. To this day I still stay in touch with most of them. Unfortunately one has moved on to greater things. Goodbye Drew, we miss you!
In '04, the application had to be moved from one data center to another. It became the reunion tour. The team was put back together and we moved it again. This time the project was not a death march. It was great working with the team again. Ironically, that project led to another one at Sun, which led to my current position.
As I logged in to pay my Sprint bill this morning, I saw the final warning message. Webmail will be shut down this evening for good. Honestly, not a lot of people really want their email address to be firstname.lastname@example.org. These days, with AOL, Hotmail, Yahoo, and Google, there were very few users left on the system, and I do not blame Sprint for making a sound economic decision.
It is interesting to remember the good and yes the bad times we had pulling off this project. My memories are really attached to the people on the project more than the systems. So webmail, while you might be going off line, you will live on in the hearts and minds of the people who made the project real!
Obviously I have an interest in OpenSolaris because I work for Sun. There have been several recent events that have caused me to have more than a passing interest with OpenSolaris and more specifically ZFS.
I have been in the computer industry for a long time now. While I have seen enterprise disks, ones that are in many different forms of redundant setups fail, I had not personally had a disk failure. I actually have quite a sizable stack of disks from old systems, that I have kept, because you know some day I might need them.
Over the last 2 years, my personal disks have started to fail. I have had 4 disks fail in that time. One was 12 months and 3 days old; yes, it had a one-year warranty. I am pretty obsessive about my personal backups, and so far I have not lost any data, but I need something more to feel comfortable.
First I started with: how much backup do I need? I have a sizable collection of songs ripped from my CDs into iTunes. I have also started downloading songs from iTunes, and based on their DRM, if you lose the file, you have to repurchase the track. Next we have an HDD camcorder, which leads to a lot of digital files for the home videos. Beyond that I have the normal amount of work and personal files. All said and done, I have over 500GB of data that I need to back up. Also, this data resides on 3 separate systems: 2 Macs and one Windows machine.
I started by looking at online backup services. I did quite an extensive search. The services fell into two categories. The first were inexpensive or free, and quite frankly I would not trust them with my data. The second, while I would trust them with my data, were cost-prohibitive. In both cases the challenge of getting the initial data load to the service was enormous. Pushing 500GB up a 1Mb upstream will take a VERY long time. The conclusion was that online backup is not right for me.
I then started to research home NAS solutions. Again this left me wanting. Home NASes are either ridiculously expensive, or proprietary and not very expandable. I wanted a flexible solution that I knew I could easily upgrade, and I was not willing to pay $1000s for a NAS that fit this criteria.
Then like Sir Isaac Newton getting hit on the head with an apple it dawned on me. Why not build a ZFS file server? Unlike Newton my idea was not so revolutionary. A quick google search turns up tons of hits of people who have done just this. Another search of blogs.sun.com will turn up many hits of people at Sun who have done this.
When I originally sat down to write this entry, it was going to be a how-to. The how-tos are already well documented, so instead I have decided to write about my journey through the process. The links below are the steps that I went through to get my home NAS up and running.
Building my home NAS was quite an exciting journey for me. Thank you for coming along with me through these entries!
My goal when I set out was to build a home ZFS file server to provide a safe backup point for my data. My other goal was to do it as inexpensively as possible. My total build cost was ~$225 for the purchase of the hard drives, SATA card, and network card. Truly an exceptional value for what I now have!
From a performance standpoint, the file server is extremely responsive. To be fair, if there were several users hitting it, it might not perform, but for my home office environment it is spectacular.
I hope that this journey inspires you to take on the challenge of building a home NAS for yourself. I know many home users have that old computer sitting in the basement or closet wondering what to do with it. Put it to good use and do yourself a huge favor by backing up your data.
And while it is sad to see this journey come to an end, maybe it hasn't quite? There is a new feature in OpenSolaris 2008.11 called Time Slider, which is a GUI interface for ZFS snapshots. Sounds like an opportunity to let the adventure continue!
Note: This entry is part of a series which starts here
My journey really started with this blog. It has a great overview of why you want to use ZFS, how to build your hardware, how to set up ZFS, and how to use it. At first the information might seem daunting, but like most new projects, once you get into it you realize the project is broken down into pieces that can be easily followed.
The first step of the project is identifying and putting together your hardware. One of our Sun colleagues has documented a spectacular build. I seriously started going down this road. To get the parts in the US you are looking at about $800-$900. While this is still an exceptional value, it is quite frankly a bit more than I was looking to spend.
My build started in another direction: cheap. My goal was to build the system as cheaply as I could. Again, it is going to be open, and I can always upgrade the different components as needed. I have a difficult time parting with old gear. I have a Dell 4550 that has long given up the ghost for running Windows. Step one of my build was cracking it open and giving it a good dose of canned air to get the cobwebs out. The system only had 512MB of RAM, but a quick exploration through my stockpiles found a compatible DIMM to bring the system to 1GB. The system also has a P4 running at 2.5 GHz.
Now if you have read about ZFS you will probably be asking yourself what I was thinking running on this minimum of specs. Doesn't ZFS need a 64-bit chip? Don't I need more RAM? Well, to be honest, I had no idea what my performance was going to be like. But again, my goal for the system was to be a backup server. I have 2 programs, one for the Mac and one for Windows, that copy files from the systems I run to the backup server. These programs are all scheduled to run in the middle of the night. It really doesn't matter if they take 30 minutes or an hour. Also, after the initial load I would only be moving incremental changes, which is really not a significant amount of data. Therefore I decided to plow ahead with my 7-year-old box.
I started by installing OpenSolaris 2008.11 on the box. All of the critical devices in the box were found, and the system was up and running in no time. Now came some interesting architectural decisions. My goal was to have 4 drives for the ZFS pool and a separate drive for the OS. This provides the ability to update the OS or replace the OS drive without interfering with the storage pool.
My first thought was to boot the system from a USB drive running the OpenSolaris operating system. The 4550 did not allow booting from USB. I did find I was 8 revisions behind on the BIOS, and upgraded it, but alas, still no way to boot from USB.
At this point my original plan changed. The case has slots for more drives, so I decided to keep the IDE interfaces, one for a hard drive and one for the DVD, and to add a SATA card to hang the rest of the drives off of. When finished, the box had 5 hard drives, one IDE and 4 SATA, an IDE DVD, and I decided to leave the floppy in as well.
I decided that since I have a GigE switch I would upgrade the network card as well. After checking the HCL, I was off to my local computer store to see what parts I could come up with.
And as often goes with adventures like this, my original goal was to buy 4 hard drives. I was hoping for 250GB to 300GB drives to get me to a pool of 750GB or 900GB. The store I went to was having a sale on 500GB drives, and I was able to pick 3 of them up for less than 4 of the others. I was able to get a 1TB pool, and this sets me up for adding another 500GB to my ZFS pool very easily in the future, by simply getting another disk when I am out of space.
My Adventure continues with upgrading to GigE
Note: This entry is part of a series which starts here
I have an old computer that I decided to install OpenSolaris on. I have a GigE switch in my lab and decided the onboard 100Mb card was not good enough, so I decided to go GigE. The old adage that if it ain't broke, don't fix it should have come to mind here, but....
The first thing I did was check the HCL. With a few models in mind, I headed off to MicroCenter. They happened to have a D-Link DGE-530T, and since it was on the compatibility list I decided to go for it. I installed the card and followed the instructions on the HCL. During the installation I got errors, and the card did not work.
I went back to the HCL list and looked at the details. The card that is known to work has the following config:
compatible: 'pci1186,4b01.1186.4b01.11' + 'pci1186,4b01.1186.4b01' + 'pci1186,4b01' + 'pci1186,4b01.11' + 'pci1186,4b01' + 'pciclass,020000' + 'pciclass,0200'
model: 'Ethernet controller'
My card has:
compatible: 'pci8086,24c3.1028.142.1' + 'pci8086,24c3.1028.142' + 'p
model: 'Ethernet controller'
Now, to be fair, I am not a hardware expert, but obviously the names are different. I started googling around and found this blog post. We now know from the blog post that the name encodes the vendor ID and the product ID. Since the first half is the same, and since they are both D-Links, things are adding up.
Now what about the product ID? There is a website that documents all of the unique codes on PCI cards. The website shows that 4b01 is a Marvell 88E8001 chip. It also shows that 4c00 is the Marvell 88E8003 chip. Even though I have a different chipset, I decided to give it a whirl and ran the following command:
/usr/sbin/update_drv -a -i "pci1186,4c00" skge
This brought the interface on-line and it seems to be working!
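The naming convention is easy to pull apart programmatically. As a quick illustration (a sketch using the IDs from this post, not any official tool):

```python
# A Solaris "compatible" alias like 'pci1186,4b01' encodes the PCI vendor ID
# (1186 is D-Link) before the comma and the device/product ID after it.
def split_pci_name(name):
    body = name[len("pci"):]          # drop the 'pci' prefix
    vendor, device = body.split(",")
    return vendor, device

print(split_pci_name("pci1186,4b01"))   # the HCL entry (Marvell 88E8001)
print(split_pci_name("pci1186,4c00"))   # my card's ID (Marvell 88E8003)
```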
My adventure continues with getting to SATA.