Friday Jul 04, 2008

A Simple VNC Server and GDM Configuration Example for OpenSolaris 2008.05

My requirement was to be able to connect my VNC client to a system running OpenSolaris 2008.05 and to log in as root. I have now done this successfully on a system running the original OpenSolaris 2008.05 binary distribution and on a system running OpenSolaris 2008.05 after I ran a full image update to snv_91.

Update September 19th 2008: This procedure does not work if you have updated the image to snv_97 but does work if you update the image to snv_98. The upgrade from snv_97 to snv_98 wiped out the entries I had made in /etc/X11/gdm/custom.conf so I had to make those again. I have added an extra step at the end, based on Chris Drake's comments, to make the VNC server session persist if you exit the client.

1. Check that the VNC Server is Installed

This should be present as it is part of the 2008.05 binary distribution, but I checked anyway.

# pkg info SUNWxvnc
          Name: SUNWxvnc
       Summary: X11/VNC server
         State: Installed
     Authority: opensolaris.org (preferred)
       Version: 4.1.2
 Build Release: 5.11
        Branch: 0.91
Packaging Date: Fri Jun 13 17:49:25 2008
          Size: 6.3 MB
          FMRI: pkg:/SUNWxvnc@4.1.2,5.11-0.91:20080613T174925Z
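
If the package were not present (for example, on a system where not everything was installed), you could add it from the repository in the usual way:

# pkg install SUNWxvnc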

2. Add this line to /etc/services

vnc-server      5900/tcp                        # Xvnc
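
A quick grep confirms the entry is in place:

# grep vnc-server /etc/services
vnc-server      5900/tcp                        # Xvnc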

3. Edit /etc/X11/gdm/custom.conf as below

[xdmcp]
Enable=true
[security]
DisallowTCP=false
AllowRoot=true
AllowRemoteRoot=true

4. Enable the Services

# svcadm enable xvnc-inetd
# svcs xvnc-inetd
STATE          STIME    FMRI
online         16:22:30 svc:/application/x11/xvnc-inetd:default
# svcadm enable gdm
# svcs gdm
STATE          STIME    FMRI
online         14:43:13 svc:/application/graphical-login/gdm:default

5. Connect to the Display with a VNC Client

You should now be able to connect to <hostname>:5900 and you should see the gdm login screen.
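
For example, using a command-line viewer from the client machine (viewer syntax varies: some viewers treat :5900 as a port number, while others expect ::5900 or a display number such as :0):

$ vncviewer <hostname>:5900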

If you cannot connect, try stopping & starting the services:

# svcadm disable xvnc-inetd gdm
# svcadm enable xvnc-inetd gdm

6. Making the Session Persist

This may or may not be desirable for you, but if you want the VNC session to persist after you exit the VNC client, do the following:

# svccfg -s xvnc-inetd

svc:/application/x11/xvnc-inetd> editprop

This takes you into a vi session. Look for the line...

#setprop inetd/wait = boolean: false

Copy the line, uncomment it and set it to true, so that it reads:
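
setprop inetd/wait = boolean: true

Save the file, exit svccfg and run the command...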

# svcadm refresh xvnc-inetd
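
You can verify that the property is now set with svcprop:

# svcprop -p inetd/wait xvnc-inetd
true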

Connect again with your VNC client. Now, when you exit/kill the VNC client, the session on the server will persist and you will be able to connect to it again.

You may now want to add an extra level of security to enable password protection on your VNC server. That is something that I have been unable to make work...and from searching around, it seems that others have a similar problem.


Tuesday May 27, 2008

Making SAMBA Go Faster.....

I do a lot of work with the CIFS server that we now provide as part of OpenSolaris, but I still do work with SAMBA as well.

I have been experimenting with a workload where I am accessing 100 files from a Windows Server 2003 CIFS client (Sun Fire V40z). The server is a Sun Fire X4500 running SAMBA. I am doing sequential I/O with a workload-generating tool, and at any time 75 files are open for read and 25 open for write. There are 1 million files in the file system, comprising 4 TB of data.

I was not seeing great scalability or performance. Then some research by a colleague (supported by some folk at samba.org) led me to try enabling Async I/O (AIO) on the SAMBA server. This is a standard feature of SAMBA, and has been available in Sun's SAMBA server build since Solaris 10 8/07. From what I have been told, AIO specifically helps the case of a client workload scaling with threads vs. scaling with many connections; the workload-generating tool I am using (vdbench) scales with threads.

To enable AIO, add these lines to the [global] section of your smb.conf:

aio read size = 1
aio write size = 1

Then restart SAMBA.
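
On Solaris, where SAMBA is managed by SMF, restarting is a svcadm operation; the exact FMRI can vary by release, so check with svcs first:

# svcs -a | grep samba
# svcadm restart samba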

This parameter is defined in bytes, so this setting means that any I/O request larger than 1 byte will be handled asynchronously by smbd. There may be reasons to use a bigger value in some cases, but I don't know of any.

Without AIO, my previous best result was 42 MB/sec reads and 14 MB/sec writes; with AIO the client could read at 64.4 MB/sec and write at 22.1 MB/sec. CPU utilization on the X4500 running SAMBA went up from 15% to 50%.

Your mileage will vary depending on your workload and application, but that is quite a nice boost for just adding two lines to a configuration file :-)

Note: This work was done running the Solaris Express Community Edition snv_89 X86 (aka Nevada) on the Sun Fire X4500. The underlying file system was a ZFS file system provisioned from a RAID-Z storage pool configured as described here. I was reading/writing 8KB blocks.

Friday Apr 11, 2008

OpenSolaris as a StorageOS - The Week That Everything Worked First Time

This has been an extraordinary week...everything I have tried to do has worked first time.

To summarise this week's activities, I have set up and then blogged on:

Configuring the OpenSolaris CIFS Server in Workgroup Mode
Configuring the OpenSolaris CIFS Server in Domain Mode
Solaris CIFS Windows SID Mapping - A First Look
Configuring the OpenSolaris Virus Scanning Services for ZFS Accessed via CIFS Clients

All in all...a very good week :-)

Thursday Apr 10, 2008

Solaris CIFS Windows SID Mapping - A First Look

This follows directly on from where I set up an OpenSolaris CIFS server in Domain Mode. When I mapped the share I was not asked for a user ID and password as I had been in Workgroup mode...so, how are file and folder ownership and permissions handled?

The Solaris CIFS implementation includes ID Mapping Services. These services let you explicitly map Windows Security IDentifiers to Solaris User IDs and Group IDs if you wish, but if you don't set up explicit mappings then the ID Mapping Service will generate Ephemeral Solaris User IDs and Group IDs for Windows Users. The Solaris CIFS Administrators Guide discusses Identity Mapping strategies.

Permissions on the files and directories created at the end of the previous exercise, when listed from Solaris, look like this...

root@isv-x4500b # pwd
/tank/cifs1
root@isv-x4500b # ls -l
total 10
d---------+  2 2147483649 2147483650       2 Apr  8 05:49 user1
----------+  1 2147483649 2147483650       0 Apr  8 05:57 user1.txt
d---------+  2 2147483650 2147483650       3 Apr  8 05:50 user2
----------+  1 2147483650 2147483650       0 Apr  8 05:51 user2.txt

This looks a bit odd because the OpenSolaris CIFS service uses ZFS ACLs, and they don't show up in the above listing (on Solaris, ls -v or ls -V will display the ACL entries). We can dump the current ID Mappings, as below...

root@isv-x4500b # idmap dump
usid:S-1-5-21-500772251-2770406677-2360070262-1125      ==      uid:2147483650
usid:S-1-5-21-500772251-2770406677-2360070262-1113      ==      uid:2147483649
gsid:S-1-5-21-500772251-2770406677-2360070262-513       ==      gid:2147483650
gsid:S-1-5-21-500772251-2770406677-2360070262-512       ==      gid:2147483651
gsid:S-1-5-11   ==      gid:2147483652
gsid:S-1-5-2    ==      gid:2147483653
gsid:S-1-5-32-544   ==      gid:2147483654

We can see that Windows SIDs have been mapped to Solaris User and Group IDs. The mappings for the users have been created on the fly (Ephemeral ID Mapping); the Windows Group SID mappings are generated by system-level interactions, and you can decode those using the list here. We can see that:

Windows Group SID xxxxxx-513 = Domain Users = Solaris GID 2147483650

Windows Group SID xxxxxx-512 = Domain Admins = Solaris GID 2147483651

Windows Group SID S-1-5-11 = Authenticated Users = Solaris GID 2147483652

Windows Group SID S-1-5-2 = Network (users logged in via the network) = Solaris GID 2147483653

Windows Group SID S-1-5-32-544 = Administrators = Solaris GID 2147483654

We now know that the files and directories we created are in the Domain Users group as it was mapped to the Solaris GID of 2147483650.

Sometime soon I am going to dig deeper on all this; this was just a first look.

For More Information

OpenSolaris Project: CIFS Server Home Page

Open Solaris CIFS Documentation including the Solaris CIFS Administrators Guide & Troubleshooting Information

Also, consider joining the Open Solaris Storage Discuss Forum

Wednesday Apr 09, 2008

Configuring the OpenSolaris CIFS Server in Domain Mode

[Update July 4th 2008: This article was written prior to the release of OpenSolaris 2008.05, and I used the term OpenSolaris sloppily as I really meant Solaris Express Community Edition, codenamed "Nevada". If you take a look here, the different downloads available are explained.

These instructions are equally applicable to both distributions, but depending on what Solaris Express Community Edition package cluster you install you may not have the SMB server packages (I always install everything, so I cannot be more precise than that). In the case of OpenSolaris 2008.05 you will need to add the packages SUNWsmbkr & SUNWsmbs from the repository using Package Manager, or using the pkg install <pkgname> command.]

I recently blogged about configuring the OpenSolaris CIFS Server in Workgroup Mode. I have now gone through the process of doing this in an Active Directory environment. 

As before, I am working on a Sun Fire X4500 with Solaris Nevada build 86 installed....

root@isv-x4500b # uname -a
SunOS isv-x4500b 5.11 snv_86 i86pc i386 i86pc

I have mostly presented the commands I used and actual files from my system as-is, but I have occasionally had to edit fields.

1. Configure the OpenSolaris server to be a DNS client of the Active Directory Domain Server

To do this create/modify the file /etc/resolv.conf to do lookups against the Active Directory Domain Controller.

root@isv-x4500b # cat /etc/resolv.conf
domain sspg.central.sun.com
nameserver 192.168.2.1
search sspg.central.sun.com central.sun.com

Now, set up /etc/nsswitch.conf so that hosts are resolved via DNS. You can modify your existing /etc/nsswitch.conf file or just copy /etc/nsswitch.dns to /etc/nsswitch.conf.
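
The copy approach is a single command:

root@isv-x4500b # cp /etc/nsswitch.dns /etc/nsswitch.conf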

To check that DNS is working you can run a few simple tests by looking up a known host with nslookup.
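
For example, resolving the domain controller itself:

root@isv-x4500b # nslookup domaincontroller.sspg.central.sun.com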

2. Set up Kerberos

Edit the file /etc/krb5/krb5.conf and set up the below fields as shown, customized to your environment. Below is just a part of the /etc/krb5/krb5.conf file. The manual covers this step well on pages 42 and 43.

<--snip-->
[libdefaults]
        default_realm = SSPG.CENTRAL.SUN.COM

[realms]
        SSPG.CENTRAL.SUN.COM = {
                kdc = domaincontroller.sspg.central.sun.com
                admin_server = domaincontroller.sspg.central.sun.com
                kpasswd_server = domaincontroller.sspg.central.sun.com
                kpasswd_protocol = SET_CHANGE
        }

[domain_realm]
        .sspg.central.sun.com = SSPG.CENTRAL.SUN.COM
<--snip-->

3. Synchronise Clocks of your Server with the Domain Controller

This is an easy step to miss, and you may later be unable to join the domain due to Kerberos initialization problems...that is what happened to me!

There are various ways to synchronize the clocks, described in the manual on page 43. I did it this way:

root@isv-x4500b # ntpdate domaincontroller.sspg.central.sun.com

4. Start the CIFS Services

root@isv-x4500b # svcadm enable -r smb/server
svcadm: svc:/milestone/network depends on svc:/network/physical, which has multiple instances.

The message can be ignored.
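
As a sanity check, you can confirm that the service came online:

root@isv-x4500b # svcs smb/server

It should show the smb/server instance in the online state.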

5. Join the Domain

To complete this step you need to know the user name and password of an Active Directory user (aduser in this case) with Administrator rights for the domain.

This is the part of the process that I got stuck on for a while, as the manual describes some apparently redundant steps on page 43 using sharectl which did not work. The below worked:

root@isv-x4500b # smbadm join -u aduser sspg.central.sun.com
Enter domain password:
Joining 'sspg.central.sun.com' ... this may take a minute ...
Successfully joined domain 'sspg.central.sun.com'

If this fails, make sure you did not skip Step 3. You will see Kerberos messages in the system log when you try to join the Domain if the time difference is too great between the servers. If that is not the problem then check the Troubleshooting Guide.

6. Stop and Start the CIFS Server

root@isv-x4500b # svcadm disable smb/server
root@isv-x4500b # svcadm enable -r smb/server
svcadm: svc:/milestone/network depends on svc:/network/physical, which has multiple instances.

7. Create a ZFS file system

I already have a ZFS storage pool called tank.

root@isv-x4500b # zfs create -o casesensitivity=mixed tank/cifs1

8. Share the ZFS File System via SMB and Customise the Share Name

root@isv-x4500b # zfs set sharesmb=on tank/cifs1

The default share name would be tank_cifs1; we can change that to cifs1 as follows...

root@isv-x4500b # zfs set sharesmb=name=cifs1 tank/cifs1

You can check this using sharemgr:

root@isv-x4500b # sharemgr show -vp
default nfs=()
zfs
    zfs/tank/cifs1 smb=()
          cifs1=/tank/cifs1

9. Set Permissions on the Shared Directory

I was going to be accessing the share from two Windows clients using Active Directory registered users. I opened up the permissions on the shared directory so that I would not have any access problems.

root@isv-x4500b # chmod 777 /tank/cifs1

I need to experiment with ZFS ACLs and maybe Identity Mapping, as described in the Solaris CIFS Administrators Guide, to handle this more elegantly; those are things that I will explore in the future.

10. Access the Share

I accessed the share from two clients (client1 and client2) running Microsoft Windows Server 2003.

Both servers were members of the same Active Directory Domain as the CIFS server. I logged into each server as a different Active Directory registered user: user1 logged into client1; user2 logged into client2.

I mapped the share to both clients.

Map Share

When I mapped the share I was not asked for a user ID and password as I had been in Workgroup mode, but I could see in the system log that authentication had taken place and rw access to the share had been granted to users SSPG\user1 and SSPG\user2...

Apr  8 05:49:30 isv-x4500b smbsrv: NOTICE: smbd[SSPG\user1]: cifs1 rw access granted
Apr  8 05:49:53 isv-x4500b smbsrv: NOTICE: smbd[SSPG\user2]: cifs1 rw access granted

Both clients could see the same shared directory and I created some files and folders on the share from both clients with no problems.

For More Information

OpenSolaris Project: CIFS Server Home Page

Open Solaris CIFS Documentation including the Solaris CIFS Administrators Guide & Troubleshooting Information

Also, consider joining the Open Solaris Storage Discuss Forum

Tuesday Apr 08, 2008

Configuring the OpenSolaris CIFS Server in Workgroup Mode

[Update July 4th 2008: This article was written prior to the release of OpenSolaris 2008.05, and I used the term OpenSolaris sloppily as I really meant Solaris Express Community Edition, codenamed "Nevada". If you take a look here, the different downloads available are explained.

These instructions are equally applicable to both distributions, but depending on what Solaris Express Community Edition package cluster you install you may not have the SMB server packages (I always install everything, so I cannot be more precise than that). In the case of OpenSolaris 2008.05 you will need to add the packages SUNWsmbkr & SUNWsmbs from the repository using Package Manager, or using the pkg install <pkgname> command.]

This article documents a quick and simple process showing you how to configure the OpenSolaris CIFS Server in Workgroup Mode.

I am working on a Sun Fire X4500 with Solaris Nevada build 86 installed....

root@isv-x4500b # uname -a
SunOS isv-x4500b 5.11 snv_86 i86pc i386 i86pc

I already have a ZFS storage pool called "tank" created, so here goes:

1. Enable the CIFS server

root@isv-x4500b # svcadm enable -r smb/server
svcadm: svc:/milestone/network depends on svc:/network/physical, which has multiple instances

This is a diagnostic message, and it can be ignored.

2. Create the ZFS file system

root@isv-x4500b # zfs create -o casesensitivity=mixed tank/cifs0

3. Share the new file system via SMB and check that status of the operation

root@isv-x4500b # zfs set sharesmb=on tank/cifs0
root@isv-x4500b # sharemgr show -vp
default nfs=()
zfs
    zfs/tank/cifs0 smb=()
          tank_cifs0=/tank/cifs0

4. Change the name of the Share

I don't like the default name of the share, tank_cifs0, so I will change that to cifs0.

root@isv-x4500b # zfs set sharesmb=name=cifs0 tank/cifs0
root@isv-x4500b # sharemgr show -vp
default nfs=()
zfs
    zfs/tank/cifs0 smb=()
          cifs0=/tank/cifs0

5. Set the name of the Workgroup

By default the workgroup name is "workgroup" but I want to change that to "solcifs".

root@isv-x4500b # smbadm join -w solcifs
Successfully joined workgroup 'solcifs'

6. Install the SMB PAM module

Add the below line to the end of /etc/pam.conf:

other   password required       pam_smb_passwd.so.1     nowarn

In this whole process, this is the only time I have to edit a file, and it is a one-off.

7. Set/Change the Passwords for any Solaris User That Will be Used to Authenticate when Connecting to a CIFS Share

I will use root, but I could use any Solaris user the server knows about.

root@isv-x4500b # passwd root
New Password:
Re-enter new Password:
passwd: password successfully changed for root

With the SMB PAM module installed, this generates passwords that can be used by Windows as well as Solaris. This is a required step.

8. From Windows, Map the Share

From Windows, the share is accessed via its UNC path: \\isv-x4500b\cifs0. OpenSolaris CIFS does not support access to shares by unauthenticated users: it does not have an equivalent of SAMBA's "guest mode". In this example, I have authenticated myself as root.
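
From a Windows command prompt, the equivalent of the Map Network Drive dialog is something like this (Z: is an arbitrary drive letter):

C:\> net use Z: \\isv-x4500b\cifs0 /user:root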

Map Share

The mapped share looks like this...

Mapped Share

Files created from Windows will be owned on the Solaris server by the user you authenticated with. If that user does not have the correct UNIX permissions for the shared directory then some file operations will fail. That is easily fixed using chmod.
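
For example, to open the shared directory right up, as I did in the Domain Mode article:

root@isv-x4500b # chmod 777 /tank/cifs0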

I can also browse the OpenSolaris CIFS server from Windows...

Browse Shares

For More Information

OpenSolaris Project: CIFS Server Home Page

Open Solaris CIFS Documentation including the Solaris CIFS Administrators Guide & Troubleshooting Information

Also, consider joining the Open Solaris Storage Discuss Forum
