Friday Jun 17, 2011

Welcome to ubiquitous file sharing (December 08, 2009)

The core of any file server is its file system and ZFS provides the foundation on which we have built our ubiquitous file sharing and single access control model.  ZFS has a rich, Windows and NFSv4 compatible, ACL implementation (ZFS only uses ACLs), it understands both UNIX IDs and Windows SIDs and it is integrated with the identity mapping service; it knows when a UNIX/NIS user and a Windows user are equivalent, and similarly for groups.  We have a single access control architecture, regardless of whether you are accessing the system via NFS or SMB/CIFS.

The NFS and SMB protocol services are also integrated with the identity mapping service and shares are not restricted to UNIX permissions or Windows permissions.  All access control is performed by ZFS, the system can always share file systems simultaneously over both protocols and our model is native access to any share from either protocol.
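
For example, equivalences between Windows and UNIX identities can be configured as name-based mapping rules with idmap(1M).  The following is only a minimal sketch, assuming a hypothetical Windows domain example.com and a UNIX user and group named jane and staff; the actual rules depend on your environment:

        idmap add winuser:jane@example.com unixuser:jane
        idmap add wingroup:staff@example.com unixgroup:staff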

Modal architectures have unnecessary restrictions, confusing rules, administrative overhead and weird deployments to try to make them work; they exist as a compromise, not because they offer a benefit.  Having some shares that only support UNIX permissions, others that only support ACLs and some that support both in a quirky way really doesn't seem like the sort of thing you'd want in a multi-protocol file server.  A server may be incapable of supporting a single integrated model because it was built on a file system designed for UNIX permissions, with ACL support bolted on as an afterthought, or because the protocol services are not truly integrated with the operating system.

With a single, integrated sharing and access control model:

If you connect from Windows or another SMB/CIFS client:

  • The system creates a credential containing both your Windows identity and your UNIX/NIS identity.  The credential includes UNIX/NIS IDs and SIDs, and UNIX/NIS groups and Windows groups.

  • If your Windows identity is mapped to an ephemeral ID, files created by you will be owned by your Windows identity (ZFS understands both UNIX IDs and Windows SIDs).

  • If your Windows identity is mapped to a real UNIX/NIS UID, files created by you will be owned by your UNIX/NIS identity.

  • If you access a file that you previously created from UNIX, the system will map your UNIX identity to your Windows identity and recognize that you are the owner.  Identity mapping is also applied during access checking when your access is evaluated against the ACL.

If you connect via NFS (typically from a UNIX client):

  • The system creates a credential containing your UNIX/NIS identity (including groups).

  • Files you create will be owned by your UNIX/NIS identity.

  • If you access a file that you previously created from Windows and the file is owned by your UID, no mapping is required. Otherwise the system will map your Windows identity to your UNIX/NIS identity and recognize that you are the owner.  Again, mapping is fully supported during ACL processing.

The NFS, SMB/CIFS and ZFS services all work cooperatively to ensure that your UNIX identity and your Windows identity are equivalent when you access the system.  This, along with the single ACL-based access control implementation, results in a system that provides that elusive ubiquitous file sharing experience.

Access-based Enumeration (December 04, 2009)

Access-based Enumeration (ABE) is another recent addition to the Solaris CIFS Service - delivered into snv_124.  Designed to be compatible with Windows ABE, which was introduced in Windows Server 2003 SP1, this feature filters directory content based on the user browsing the directory.  Each user can only see the files and directories to which they have access.  This can be useful to implement an out-of-sight, out-of-mind policy or simply to reduce the number of files presented to each user - to make it easier to find files in directories containing a large number of files.

ABE is managed on a per-share basis by a new boolean share property called, as you might imagine, abe, which is described in sharemgr(1M).  When set to true, ABE filtering is enabled on the share and directory entries to which the user has no access will be omitted from directory listings returned to the client.  When set to false or not defined, ABE filtering will not be performed on the share.  The abe property is not defined by default.

Administration is straightforward, for example:

# zfs set sharesmb=abe=true,name=jane tank/home/jane
# sharemgr show -vp
    zfs
       zfs/tank/home/jane nfs=() smb=()
          jane=/export/home/jane     smb=(abe="true")

ABE is also supported via sharemgr(1M) and on smbautohome(4) shares.
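
For shares managed directly by sharemgr, the same property can be set with the set subcommand.  A minimal sketch, assuming a hypothetical share named docs in a share group called zgroup:

        sharemgr set -P smb -p abe=true -r docs zgroup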

Note that even with ABE enabled, the fact that a file is visible in a share doesn't automatically mean that the user will always be able to open the file.  If a user has read attribute access to a file, ABE will show the file, but access will be denied if the user tries to open it for reading or writing.

We considered supporting ABE on NFS shares, as suggested by the name of PSARC/2009/375, but we ran into problems due to NFS client readdir caching.  NFS clients maintain a common directory entry cache for all users, which not only defeats the intent of ABE but can lead to very confusing results.  If multiple users are looking at the content of a directory with ABE enabled, the entries that get cached will depend on who looks at the directory first.  Subsequent users may see files that ABE on the server would have filtered out or files may be missing because they were filtered out for the original user.

Although this issue can be resolved by disabling the NFS client readdir cache, this was deemed to be an unsuitable solution because it would create a dependency between a server share property and the configuration on all NFS clients, and there was the potential for differences in behavior across the various NFS clients.  It just seemed to add unnecessary administration complexity so we pulled it out.

References for more information

PSARC/2009/246 ZFS support for Access Based Enumeration

PSARC/2009/375 ABE share property for NFS and SMB

6802734 Support for Access Based Enumeration

6802736 SMB share support for Access Based Enumeration

Windows Access-based Enumeration

Using Computer Management (MMC) with the Solaris CIFS Service (August 25, 2009)

One of our goals for the Solaris CIFS Service is to provide seamless Windows interoperability: not just to deliver ubiquitous, multi-protocol file sharing, which is obviously a major part of this project, but to support Windows services at a fundamental level.  It's an ongoing mission and our latest update includes support for Windows remote management.

Remote management is extremely important to Windows administrators and one of the mainstay tools is Computer Management. Computer Management is a Windows administration application, actually a collection of Microsoft Management Console (MMC) tools, that can be used to configure, monitor and manage local and remote services and resources.  The MMC is an extensible framework of registered components, known as snap-ins, which allows Computer Management to provide comprehensive management features for both the local system and remote systems on the network.

Supported Computer Management features include:

  • Share Management

    Support for share management is relatively complete.  You can create, delete, list and configure shares.  It's not yet possible to change the user limit properties (maximum allowed or allow this number of users) but other properties, including the Share Permissions, can be managed via the MMC.

    Managing shares using Computer Management


  • Users, Groups and Connections

    You can view local SMB users and groups, monitor user connections and see the list of open files. If necessary, you can also disconnect users and/or close files.

    Viewing local users and groups using Computer Management

  • Services

    You can view the SMF services running on an OpenSolaris system.  This is a read-only view - we don't yet support service management (the ability to start or stop SMF services) from Computer Management.

    Viewing SMF services using Computer Management

To ensure that only the appropriate users have access to administrative operations there are some access restrictions on these remote management features.

  • Regular users can:
    • List shares

  • Only members of the Administrators or Power Users groups can:
    • Manage shares
    • List connections

  • Only members of the Administrators group can:
    • List open files and close files
    • Disconnect users
    • View SMF services
    • View the EventLog

Here's a screenshot taken while I was using Computer Management and Server Manager (another Windows remote management application) on Windows XP to view some open files on an OpenSolaris system, in preparation for a slide presentation on MMC support.

Computer Management and Server Manager

Using Windows Previous Versions to access ZFS Snapshots (July 14, 2009)

The Previous Versions tab on the Windows desktop provides a straightforward, intuitive way for users to view or recover files from ZFS snapshots.  ZFS snapshots are read-only, point-in-time instances of a ZFS dataset, based on the same copy-on-write transactional model used throughout ZFS.  ZFS snapshots can be used to recover deleted files or previous versions of files and they are space efficient because unchanged data is shared between the file system and its snapshots.  Snapshots are available locally via the .zfs/snapshot directory and remotely via Previous Versions on the Windows desktop.

Shadow Copies for Shared Folders was introduced with Windows Server 2003 but subsequently renamed to Previous Versions with the release of Windows Vista and Windows Server 2008.  Windows shadow copies, or snapshots, are based on the Volume Snapshot Service (VSS) and, as the [Shared Folders part of the] name implies, are accessible to clients via SMB shares, which is good news when using the Solaris CIFS Service.  And the nice thing is that no additional configuration is required - it "just works".

On Windows clients, snapshots are accessible via the Previous Versions tab in Windows Explorer using the Shadow Copy client, which is available by default on Windows XP SP2 and later.  For Windows 2000 and pre-SP2 Windows XP, the client software is available for download from Microsoft: Shadow Copies for Shared Folders Client.

Assuming that we already have a shared ZFS dataset, we can create ZFS snapshots and view them from a Windows client.

zfs snapshot tank/home/administrator@snap101
zfs snapshot tank/home/administrator@snap102

To view the snapshots on Windows, map the dataset on the client, then right-click on a folder or file and select Previous Versions.  Note that Windows will only display previous versions of objects that differ from the originals, so you may have to modify files after creating a snapshot in order to see previous versions of those files.
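
For example, assuming a hypothetical file report.txt that already existed when the snapshots above were taken, modifying it now would make the snapshot copies show up as previous versions.  The snapshots themselves can be listed with zfs list; a minimal sketch:

        echo "more data" >> /home/administrator/report.txt
        zfs list -t snapshot -r tank/home/administrator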

The screenshot above shows various snapshots in the Previous Versions window, created at different times.  On the left panel, the .zfs folder is visible, illustrating that this is a ZFS share.  The .zfs setting can be toggled as desired; it makes no difference when using Previous Versions.  To make the .zfs folder visible:

zfs set snapdir=visible tank/home/administrator

To hide the .zfs folder:

zfs set snapdir=hidden tank/home/administrator

The following screenshot shows the Previous Versions panel when a file has been selected.  In this case the user is prompted to view, copy or restore the file from one of the available snapshots.


As can be seen from the screenshots above, the Previous Versions window doesn't display snapshot names: snapshots are listed by snapshot creation time, sorted in time order from most recent to oldest.  There's nothing we can do about this; it's the way that the interface works.  Perhaps one point of note, to avoid confusion, is that the ZFS snapshot creation time is not the same as the root directory creation timestamp.  In ZFS, all object attributes in the original dataset are preserved when a snapshot is taken, including the creation time of the root directory.  Thus the root directory creation timestamp is the time that the directory was created in the original dataset.

# ls -d% all /home/administrator
         timestamp: atime         Mar 19 15:40:23 2009
         timestamp: ctime         Mar 19 15:40:58 2009
         timestamp: mtime         Mar 19 15:40:58 2009
         timestamp: crtime         Mar 19 15:18:34 2009

# ls -d% all /home/administrator/.zfs/snapshot/snap101
         timestamp: atime         Mar 19 15:40:23 2009
         timestamp: ctime         Mar 19 15:40:58 2009
         timestamp: mtime         Mar 19 15:40:58 2009
         timestamp: crtime         Mar 19 15:18:34 2009

The snapshot creation time can be obtained using the zfs command as shown below.

# zfs get all tank/home/administrator@snap101
NAME                             PROPERTY  VALUE
tank/home/administrator@snap101  type      snapshot
tank/home/administrator@snap101  creation  Mon Mar 23 18:21 2009

In this example, the dataset was created on March 19th and the snapshot was created on March 23rd.
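
If you just want the creation time, in a script for instance, the -H and -o options can be used to print the value on its own; a minimal sketch:

        zfs get -H -o value creation tank/home/administrator@snap101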

In conclusion, Shadow Copies for Shared Folders provides a straightforward way for users to view or recover files from ZFS snapshots.  The Windows desktop provides an easy to use, intuitive GUI and no configuration is required to use or access previous versions of files or folders.

References for more information

ZFS

ZFS Learning Center

Introduction to Shadow Copies of Shared Folders

Shadow Copies for Shared Folders Client

Thursday Jun 16, 2011

Client-Side Caching for Offline Files (January 9, 2009)

With the putback of PSARC 2008/758, the CIFS service provides per-share configuration options to support client-side caching (csc) for offline files, which can be set using sharemgr or zfs.  Note that client-side caching and offline files are managed entirely by clients; the options discussed here provide per-share policy advice to clients.  The csc options are shown below.

  • manual
    Clients are allowed to cache files from the specified share for offline use as requested by users but automatic file-by-file reintegration is not allowed.  This is the default setting.


  • auto
    Clients are allowed to automatically cache files from the specified share for offline use and file-by-file reintegration is allowed.


  • vdo
    Clients are allowed to automatically cache files from the specified share for offline use, file-by-file reintegration is allowed and clients are permitted to work from their local cache even while offline.


  • disabled
    Client-side caching is disabled for this share.

First, we need to create a file system and share it.  For use with SMB/CIFS, it's best to create a mixed-mode ZFS file system.  If you have both NFS and SMB clients using a mixture of different character sets on the same file system, you may also want to set utf8only and consider the charset=<access-list> NFS share property that Doug described in a recent blog entry.

        zfs create -o casesensitivity=mixed -o utf8only=on tank/zvol
        zfs set sharesmb=name=zvol tank/zvol

        sharemgr show -vp
        default nfs=()
        zfs
            zfs/tank/zvol smb=()
                zvol=/tank/zvol

The default, when no specific csc option has been set, is equivalent to csc=manual.

Since this share is a member of the zfs group, share options such as client-side caching must be set using the zfs command.  In the following example the csc option is set to auto.

        zfs set sharesmb=name=zvol,csc=auto tank/zvol

        sharemgr show -vp
        default nfs=()
        zfs
            zfs/tank/zvol          smb=(csc="auto")
                zvol=/tank/zvol

Note: The zfs command interprets sharesmb as a property with a single value, even though that value may contain a list of share properties.  When using the zfs command to set share options, all desired share options must be set each time the property is set or modified - as illustrated above.  Failure to do this may result in share properties being unset.  For example, after the two commands below, the share name would be tank_zvol rather than zvol.

zfs set sharesmb=name=zvol tank/zvol
zfs set sharesmb=csc=auto tank/zvol    <-- resets the share name to tank_zvol

Alternatively, we can use sharemgr's default group or create our own group.  In this example, we add a share to zgroup and disable client-side caching for that share.

        sharemgr create -P smb zgroup
        sharemgr add-share -r zvol2 -s /tank/zvol2 zgroup
        sharemgr set -P smb -p csc=disabled -r zvol2 zgroup

        sharemgr show -vp
        default nfs=()
        zfs
            zfs/tank/zvol          smb=(csc="auto")
                zvol=/tank/zvol
        zgroup smb=()
            /tank/zvol2
                zvol2=/tank/zvol2  smb=(csc="disabled")

Note that, as illustrated above, the csc setting is per-share.

The csc options are also valid for autohome shares in the smbautohome map.  As with zfs sharesmb, multiple options can be specified as a comma separated list.  For example,

        *      /export/home/&   csc=disabled,description=&
        john   /export/home/&   csc=auto,dn=sun,dn=com,ou=users


CIFS Service Documentation, Troubleshooting and Diagnostics (December 4, 2008)

One of our design goals was to make the Solaris CIFS service easy to use, but operating system interoperability can be a bewilderingly large subject area and it's always useful to have references on installation, configuration, troubleshooting and gathering diagnostic information.  If you're looking for information, check out the CIFS Service project pages:

http://www.opensolaris.org/os/project/cifs-server
http://opensolaris.org/os/project/cifs-server/docs

  • Getting Started Guide

  • Administration Guide

  • Troubleshooting Guide

If you suspect you have a configuration problem, try the cifs-chkcfg script:

http://opensolaris.org/os/project/cifs-server/files/cifs-chkcfg

If you'd like to discuss a configuration or operational problem, it's highly likely that we'll ask for details of your setup, much of which can be obtained using the cifs-gendiag script:

http://opensolaris.org/os/project/cifs-server/files/cifs-gendiag

cifs-gendiag obtains information from the following sources:

  • sharemgr show -vp

  • sharectl get smb

  • smbadm list

  • zfs get all

  • /etc/krb5/krb5.conf

  • /etc/pam.conf

  • /etc/resolv.conf
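
If you'd rather collect the same information by hand, or can't run the script, a rough equivalent (run as root, writing to a hypothetical output file) would be:

        (sharemgr show -vp; sharectl get smb; smbadm list; zfs get all; \
         cat /etc/krb5/krb5.conf /etc/pam.conf /etc/resolv.conf) > /tmp/cifs-diag.out 2>&1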

Environment information (client OS, version and service packs) and network captures (wireshark, netmon) are also extremely useful in troubleshooting problems.

If it turns out there's a bug or you would like to search for existing bugs, change requests (CR) or enhancement requests (RFE), our bugster categories on http://bugs.opensolaris.org/ are:

  • solaris->kernel->cifs
    - smbsrv kernel module or SMB/CIFS protocol issues

  • solaris->utility->cifs
     - CIFS Service utility programs, libraries, NDR RPC

OpenSolaris CIFS Service at the Storage Developer Conference (November 7, 2008)

The native OpenSolaris CIFS Service has been out there for about a year and things seem to be going well. We have an active community on OpenSolaris.org, with discussion on cifs-discuss, and the recent SNIA Storage Developer Conference (SDC) seemed like a good opportunity to provide an update on the project.

The SDC is an excellent forum for information exchange: well organized, good presentations and an active CIFS interoperability plugfest.  I recommend it if you're looking for somewhere to hear the latest on SMB/CIFS interoperability.  The OpenSolaris CIFS Service presentation was an interactive affair and, if you didn't make it to the conference, the presentation video and slides are available online.

CIFS Service Autohome Shares (November 9, 2007)

Since the topic of SMB autohome shares came up ...

SMB autohome shares resulted from a customer request to make managing home directory shares easier. This particular customer had around 2000 users connecting to home directories on a server and the actual request was for help in scripting a management interface. Automatic sharing turned out to be a better solution.

The SMB autohome map provides a means to automatically share a directory when a user connects and unshare it when the user disconnects. SMB autohome shares are typically used to share home directories, in which case the share is filtered when viewed via CIFS so that it is only visible to the user whose username matches the share name. By default, the SMB autohome map is /etc/smbautohome, with a syntax that is similar to that used with the automounter, although the services are not related.

A map entry takes the form shown below, where key is a username, location is the fully qualified path for the user's home directory and container is an optional Active Directory Service (ADS) container.

  • key location [container]

As with regular shares, autohome shares can be published in Active Directory. The ADS container is specified as a comma-separated list of attribute=value pairs using LDAP distinguished name (DN) or relative distinguished name (RDN) format. The DN or RDN must be specified in LDAP format using the ou=, cn= and dc= prefixes as indicated below:
  • cn=common name
  • ou=organizational unit
  • dc=domain component

cn=, ou= and dc= are attribute types. The attribute type used to describe an object's RDN is called the naming attribute. For ADS, the naming attributes include the following object classes:
  • cn for the user object class
  • ou for the organizational unit (OU)
  • dc for the domainDns object class
Map Key Substitution

The location field contains a directory path, with the ampersand (&) and question mark (?) characters acting as substitution characters to simplify map entries. Ampersands are expanded to the value of the key and question marks are expanded to the first character of the key. In the following example, the path would be expanded to /home/jj/jane.
  • jane /home/??/&
Wildcard Key

An asterisk (*) can be used as the key, which is recognized as the catch-all entry. Such an entry will match any key not previously matched. For example, the following entry would map any user to a home directory in /home in which the home directory name was the same as the username.
  • * /home/&
Note that the wildcard rule will only be applied if an appropriate rule cannot be found in any other map entry.

NSSwitch Map

The nsswitch special map can be used to request that the home directory be obtained from a name service passwd database. An ADS container can be appended, which will be used to publish shares.
  • +nsswitch [container]

The nsswitch map will only be searched if an appropriate rule cannot be found in any other map entry, including the wildcard rule, which means that the wildcard and nsswitch rules are mutually exclusive; an nsswitch rule will have no effect if a wildcard rule has been defined.
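
Putting these pieces together, a hypothetical /etc/smbautohome map might combine an explicit entry with a wildcard catch-all; the paths and names below are illustrative only, and the optional ADS container has been omitted:
  • jane /export/home/special/&
  • *    /export/home/&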

CIFS ... in Solaris (November 2, 2007)

CIFS ... in Solaris. That's an interesting concept; one that has evoked a variety of emotions within Sun.

I was first asked about adding an in-kernel CIFS service to Solaris about 4 years ago: "Yes, it's possible but it's probably a lot more intrusive than you think. Are you sure you want to do that?"

We had been working on an independent CIFS implementation for several years, and we would use this as the basis for the Solaris CIFS project, but it would take time for everything necessary to fall into place: there is a big difference between what management at Sun would like to happen and what the engineers at Sun will endorse. It took a while to get the necessary support and it would be another two years before the Solaris CIFS server project became a reality.

Although I didn't know it at the time, this all started about 16 years ago when I was working on multi-OS, distributed, transaction processing systems (I seemed to be forever porting ONC RPC to yet another OS) and I came across something called OSF DCE, which was a distributed computing environment that used DCE RPC for client-server communication, X.500 name services and Kerberos security. We built a prototype using DCE across various different UNIX operating systems but decided to use a different technology to create products. People were talking about using DCE for CORBA (object request broker) servers, and we'd used a DCE based transaction processing monitor, but I didn't think DCE would catch on and I thought I'd never hear about it again - I had no idea.

About 10 years ago, I was asked to help design and implement a new transaction oriented, journalling file system (not ZFS) for a small NAS company. We started working on it but a large OEM deal came along and the file system was put aside to productize what would eventually become the StorageTek 5320 NAS. There were many things to take care of but one of the main problems was the lack of a comprehensive CIFS service. No-one on the team had much experience with Windows or CIFS so I took it on, and it didn't take long to realize that I'd run straight back into DCE. CIFS had evolved: MSRPC is essentially DCE RPC and Active Directory is based on LDAP and Kerberos. Oh Joy!

And 10 years on ...

Many people assumed or desired that the Solaris CIFS server project would be like Samba but what would that achieve? Sure, it would avoid breaking any eggs, i.e. avoid making substantial changes to Solaris, but Samba is available on Solaris today. There is no point in creating another Samba. If you truly want an integrated CIFS implementation, that can really inter-operate with Windows at a fundamental level, the operating system has to support certain core features. Eggs will have to be broken.

Not surprisingly, the most contentious topic seemed to be the possibility that someone might introduce case-insensitivity into a Solaris file system. This was anathema to many, which led to many repetitive discussions, and was an interesting distraction because it wasn't the thing that really concerned me. Case-insensitivity is important for transparent interoperability with Windows, as is ensuring that file names can be shared by applications and protocols using disparate character encoding schemes, but I could visualize solutions for those things - solutions that would not compromise POSIX conformance. My real concern was how to support integrated file access control within the file system given the mismatch between UNIX and Windows credentials. The idea of non-unique UIDs and GIDs is so pervasive in UNIX that I wasn't sure it would be possible to effect the level of change necessary to achieve true CIFS integration. CIFS access control relies on access tokens (Windows credentials) and security descriptors, in which users and groups are represented by globally unique, variable length security identifiers (SID). NFSv4 and ZFS offer partial compatibility through NFSv4 access control lists (ACL) but those ACLs and Solaris credentials are still founded on UIDs and GIDs, which are fixed size and non-unique across NIS domains.

Afshin, a senior member of the CIFS project team, and I spent a lot of time working on white boards and came to the conclusion that we couldn't get round the credential issue. So, in December 2006 - the same month that I submitted the CIFS Service project proposal (PSARC case 2006/715) - Afshin wrote and distributed a white paper titled "CIFS/NFSv4 Unified Authorization", which proposed a unified access control model for CIFS, NFSv4, local users and ZFS, and introduced the concept of the FUID that is now used in ZFS. This generated some good discussion but the mountain remained: how to make such a huge change a reality.

We continued working on other things, there was no shortage of other things, but it was difficult to see a light at the end of the tunnel until February 2007, when a happenstance discussion over coffee with Jeff Bonwick and Mike Shapiro, both Sun Distinguished Engineers who need no introduction here, changed the future of the CIFS project. They asked how things were going and I explained the magnitude of the change I wanted to make. A short discussion and white board session later we had consensus and Mike said, "Let me do some reading this weekend and I'll write something up." That write-up became PSARC case 2007/064 (Unified POSIX and Windows Credentials for Solaris) and we had the way forward for the Solaris CIFS service.

We already had the basic CIFS service building on Solaris but it took another 8 months, 22 more ARC cases, a lot of helping hands and many late nights to deliver the project. On October 25th, 2007, the CIFS service project putback over 800 files, approximately 370,000 lines of code (including 180,000 lines of new code) to the Solaris operating system.

It's a large, complex project (several years in the making, including a very intense final year), which incorporates some fundamental changes to Solaris.

So what is CIFS, why add support to Solaris and what did we change?

The Common Internet File System (CIFS), also known as SMB, is the standard for Windows file sharing services, and one of the primary goals for Solaris is to continue to improve and enhance it as a storage operating system and platform. Adding CIFS, to provide seamless, ubiquitous file sharing for both CIFS (Windows, MacOS etc) clients and NFS clients, is a major step towards achieving this goal. Together with the CIFS client, which is also an OpenSolaris project, the CIFS server helps provide comprehensive, integrated, native Windows interoperability on Solaris.

What does this mean for Samba on Solaris? Not a lot really. Samba is a good, multi-platform application service that provides file and print service for Windows and CIFS clients. It is a portable, user space application, and is actively supported on Solaris. The Solaris CIFS service is a native kernel implementation; a first-class citizen of the Solaris operating system that has been integrated with NFS, ZFS and many OS feature enhancements to provide seamless, ubiquitous, cross-protocol file sharing.

There is a common misconception that Windows interoperability is just a case of implementing file transfer using the CIFS protocol. Unfortunately, that doesn't get you very far. Windows interoperability also requires that a server support various Windows services, typically MSRPC services, and it is very sensitive to the way that those services behave: Windows interoperability requires that a CIFS server convince a Windows client or server that it "is Windows". This is really only possible if the operating system supports those services at a fundamental level.

In addition to the CIFS/SMB and MSRPC protocols and services:

  • We added support for SIDs to Solaris credentials. This solved the centralized access control problem: CIFS can specify users in terms of SIDs and ZFS can perform native file system access control using that information.

  • There are various VFS updates and enhancements to support new attributes, share reservations and mandatory locking. As with the credential change, this was also a significant effort, which affected the interface to every file system in Solaris.

  • ZFS enhancements include:
    • Support for DOS attributes (archive, hidden, read-only and system)
    • Case-insensitive file name operations.
      There are three modes: case-sensitive, case-insensitive and mixed.
    • Support for ubiquitous cross-protocol file sharing through an option to ensure UTF8-only name encoding.
    • Atomic ACL-on-create semantics.
    • Enhanced ACL support for compatibility with Windows.
    • sharesmb, which is similar to sharenfs.

  • One of our project goals was to minimize the number of new commands being introduced. To this end, sharemgr(1M) and sharectl(1M), which are used to manage NFS shares and NFS configuration, have been enhanced to support CIFS. Sharemgr now supports named shares and directories can be shared multiple times. Sharectl has been enhanced to support CIFS configuration.

  • Various file system utilities have been, or are being, updated to support the new attributes and features: chmod, (un)compress, cp, cpio, ls, (un)pack, pax and tar. Most of these file system commands have already been modified to accommodate the changes. The pax and cpio commands have not yet been modified and will simply ignore the new attributes.

  • A new privilege, sys_smb, has been introduced to restrict access to the NetBIOS and SMB ports, which is similar to sys_nfs. The sys_smb privilege is required to bind to ports 137, 138, 139 and/or 445. For those running Samba as root, this should make no difference. Otherwise, it may require that the process privilege set be modified, using ppriv, to run the process with sys_smb. The privileges(5) man page is being updated to include PRIV_SYS_SMB.
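
For example, using ppriv from a shell that has the privilege available, a daemon could be launched with sys_smb added to its privilege sets; this is only a sketch and the binary path below (smbserver) is hypothetical:

        ppriv -e -s A+sys_smb /usr/local/sbin/smbserver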

The initial integration of the Solaris CIFS service supports both workgroup and domain modes, and provides a comprehensive CIFS file sharing service implementation. For the highest level of CIFS interoperability, use ZFS in mixed case-sensitivity mode. This will retain the expected behavior for local and NFS access whilst maximizing interoperability with Windows, MacOS and other CIFS clients. Note that the case-sensitivity mode of a ZFS file system must be set at creation time and cannot be changed after creation.

A preliminary draft of System Administration Guide: Windows Interoperability for the Solaris OS will be available soon on the Solaris CIFS server project page at opensolaris.org. The following man pages are also being prepared.
  • sharectl(1M)
  • sharemgr(1M)
  • smbadm(1M)
  • smbd(1M)
  • smbstat(1M)
  • smb(4)
  • smbautohome(4)
It has been a long and interesting journey, which is not yet complete. Interesting is always a good euphemism when working on CIFS - a sense of humor is essential. This is an active project, a work-in-progress, and there's still much more to do - after we get some sleep.
About

SMB BLOG - originally published by amw
