NFS (Client) I/O Architecture & Implementation In Solaris



Introduction:
This write-up details the architecture of the Solaris I/O implementation for the NFSv3 client. The document gives insight into the complete life-cycle of an NFS data transaction between client and server. The life-cycle involves several steps of NFS data processing by the kernel:
- receiving an NFS read/write request from a user application,
- processing the data using various kernel frameworks,
- issuing an RPC request to the NFS server,
- receiving the response from the NFS server (on the NFS client), and
- processing the data and finally returning to the user application that initiated the request (a minimal user-space example that exercises these steps follows this list).
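
To make these steps concrete, here is a minimal user-space sketch whose system calls drive the kernel paths listed above. The mount point /mnt/nfs and the file name are hypothetical, and the mapping in the comments assumes the NFSv3 client entry points discussed later in this write-up.

/*
 * Minimal user-space program driving the NFS client life-cycle.
 * The path /mnt/nfs/data.txt is a hypothetical NFSv3 mount.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
	char buf[8192];
	ssize_t n;
	int fd;

	/* open(2): typically LOOKUP/GETATTR RPCs to resolve the path. */
	fd = open("/mnt/nfs/data.txt", O_RDONLY);
	if (fd == -1) {
		perror("open");
		return (1);
	}

	/*
	 * read(2): VOP_READ -> nfs3_read(); satisfied from the page
	 * cache when possible, otherwise via READ RPCs to the server.
	 */
	while ((n = read(fd, buf, sizeof (buf))) > 0)
		(void) fwrite(buf, 1, (size_t)n, stdout);

	/* close(2): may flush dirty pages and revalidate attributes. */
	(void) close(fd);
	return (0);
}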

Both the SYNC and ASYNC frameworks for NFS client data transactions are elaborated; a small user-space sketch contrasting the two follows this paragraph. The idea is not to walk through the entire code base but to get familiar with the design and implementation of the NFS client's read/write process in the multi-threaded kernel environment. The various kernel data structures involved in the client's NFS read/write path are covered. The kernel frameworks used by the NFS client, such as the kernel VFS, paging, VM, and NFS frameworks, are touched upon, though not in depth. This write-up contains examples that explain NFS client read/write behavior in different situations. Client data and attribute caching is covered, briefly explaining the open-to-close consistency implementation.
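
As a preview of the SYNC/ASYNC difference, here is a small sketch (file names hypothetical): a plain write(2) normally returns once the data is in the client's page cache and is pushed to the server later by the async threads, while O_SYNC (or an explicit fsync(2)) forces the data over the wire before the call returns.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
	const char *msg = "hello over NFS\n";
	int fd;

	/*
	 * ASYNC: write(2) returns once the data is cached; dirty
	 * pages are flushed later (or at close) by async threads.
	 */
	fd = open("/mnt/nfs/async.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
	if (fd == -1) {
		perror("open async");
		return (1);
	}
	(void) write(fd, msg, strlen(msg));
	(void) close(fd);

	/*
	 * SYNC: with O_SYNC each write(2) waits for the WRITE RPC
	 * to complete stably on the server before returning.
	 */
	fd = open("/mnt/nfs/sync.txt",
	    O_WRONLY | O_CREAT | O_TRUNC | O_SYNC, 0644);
	if (fd == -1) {
		perror("open sync");
		return (1);
	}
	(void) write(fd, msg, strlen(msg));
	(void) close(fd);
	return (0);
}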
This is helpful in understanding and tackling read/write/caching and related issues associated with the NFS client. The document also serves as a roadmap for the NFSv3 read/write process at the client end. Last but not least, this write-up doesn't cover every detail of the NFS client related to the subject, so there is great scope for anybody interested to add more. The NFSv4 read/write architecture is not very different from v3, except for the delegation feature (serialization of reads and writes) and compound RPC calls, but these have not changed the NFS read/write architecture and design. A comparative study of NFSv3/v4 could be the next step to strengthen this understanding.


Note: Some more modifications have been added to the original write-up to make it more complete. These are related to:
- NFS dirty pages,
- linking of the various kernel and NFS data structures with the help of a schematic diagram.

The complete write-up can be downloaded from:

- PDF format: nfs architecture.

Comments:

That was very nice and self-explanatory, keep up the good job...

Posted by sumesh vasudevan on July 06, 2005 at 11:40 PM PDT #

Hi Sameer, Thanks for the document. It is very helpful. I have a question on the reason for the GETATTR request when closing a file. If the only purpose is to unset the RWRITEATTR flag, why not do that directly in nfs3_close()? As far as I can tell, the benefit of using GETATTR is getting a valid attribute cache while closing a file, but why do we need that? Thanks,

Posted by Raymond Xiong on December 26, 2005 at 06:05 PM PST #

Many thanks, Raymond, for finding the write-up exciting. First of all, I'd like to explain the usage of the RWRITEATTR flag. It is set whenever the client modifies the file. If the client has modified the file and then closes it, it needs to cache the latest attributes from the server at the time of closure. For the same reason, it checks this flag in nfs3_close(); only if the flag is set does it get the latest attributes over the wire by calling nfs3_getattr_otw(). We don't directly reset this flag in nfs3_close() because we need to do a lot of things other than just resetting the flag: the client needs to update its attribute cache by calling nfs_attr_cache(), recalculate the next timeout value for the cached attributes (rp->r_attrtime), cache the latest attributes as rp->r_attr = *va, and finally reset the RWRITEATTR flag to show that we have completed all the formalities associated with the write operations.

Your second question was why we need to get the latest attributes at the time of closure. We can see that we purge caches only if the RWRITEATTR flag is set. This ensures that we need not purge caches in case we have modified the file ourselves. If we didn't make this check, we would end up invalidating our caches after each write, which would hurt performance. So, after we write, we simply update the caches and reset the RWRITEATTR flag at the time of close by calling nfs3_getattr_otw(), making sure everything is updated at the client. Now at the time of open, we once again get fresh attributes from the server; if we see that the attributes have changed at the server (the file was modified by someone else after we closed it), we invalidate our caches and get fresh data and attributes from the server. If we had not gotten fresh attributes from the server at the time of close, we would have lost some data in case the file was modified by some other client after we closed and before we opened, because RWRITEATTR would still be set. I hope this helps in understanding close-to-open consistency; a simplified sketch of this close path follows.
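
To summarize the reply, here is a simplified sketch of the close-time attribute refresh. The helper name nfs3_close_attr_refresh() is hypothetical and the signatures are approximated from the discussion above; this is not the actual Solaris source, and the remaining duties of nfs3_close() (flushing dirty pages, locking, reference counting) are omitted.

/*
 * Hypothetical sketch (not the actual Solaris source) of the
 * close-time attribute refresh described above; assumes Solaris
 * kernel headers and approximated signatures.
 */
static int
nfs3_close_attr_refresh(vnode_t *vp, cred_t *cr)
{
	rnode_t *rp = VTOR(vp);
	struct vattr va;
	int error = 0;

	/* Go over the wire only if this client modified the file. */
	if (rp->r_flags & RWRITEATTR) {
		/*
		 * GETATTR RPC. On success the attribute-caching path
		 * (nfs_attr_cache()) stores rp->r_attr = va, recomputes
		 * the cache timeout rp->r_attrtime, and clears
		 * RWRITEATTR, so our own writes never force a purge of
		 * the data cache.
		 */
		error = nfs3_getattr_otw(vp, &va, cr);
	}
	return (error);
}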

Posted by sameer seth on January 05, 2006 at 12:04 PM PST #

Hi Sameer,

Remember me, Ritu from GMIS. I am in USA teaching as an English Professor.

Warmly,

Posted by Ritu on June 03, 2010 at 01:02 AM PDT #
