NFS is a popular network filesystem that allows multiple clients to mount the same filesystem. The Linux kernel provides a powerful NFS server, and as with most Linux NFS deployments it runs in kernel space. This article covers how to migrate an NFS server from kernel space to userspace, based on GlusterFS and nfs-ganesha.
Assume an NFS server is hosted on node1. All NFS files are on an XFS partition mounted on /mnt/nfs. All files need to be migrated from the NFS server to GlusterFS without copying, and the GlusterFS volume has 3 replicas. The NFS service is still hosted on node1, but is provided by the nfs-ganesha server. After the migration, NFS clients can access the migrated files through the nfs-ganesha mount target. This article focuses on the file migration; nfs-ganesha HA is out of scope.
Test Environment:
The NFS server is hosted on node1:
[root@ol78-1 ~]# systemctl status nfs-server
nfs-server.service - NFS server and services
   Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; disabled; vendor preset: disabled)
  Drop-In: /run/systemd/generator/nfs-server.service.d
           order-with-mounts.conf
   Active: active (exited)
  Process: 5463 ExecStopPost=/usr/sbin/exportfs -f (code=exited, status=0/SUCCESS)
  Process: 5460 ExecStopPost=/usr/sbin/exportfs -au (code=exited, status=0/SUCCESS)
  Process: 5458 ExecStop=/usr/sbin/rpc.nfsd 0 (code=exited, status=0/SUCCESS)
  Process: 5499 ExecStartPost=/bin/sh -c if systemctl -q is-active gssproxy; then systemctl restart gssproxy ; fi (code=exited, status=0/SUCCESS)
  Process: 5481 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS (code=exited, status=0/SUCCESS)
  Process: 5479 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
 Main PID: 5481 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/nfs-server.service

[root@ol78-1 ~]# cat /etc/exports
/mnt/nfs *(rw,no_root_squash)

[root@ol78-1 ~]# exportfs
/mnt/nfs        <world>
The NFS client is on node4:
[root@ol78-4 ~]# showmount -e ol78-1
Export list for ol78-1:
/mnt/nfs *

[root@ol78-4 ~]# mount -t nfs ol78-1:/mnt/nfs /mnt/nfs/
[root@ol78-4 ~]# ls /mnt/nfs/
glusterfs-5.6-1.el7.x86_64.rpm                 glusterfs-devel-5.6-1.el7.x86_64.rpm            glusterfs-rdma-5.6-1.el7.x86_64.rpm
glusterfs-api-5.6-1.el7.x86_64.rpm             glusterfs-events-5.6-1.el7.x86_64.rpm           glusterfs-resource-agents-5.6-1.el7.noarch.rpm
glusterfs-api-devel-5.6-1.el7.x86_64.rpm       glusterfs-extra-xlators-5.6-1.el7.x86_64.rpm    glusterfs-server-5.6-1.el7.x86_64.rpm
glusterfs-cli-5.6-1.el7.x86_64.rpm             glusterfs-fuse-5.6-1.el7.x86_64.rpm             python2-gluster-5.6-1.el7.x86_64.rpm
glusterfs-client-xlators-5.6-1.el7.x86_64.rpm  glusterfs-geo-replication-5.6-1.el7.x86_64.rpm  glusterfs-cloudsync-plugins-5.6-1.el7.x86_64.rpm
glusterfs-libs-5.6-1.el7.x86_64.rpm

[root@ol78-4 ~]# cat /proc/mounts |grep nfs
ol78-1:/mnt/nfs /mnt/nfs nfs4 rw,relatime,vers=4.1,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.1.64,local_lock=none,addr=192.168.1.61 0 0
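Optionally, record the current file list from the client before the migration so it can be compared afterwards. This is a minimal sketch; the path /tmp/nfs-files-before.txt is just an example:

# On the client, save a sorted file list while the kernel NFS export is still mounted
[root@ol78-4 ~]# ls /mnt/nfs/ | sort > /tmp/nfs-files-before.txt
# After the migration, remount and diff against the saved list; empty output means no file is missing
[root@ol78-4 ~]# ls /mnt/nfs/ | sort | diff /tmp/nfs-files-before.txt -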
Install GlusterFS and nfs-ganesha packages.
Add a new yum repository file gluster.repo to the directory /etc/yum.repos.d with the following content:
[root@ol78-1,2,3]# cat gluster.repo
[Gluster5]
name=Gluster 5 on Oracle Linux $releasever ($basearch)
baseurl=https://yum$ociregion.oracle.com/repo/OracleLinux/OL7/gluster5/$basearch/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=1
enabled=1
Install the packages on all GlusterFS server nodes (node1, node2 and node3):
[root@ol78-1,2,3]# yum install nfs-ganesha-gluster glusterfs-server
Repository ol7_latest is listed more than once in the configuration
Resolving Dependencies
--> Running transaction check
---> Package glusterfs-server.x86_64 0:5.9-1.el7 will be installed
---> Package nfs-ganesha-gluster.x86_64 0:2.7.1-1.el7 will be installed
--> Running transaction check
---> Package glusterfs.x86_64 0:5.9-1.el7 will be installed
---> Package glusterfs-api.x86_64 0:5.9-1.el7 will be installed
---> Package glusterfs-cli.x86_64 0:5.9-1.el7 will be installed
---> Package glusterfs-client-xlators.x86_64 0:5.9-1.el7 will be installed
---> Package glusterfs-fuse.x86_64 0:5.9-1.el7 will be installed
---> Package glusterfs-libs.x86_64 0:5.9-1.el7 will be installed
---> Package nfs-ganesha.x86_64 0:2.7.1-1.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

========================================================================
 Package                    Arch     Version       Repository     Size
========================================================================
Installing:
 glusterfs-server           x86_64   5.9-1.el7     Gluster5      1.4 M
 nfs-ganesha-gluster        x86_64   2.7.1-1.el7   Gluster5       38 k
Installing for dependencies:
 glusterfs                  x86_64   5.9-1.el7     Gluster5      640 k
 glusterfs-api              x86_64   5.9-1.el7     Gluster5       80 k
 glusterfs-cli              x86_64   5.9-1.el7     Gluster5      176 k
 glusterfs-client-xlators   x86_64   5.9-1.el7     Gluster5      963 k
 glusterfs-fuse             x86_64   5.9-1.el7     Gluster5      120 k
 glusterfs-libs             x86_64   5.9-1.el7     Gluster5      388 k
 nfs-ganesha                x86_64   2.7.1-1.el7   Gluster5      683 k

Transaction Summary
========================================================================
Install  2 Packages (+7 Dependent packages)

Total download size: 4.4 M
Installed size: 17 M
Is this ok [y/d/N]: y
Downloading packages:
(1/9): glusterfs-5.9-1.el7.x86_64.rpm                 | 640 kB  00:00:00
(2/9): glusterfs-cli-5.9-1.el7.x86_64.rpm             | 176 kB  00:00:00
(3/9): glusterfs-api-5.9-1.el7.x86_64.rpm             |  80 kB  00:00:00
(4/9): glusterfs-fuse-5.9-1.el7.x86_64.rpm            | 120 kB  00:00:00
(5/9): glusterfs-client-xlators-5.9-1.el7.x86_64.rpm  | 963 kB  00:00:00
(6/9): glusterfs-libs-5.9-1.el7.x86_64.rpm            | 388 kB  00:00:00
(7/9): nfs-ganesha-2.7.1-1.el7.x86_64.rpm             | 683 kB  00:00:00
(8/9): glusterfs-server-5.9-1.el7.x86_64.rpm          | 1.4 MB  00:00:00
(9/9): nfs-ganesha-gluster-2.7.1-1.el7.x86_64.rpm     |  38 kB  00:00:01
------------------------------------------------------------------------
Total                                      1.8 MB/s | 4.4 MB  00:00:02
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Warning: RPMDB altered outside of yum.
  Installing : glusterfs-libs-5.9-1.el7.x86_64                      1/9
  Installing : glusterfs-5.9-1.el7.x86_64                           2/9
  Installing : glusterfs-client-xlators-5.9-1.el7.x86_64            3/9
  Installing : glusterfs-api-5.9-1.el7.x86_64                       4/9
  Installing : glusterfs-fuse-5.9-1.el7.x86_64                      5/9
  Installing : glusterfs-cli-5.9-1.el7.x86_64                       6/9
  Installing : nfs-ganesha-2.7.1-1.el7.x86_64                       7/9
  Installing : nfs-ganesha-gluster-2.7.1-1.el7.x86_64               8/9
  Installing : glusterfs-server-5.9-1.el7.x86_64                    9/9
  Verifying  : glusterfs-api-5.9-1.el7.x86_64                       1/9
  Verifying  : glusterfs-5.9-1.el7.x86_64                           2/9
  Verifying  : nfs-ganesha-gluster-2.7.1-1.el7.x86_64               3/9
  Verifying  : glusterfs-libs-5.9-1.el7.x86_64                      4/9
  Verifying  : glusterfs-cli-5.9-1.el7.x86_64                       5/9
  Verifying  : glusterfs-server-5.9-1.el7.x86_64                    6/9
  Verifying  : glusterfs-client-xlators-5.9-1.el7.x86_64            7/9
  Verifying  : nfs-ganesha-2.7.1-1.el7.x86_64                       8/9
  Verifying  : glusterfs-fuse-5.9-1.el7.x86_64                      9/9

Installed:
  glusterfs-server.x86_64 0:5.9-1.el7
  nfs-ganesha-gluster.x86_64 0:2.7.1-1.el7

Dependency Installed:
  glusterfs.x86_64 0:5.9-1.el7                glusterfs-api.x86_64 0:5.9-1.el7
  glusterfs-cli.x86_64 0:5.9-1.el7            glusterfs-client-xlators.x86_64 0:5.9-1.el7
  glusterfs-fuse.x86_64 0:5.9-1.el7           glusterfs-libs.x86_64 0:5.9-1.el7
  nfs-ganesha.x86_64 0:2.7.1-1.el7

Complete!
Start the glusterd service on node1, node2 and node3, and create the trusted pool:
[root@ol78-1,2,3]# systemctl start glusterd
[root@ol78-1,2,3]# systemctl status glusterd
glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; disabled; vendor preset: disabled)
   Active: active (running)

[root@ol78-1 ~]# gluster peer probe ol78-2
peer probe: success.
[root@ol78-1 ~]# gluster peer probe ol78-3
peer probe: success.
[root@ol78-1 ~]# gluster peer status
Number of Peers: 2

Hostname: ol78-2
Uuid: 6be94497-1a07-4968-bac8-178346c122df
State: Peer in Cluster (Connected)

Hostname: ol78-3
Uuid: 53ca268e-3780-4f86-b3d1-ace325a0fbd3
State: Peer in Cluster (Connected)
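The trusted pool can also be double-checked from any node with gluster pool list, which, unlike gluster peer status, includes the local node:

# List every node in the trusted pool, including localhost
[root@ol78-1 ~]# gluster pool list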
Unmount NFS from the client node. If multiple NFS clients have mounted the export, make sure to unmount it from all of them:
[root@ol78-4 ~]# umount /mnt/nfs/
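If it is unclear whether other clients still have the export mounted, a quick best-effort check on the server is to look for established TCP connections to the NFS port 2049 (a sketch, not an authoritative list of NFS clients; NFSv3 over UDP would not show up here):

# On the NFS server, list established TCP connections from the NFS port
[root@ol78-1 ~]# ss -tn state established '( sport = :2049 )'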
Stop NFS service on node1:
[root@ol78-1 ~]# systemctl stop nfs-server
[root@ol78-1 ~]# systemctl disable nfs-server
[root@ol78-1 ~]# exportfs
[root@ol78-1 ~]# showmount -e localhost
clnt_create: RPC: Program not registered
All GlusterFS brick paths are /data/gnfs. To facilitate the migration, unmount the XFS partition of the NFS server from /mnt/nfs and remount it to /data/gnfs on node1. Then create a GlusterFS volume with brick path /data/gnfs. GlusterFS creates metadata for each file in the brick path, so all NFS files are automatically migrated into GlusterFS.
[root@ol78-1 ~]# umount /mnt/nfs
[root@ol78-1 ~]# mount -t xfs /dev/sdb1 /data/gnfs
[root@ol78-1 ~]# gluster volume create gnfs ol78-1:/data/gnfs force
volume create: gnfs: success: please start the volume to access data
[root@ol78-1 ~]# gluster volume start gnfs
volume start: gnfs: success
[root@ol78-1 ~]# gluster volume info gnfs

Volume Name: gnfs
Type: Distribute
Volume ID: 8df05a50-fb0d-47a4-81d6-55e95c7d9fc3
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: ol78-1:/data/gnfs
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
[root@ol78-1 ~]# gluster volume status
Status of volume: gnfs
Gluster process                       TCP Port  RDMA Port  Online  Pid
-----------------------------------------------------------------------
Brick ol78-1:/data/gnfs               49152     0          Y       13077

Task Status of Volume gnfs
-----------------------------------------------------------------------
There are no active volume tasks
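Note that the mount command above does not persist across reboots. Assuming the brick device is /dev/sdb1 as in this setup, an /etc/fstab entry along these lines keeps the brick mounted (a sketch; a UUID-based entry is more robust than a device name):

# /etc/fstab entry for the GlusterFS brick (example; prefer UUID=... over /dev/sdb1)
/dev/sdb1  /data/gnfs  xfs  defaults  0 0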
Add bricks one by one from node2 and node3. After a new brick is added, the NFS files are automatically synced to the new brick.
Add new brick on node2 to the GlusterFS volume:
[root@ol78-1 ~]# gluster volume add-brick gnfs replica 2 ol78-2:/data/gnfs force
volume add-brick: success
[root@ol78-1 ~]# gluster volume info

Volume Name: gnfs
Type: Replicate
Volume ID: 8df05a50-fb0d-47a4-81d6-55e95c7d9fc3
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: ol78-1:/data/gnfs
Brick2: ol78-2:/data/gnfs
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
[root@ol78-1 ~]# gluster volume status
Status of volume: gnfs
Gluster process                       TCP Port  RDMA Port  Online  Pid
-----------------------------------------------------------------------
Brick ol78-1:/data/gnfs               49152     0          Y       13077
Brick ol78-2:/data/gnfs               49152     0          Y       27278
Self-heal Daemon on localhost         N/A       N/A        Y       3767
Self-heal Daemon on ol78-2            N/A       N/A        Y       27301
Self-heal Daemon on ol78-3            N/A       N/A        Y       2964

Task Status of Volume gnfs
-----------------------------------------------------------------------
There are no active volume tasks
Check the files on the brick backend, and wait until all files have synced to this brick:
[root@ol78-2 ~]# ls /data/gnfs/
glusterfs-5.6-1.el7.x86_64.rpm                 glusterfs-devel-5.6-1.el7.x86_64.rpm            glusterfs-rdma-5.6-1.el7.x86_64.rpm
glusterfs-api-5.6-1.el7.x86_64.rpm             glusterfs-events-5.6-1.el7.x86_64.rpm           glusterfs-resource-agents-5.6-1.el7.noarch.rpm
glusterfs-api-devel-5.6-1.el7.x86_64.rpm       glusterfs-extra-xlators-5.6-1.el7.x86_64.rpm    glusterfs-server-5.6-1.el7.x86_64.rpm
glusterfs-cli-5.6-1.el7.x86_64.rpm             glusterfs-fuse-5.6-1.el7.x86_64.rpm             python2-gluster-5.6-1.el7.x86_64.rpm
glusterfs-client-xlators-5.6-1.el7.x86_64.rpm  glusterfs-geo-replication-5.6-1.el7.x86_64.rpm  glusterfs-cloudsync-plugins-5.6-1.el7.x86_64.rpm
glusterfs-libs-5.6-1.el7.x86_64.rpm
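Instead of eyeballing the brick contents, the self-heal progress can also be checked with gluster's heal command; when it reports zero entries for each brick, replication has caught up:

# Show files still pending self-heal; an empty list per brick means the new brick is in sync
[root@ol78-1 ~]# gluster volume heal gnfs info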
Add new brick on node3 to the volume:
[root@ol78-1 ~]# gluster volume add-brick gnfs replica 3 ol78-3:/data/gnfs force
volume add-brick: success
[root@ol78-1 ~]# gluster volume info

Volume Name: gnfs
Type: Replicate
Volume ID: 8df05a50-fb0d-47a4-81d6-55e95c7d9fc3
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: ol78-1:/data/gnfs
Brick2: ol78-2:/data/gnfs
Brick3: ol78-3:/data/gnfs
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
[root@ol78-1 ~]# gluster volume status
Status of volume: gnfs
Gluster process                       TCP Port  RDMA Port  Online  Pid
-----------------------------------------------------------------------
Brick ol78-1:/data/gnfs               49152     0          Y       13077
Brick ol78-2:/data/gnfs               49152     0          Y       27278
Brick ol78-3:/data/gnfs               49152     0          Y       3190
Self-heal Daemon on localhost         N/A       N/A        Y       4013
Self-heal Daemon on ol78-2            N/A       N/A        Y       27543
Self-heal Daemon on ol78-3            N/A       N/A        Y       3213

Task Status of Volume gnfs
-----------------------------------------------------------------------
There are no active volume tasks
Wait until all files have synced to this brick:
[root@ol78-3 ~]# ls /data/gnfs/
glusterfs-5.6-1.el7.x86_64.rpm                 glusterfs-devel-5.6-1.el7.x86_64.rpm            glusterfs-rdma-5.6-1.el7.x86_64.rpm
glusterfs-api-5.6-1.el7.x86_64.rpm             glusterfs-events-5.6-1.el7.x86_64.rpm           glusterfs-resource-agents-5.6-1.el7.noarch.rpm
glusterfs-api-devel-5.6-1.el7.x86_64.rpm       glusterfs-extra-xlators-5.6-1.el7.x86_64.rpm    glusterfs-server-5.6-1.el7.x86_64.rpm
glusterfs-cli-5.6-1.el7.x86_64.rpm             glusterfs-fuse-5.6-1.el7.x86_64.rpm             python2-gluster-5.6-1.el7.x86_64.rpm
glusterfs-client-xlators-5.6-1.el7.x86_64.rpm  glusterfs-geo-replication-5.6-1.el7.x86_64.rpm  glusterfs-cloudsync-plugins-5.6-1.el7.x86_64.rpm
glusterfs-libs-5.6-1.el7.x86_64.rpm
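As an extra sanity check, the brick contents can be compared directly on the backend with checksums. This is a rough sketch that assumes ssh access from node1 to ol78-3 and skips GlusterFS's internal .glusterfs directory:

# Per-file checksums of the local brick on node1
[root@ol78-1 ~]# cd /data/gnfs && find . -path ./.glusterfs -prune -o -type f -print0 | sort -z | xargs -0 md5sum > /tmp/brick1.md5
# Same listing from the brick on node3, then compare; no diff output means identical contents
[root@ol78-1 ~]# ssh ol78-3 'cd /data/gnfs && find . -path ./.glusterfs -prune -o -type f -print0 | sort -z | xargs -0 md5sum' > /tmp/brick3.md5
[root@ol78-1 ~]# diff /tmp/brick1.md5 /tmp/brick3.md5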
Create an export config file and modify the ganesha config file, then start the nfs-ganesha service to export the GlusterFS volume:
[root@ol78-1 ~]# mkdir /etc/ganesha/exports
[root@ol78-1 ~]# vim /etc/ganesha/exports/export.gnfs.conf
[root@ol78-1 ~]# cat /etc/ganesha/exports/export.gnfs.conf
EXPORT {
    # Export Id (mandatory, each EXPORT must have a unique Export_Id)
    Export_Id = 9;

    # Exported path (mandatory)
    Path = "/gnfs";

    # Exporting FSAL
    FSAL {
        name = GLUSTER;
        hostname = "localhost";
        volume = "gnfs";
    }

    Access_type = RW;
    Disable_ACL = true;
    Squash = "No_root_squash";

    # Pseudo Path (required for NFSv4)
    Pseudo = "/gnfs";
    Protocols = "3", "4";
    Transports = "UDP", "TCP";
    SecType = "sys";
}
[root@ol78-1 ~]# echo '%include "/etc/ganesha/exports/export.gnfs.conf"' >> /etc/ganesha/ganesha.conf
[root@ol78-1 ~]# systemctl start nfs-ganesha
[root@ol78-1 ~]# systemctl status nfs-ganesha
nfs-ganesha.service - NFS-Ganesha file server
   Loaded: loaded (/usr/lib/systemd/system/nfs-ganesha.service; disabled; vendor preset: disabled)
   Active: active (running)
     Docs: http://github.com/nfs-ganesha/nfs-ganesha/wiki
  Process: 4302 ExecStartPost=/bin/bash -c /usr/bin/sleep 2 && /bin/dbus-send --system --dest=org.ganesha.nfsd --type=method_call /org/ganesha/nfsd/admin org.ganesha.nfsd.admin.init_fds_limit (code=exited, status=0/SUCCESS)
  Process: 4301 ExecStartPost=/bin/bash -c prlimit --pid $MAINPID --nofile=$NOFILE:$NOFILE (code=exited, status=0/SUCCESS)
  Process: 4296 ExecStart=/bin/bash -c ${NUMACTL} ${NUMAOPTS} /usr/bin/ganesha.nfsd ${OPTIONS} ${EPOCH} (code=exited, status=0/SUCCESS)
 Main PID: 4297 (ganesha.nfsd)
   CGroup: /system.slice/nfs-ganesha.service
           └─4297 /usr/bin/ganesha.nfsd -L /var/log/ganesha/ganesha.log -f /etc/ganesha/ganesha.conf -N NIV_EVENT

[root@ol78-1 ~]# showmount -e localhost
Export list for localhost:
/gnfs (everyone)
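The unit files above are still shown as disabled. To make the userspace NFS stack come back after a reboot, the services can be enabled at boot (a simple sketch; an HA setup would manage service startup differently):

# Enable the services at boot on node1; enable glusterd on node2 and node3 as well
[root@ol78-1 ~]# systemctl enable glusterd nfs-ganesha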
Check the kernel NFS service; it should already be stopped:
[root@ol78-1 ~]# systemctl status nfs
nfs-server.service - NFS server and services
   Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; disabled; vendor preset: disabled)
  Drop-In: /run/systemd/generator/nfs-server.service.d
           order-with-mounts.conf
   Active: inactive (dead)
  Process: 5649 ExecStopPost=/usr/sbin/exportfs -f (code=exited, status=0/SUCCESS)
  Process: 5644 ExecStopPost=/usr/sbin/exportfs -au (code=exited, status=0/SUCCESS)
  Process: 5642 ExecStop=/usr/sbin/rpc.nfsd 0 (code=exited, status=0/SUCCESS)
  Process: 5499 ExecStartPost=/bin/sh -c if systemctl -q is-active gssproxy; then systemctl restart gssproxy ; fi (code=exited, status=0/SUCCESS)
  Process: 5481 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS (code=exited, status=0/SUCCESS)
  Process: 5479 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
 Main PID: 5481 (code=exited, status=0/SUCCESS)

[root@ol78-1 ~]# exportfs
Check the new NFS service:
[root@ol78-1 ~]# showmount -e localhost
Export list for localhost:
/gnfs (everyone)
Mount NFS from the client node:
[root@ol78-4 ~]# showmount -e ol78-1
Export list for ol78-1:
/gnfs (everyone)

[root@ol78-4 ~]# mount -t nfs ol78-1:/gnfs /mnt/nfs/
[root@ol78-4 ~]# ls /mnt/nfs/
glusterfs-5.6-1.el7.x86_64.rpm                 glusterfs-devel-5.6-1.el7.x86_64.rpm            glusterfs-rdma-5.6-1.el7.x86_64.rpm
glusterfs-api-5.6-1.el7.x86_64.rpm             glusterfs-events-5.6-1.el7.x86_64.rpm           glusterfs-resource-agents-5.6-1.el7.noarch.rpm
glusterfs-api-devel-5.6-1.el7.x86_64.rpm       glusterfs-extra-xlators-5.6-1.el7.x86_64.rpm    glusterfs-server-5.6-1.el7.x86_64.rpm
glusterfs-cli-5.6-1.el7.x86_64.rpm             glusterfs-fuse-5.6-1.el7.x86_64.rpm             python2-gluster-5.6-1.el7.x86_64.rpm
glusterfs-client-xlators-5.6-1.el7.x86_64.rpm  glusterfs-geo-replication-5.6-1.el7.x86_64.rpm  glusterfs-cloudsync-plugins-5.6-1.el7.x86_64.rpm
glusterfs-libs-5.6-1.el7.x86_64.rpm

[root@ol78-4 ~]# cat /proc/mounts |grep nfs
ol78-1:/gnfs /mnt/nfs nfs4 rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.1.64,local_lock=none,addr=192.168.1.61 0 0
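To make the client mount persistent across reboots, an /etc/fstab entry such as the following can be used (a sketch; mount options depend on the environment):

# /etc/fstab entry on the client; _netdev delays mounting until the network is up
ol78-1:/gnfs  /mnt/nfs  nfs  defaults,_netdev  0 0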
Now, the NFS service is provided by nfs-ganesha and the backend files are on the GlusterFS volume.
An NFS service running in userspace with GlusterFS and nfs-ganesha is more flexible than one running in kernel space: the service is easy to restart, and failover is easier because files are automatically replicated.