- On the Linux system that runs the NFS server, you export (share) one or more directories by listing them in the /etc/exports file and by running the exportfs command. In addition, you must start the NFS server.
- On each client system, you use the mount command to mount the directories that your server exported.
NFS has security vulnerabilities, so you shouldn’t set up NFS on systems that are directly connected to the Internet without using the RPCSEC_GSS security that comes with NFS version 4 (NFSv4). Version 4.2 was released in November 2016; you should use it for most purposes, because it includes all the needed updates.
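If you want to be sure that a client actually uses NFSv4.2, you can request that version explicitly when you mount the share. In this sketch, the server name (nfsserver.example.com), the exported directory (/home), and the mount point (/mnt/nfs) are placeholders:

mount -t nfs -o nfsvers=4.2 nfsserver.example.com:/home /mnt/nfs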
The following information walks you through NFS setup, using an example of two Linux PCs on a LAN.

Exporting a file system with NFS in Linux
Start with the server system that exports — makes available to the client systems — the contents of a directory. On the server, you must run the NFS service and designate one or more file systems to export.

To export a file system, you have to add an appropriate entry to the /etc/exports file. Suppose that you want to export the /home directory, and you want to enable the host named LNBP75 to mount this file system for read and write operations. You can do so by adding the following entry to the /etc/exports file:

/home LNBP75(rw,sync)

If you want to give access to all hosts on a LAN such as 192.168.0.0, you could change this line to

/home 192.168.0.0/24(rw,sync)

Every line in the /etc/exports file has this general format:

Directory host1(options) host2(options) …

The first field is the directory being shared via NFS, followed by one or more fields that specify which hosts can mount that directory remotely and several options in parentheses. You can specify the hosts with names or IP addresses, including ranges of addresses.
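To illustrate the format, a small /etc/exports file with more than one entry, and more than one host per entry, might look like the following; the host names, network, and second directory are hypothetical examples:

/home LNBP75(rw,sync) 192.168.0.0/24(ro,sync)
/srv/projects *.example.com(rw,sync)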
The options in parentheses denote the kind of access each host is granted and how user and group IDs from the server are mapped to IDs on the client. (If a file is owned by root on the server, for example, what owner is that on the client?) Within the parentheses, commas separate the options. If a host is allowed both read and write access, and all IDs are to be mapped to the anonymous user (by default, the anonymous user is named nobody), the options look like this:

(rw,all_squash)

The table below shows the options you can use in the /etc/exports file. You find two types of options: general options and user ID mapping options.
| Option | Description |
| --- | --- |
| **General Options** | |
| secure | Allows connections only from ports below 1024 (default) |
| insecure | Allows connections from any port, including 1024 and higher |
| ro | Allows read-only access (default) |
| rw | Allows both read and write access |
| sync | Performs write operations (writing information to the disk) when requested (default) |
| async | Performs write operations when the server is ready |
| no_wdelay | Performs write operations immediately |
| wdelay | Waits a bit to see whether related write requests arrive and then performs them together (default) |
| hide | Hides an exported directory that’s a subdirectory of another exported directory (default) |
| no_hide | Causes a directory not to be hidden (opposite of hide) |
| subtree_check | Performs subtree checking, which involves checking parent directories of an exported subdirectory whenever a file is accessed (default) |
| no_subtree_check | Turns off subtree checking (opposite of subtree_check) |
| insecure_locks | Allows insecure file locking |
| **User ID Mapping Options** | |
| all_squash | Maps all user IDs and group IDs to the anonymous user |
| no_all_squash | Maps remote user and group IDs to the same IDs on the client (default) |
| root_squash | Maps the remote root user to the anonymous user (default) |
| no_root_squash | Maps the remote root user to the local root user |
| anonuid=UID | Sets the user ID of the anonymous user used for the all_squash and root_squash options |
| anongid=GID | Sets the group ID of the anonymous user used for the all_squash and root_squash options |
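To see how the mapping options combine with the general options, here is a hedged example of an entry that squashes every remote user to one local account; the directory, the network, and the UID and GID values (1500) are made up for this sketch:

/srv/shared 192.168.0.0/24(rw,sync,all_squash,anonuid=1500,anongid=1500)

With an entry like this, any file that a client creates under /srv/shared ends up owned by UID 1500 and GID 1500 on the server, regardless of which remote user created it.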
After you add the entry to the /etc/exports file, manually export the file system by typing the following command in a terminal window:

exportfs -a

This command exports all file systems defined in the /etc/exports file. Now you can start the NFS server processes.
In Debian, start the NFS server by logging in as root and typing /etc/init.d/nfs-kernel-server start in a terminal window. In Fedora, type /etc/init.d/nfs start. In SUSE, type /etc/init.d/nfsserver start. If you want the NFS server to start when the system boots, type update-rc.d nfs-kernel-server defaults in Debian. In Fedora, type chkconfig --level 35 nfs on. In SUSE, type chkconfig --level 35 nfsserver on.
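On newer distributions that boot with systemd rather than the older init scripts, the equivalent commands are typically the following; the service is called nfs-kernel-server on Debian and Ubuntu and nfs-server on Fedora, so substitute the name your distribution uses:

systemctl start nfs-server
systemctl enable nfs-server

The first command starts the NFS server immediately; the second makes it start automatically at boot.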
When the NFS service is up, the server side of NFS is ready. Now you can try to mount the exported file system from a client system and access it as needed.
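Before you mount anything, you can check from a client which directories the server is exporting. Assuming the NFS client utilities are installed and the server (lnbp200 in the upcoming example) is reachable, type:

showmount -e lnbp200

This prints the server’s export list along with the hosts allowed to mount each entry. (On servers that export only over NFSv4, showmount may return nothing, because it relies on the older MOUNT protocol.)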
If you ever make any changes in the exported file systems listed in the /etc/exports file, remember to restart the NFS service. To restart a service, invoke the script in the /etc/init.d directory with restart as the argument (instead of the start argument that you use to start the service).
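If you prefer not to restart the whole service, you can usually apply /etc/exports changes by re-exporting the entries instead:

exportfs -ra

This command rereads /etc/exports and brings the kernel’s list of exported file systems back in sync with it.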
Mounting an NFS file system in Linux
To access an exported NFS file system on a client system, you have to mount that file system on a mount point. The mount point is nothing more than a local directory. Suppose that you want to access the /home directory exported from the server named LNBP200 at the local directory /mnt/lnbp200 on the client system. To do so, follow these steps:
- Log in as root, and create the directory with this command:
  mkdir /mnt/lnbp200
- Type the following command to mount the directory from the remote system (LNBP200) on the local directory /mnt/lnbp200:
  mount lnbp200:/home /mnt/lnbp200
After you complete these steps, you can view and access the exported files from the local directory /mnt/lnbp200.

To confirm that the NFS file system is indeed mounted, log in as root on the client system, and type mount in a terminal window. You see a line similar to the following about the NFS file system:
lnbp200:/home on /mnt/lnbp200 type nfs (rw,addr=192.168.0.4)
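If you want the client to mount this directory automatically every time it boots, you can add a line for it to the client’s /etc/fstab file; a minimal sketch, using the same server and mount point as above, looks like this:

lnbp200:/home /mnt/lnbp200 nfs defaults 0 0

With that entry in place, the mount happens at boot, and you can also mount it by hand at any time by typing mount /mnt/lnbp200.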
NFS supports two types of mount operations: hard and soft. By default, a mount is hard, which means that if the NFS server doesn’t respond, the client keeps retrying indefinitely until the server responds. You can soft-mount an NFS volume by adding the -o soft option to the mount command. For a soft mount, the client gives up and returns an error if the NFS server fails to respond after a limited number of retries.
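As a sketch, a soft mount of the same export might look like the following; the timeo and retrans values are arbitrary choices for this example and control how long (in tenths of a second) and how many times the client retries a request before reporting an error:

mount -o soft,timeo=50,retrans=3 lnbp200:/home /mnt/lnbp200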