Kubernetes: Simplifying NFS Storage Setup

Dynamic NFS storage provisioning refers to the automatic creation of persistent storage volumes using an NFS (Network File System) server when a Kubernetes PersistentVolumeClaim (PVC) is made. Instead of pre-creating volumes, dynamic provisioning uses a StorageClass to allocate storage on demand.

Importance:

  • Automation: Eliminates manual intervention, speeding up deployment and scaling.
  • Resource Efficiency: Allocates only what is needed, reducing unused capacity.
  • Scalability: Supports large-scale environments with dynamic workloads.
  • Flexibility: Easily integrates with existing NFS infrastructure.
  • Developer Productivity: Developers can request storage without relying on storage admins.

While deploying an application on my lab Kubernetes cluster, I initially faced challenges with manual volume provisioning. To simplify the process, I explored dynamic NFS storage provisioning, which seemed like a more efficient and scalable solution. Here’s how I approached it:

Assumptions:

  • You have an Ubuntu machine (physical or VM) accessible over the network from your Kubernetes cluster nodes. This will be our NFS Server Node.
  • You have sudo access on the NFS Server Node.
  • You have kubectl access to your Kubernetes cluster.
  • Your Kubernetes worker nodes can reach the NFS Server Node via IP address.
  • You know the IP address of the NFS Server Node (NFS_SERVER_IP).
  • You know the IP address range or specific IPs of your Kubernetes worker nodes (KUBERNETES_NODE_SUBNET_OR_IP).

Part 1: Setup NFS Server on Ubuntu Node

a. Install NFS Server Package:

Log in to your designated Ubuntu NFS Server Node via SSH.

sudo apt update
sudo apt install nfs-kernel-server -y

b. Create the Export Directory:

This is the directory on the NFS server that will be shared.

# Choose a path for your shared data. /srv/nfs/kubedata is a common choice.
sudo mkdir -p /srv/nfs/kubedata

c. Set Directory Permissions:

By default, NFS maps the client’s root user to nobody:nogroup (root squashing), so the directory must be writable by the processes inside your Kubernetes pods, which may run as non-root users. For simplicity we make it world-writable here, but this is not recommended for production without understanding the security implications.

sudo chown nobody:nogroup /srv/nfs/kubedata
sudo chmod 777 /srv/nfs/kubedata

d. Configure NFS Exports:

Edit the /etc/exports file to define which directories are shared and who can access them.

sudo vi /etc/exports

Add a line like the following, replacing KUBERNETES_NODE_SUBNET_OR_IP with the actual subnet (e.g., 192.168.1.0/24) or specific IP addresses of your Kubernetes worker nodes.

/srv/nfs/kubedata   KUBERNETES_NODE_SUBNET_OR_IP(rw,sync,no_subtree_check)

  • /srv/nfs/kubedata: The directory we created.
  • KUBERNETES_NODE_SUBNET_OR_IP: Specifies which clients can connect. Use your K8s node subnet (e.g., 192.168.1.0/24) or specific node IPs separated by spaces. Using * allows anyone, which is insecure.
  • rw: Allows read and write access.
  • sync: Replies to requests only after changes are committed to stable storage (safer).
  • no_subtree_check: Disables subtree checking, which improves reliability; the check only adds a minor security benefit when you export a subdirectory of a larger filesystem, so disabling it here is low risk.
  • no_root_squash (Optional, Security Risk): If you add this option (rw,sync,no_subtree_check,no_root_squash), root users on the client (Kubernetes node) will have root privileges on the NFS share. This is often needed if your container runs as root and needs to manage permissions, but it is a security risk. A filled-in example follows this list.
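
For reference, a filled-in entry for a lab where the worker nodes sit on 192.168.1.0/24 (an example subnet; substitute your own) and root squashing is disabled would look like:

/srv/nfs/kubedata   192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)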

Save and close the file.

e. Export the Shared Directory:

Apply the changes made to /etc/exports.

sudo exportfs -ra
# Verify the export list
sudo exportfs -v

f. Start and Enable NFS Service:

Ensure the NFS server starts now and automatically on boot.

sudo systemctl restart nfs-kernel-server
sudo systemctl enable nfs-kernel-server
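
To confirm the service came up cleanly, check its status:

sudo systemctl status nfs-kernel-server
# Look for "active" in the output (often "active (exited)", since NFS itself runs as kernel threads)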

g. Configure Firewall (If necessary):

If you are using ufw or another firewall on the NFS Server Node, you need to allow access from your Kubernetes nodes.

# Check if ufw is active
sudo ufw status

# If active, allow access from your K8s node subnet/IPs
sudo ufw allow from KUBERNETES_NODE_SUBNET_OR_IP to any port nfs
# NFS also relies on rpcbind/portmapper
sudo ufw allow from KUBERNETES_NODE_SUBNET_OR_IP to any port 111

# Reload ufw if changes were made
# sudo ufw reload
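
To confirm the ports are actually reachable from the cluster side, you can query the server’s portmapper from a worker node with rpcinfo (a quick sanity check; the rpcbind package provides the tool if it is missing):

# From a Kubernetes worker node
rpcinfo -p NFS_SERVER_IP
# Should list nfs (port 2049) and mountd among the registered services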

h. Verification (Optional but Recommended):

From one of your Kubernetes worker nodes (or another machine on the same network), test if you can see and mount the share.

# On a K8s worker node or test client:
# Install NFS client utilities (might already be installed)
sudo apt update && sudo apt install nfs-common -y

# Check if the server exports the directory
showmount -e NFS_SERVER_IP

# Try to mount it temporarily
sudo mkdir /mnt/nfs_test
sudo mount -t nfs NFS_SERVER_IP:/srv/nfs/kubedata /mnt/nfs_test

# Check if mounted
df -h | grep nfs_test

# Try creating a file
sudo touch /mnt/nfs_test/test_from_client.txt
ls /mnt/nfs_test

# Cleanup
sudo umount /mnt/nfs_test
sudo rmdir /mnt/nfs_test

If this works, the NFS server side is likely configured correctly.

Part 2: Configure Kubernetes to Use NFS Volume

a. Install NFS Client on all Worker Nodes:

Crucially, the nfs-common package (or its equivalent for your worker node OS) must be installed on every Kubernetes worker node that might run pods using the NFS volume.

Connect to each worker node and run:

sudo apt update
sudo apt install nfs-common -y
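
If you have more than a couple of workers, a small shell loop over SSH saves repetition (a sketch; the node names and SSH user below are placeholders for illustration):

# Hypothetical node list and user -- adjust to your environment
# Assumes key-based SSH and passwordless sudo on the nodes
for node in worker-1 worker-2 worker-3; do
  ssh ubuntu@"$node" "sudo apt update && sudo apt install -y nfs-common"
done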

Part 3: Install and Configure NFS Client Provisioner

Deploy the NFS Subdir External Provisioner in your Kubernetes cluster to automate the creation and management of NFS-backed Persistent Volumes (PVs) in response to Persistent Volume Claims (PVCs).

a. Add the Helm Repository:

helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm repo update

b. Install the NFS Client Provisioner using Helm:

You’ll install the provisioner using helm install. You need to provide the NFS server’s IP and the base path you exported.

  • Choose a release name (e.g., nfs-provisioner).
  • Choose a namespace for the provisioner (e.g., nfs-provisioner). Creating a dedicated namespace is good practice.
  • Provide your NFS server details via --set flags.

# Replace with your actual NFS server IP and exported path
NFS_SERVER_IP="YOUR_NFS_SERVER_IP"
NFS_PATH="/srv/nfs/kubedata" # Or your chosen base export path

# Choose a name for the StorageClass this provisioner will manage
STORAGE_CLASS_NAME="nfs-client"

# --- Installation Command ---
helm install nfs-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --namespace nfs-provisioner --create-namespace \
    --set nfs.server=${NFS_SERVER_IP} \
    --set nfs.path=${NFS_PATH} \
    --set storageClass.name=${STORAGE_CLASS_NAME} \
    --set storageClass.allowVolumeExpansion=true \
    --set storageClass.reclaimPolicy=Delete \
    --set storageClass.archiveOnDelete=false \
    --set storageClass.defaultClass=true

Explanation of --set flags:

  • nfs.server: (Required) IP address of your NFS server.
  • nfs.path: (Required) The base exported directory on your NFS server. The provisioner will create subdirectories inside this path (e.g., /srv/nfs/kubedata/pvc-xxxxx).
  • storageClass.name: The name of the StorageClass object that will be created. PVCs will need to specify this name to use the provisioner.
  • storageClass.allowVolumeExpansion: Allows PVC resize requests. The NFS share does not enforce per-volume size limits, so expansion mainly updates the requested size; the provisioner does not resize any underlying filesystem.
  • storageClass.reclaimPolicy:
    • Delete: When the PVC is deleted, the PV and the corresponding subdirectory on the NFS server are deleted. Use with caution!
    • Retain: When the PVC is deleted, the PV and the NFS subdirectory remain. You need to clean them up manually. Safer, but requires manual management.
  • storageClass.archiveOnDelete: If reclaimPolicy is Delete, setting this to true will rename the directory on the NFS server (e.g., archived-pvc-xxxxx) instead of deleting it. Defaults to false.
  • storageClass.defaultClass: If set to true, PVCs that don’t specify any storageClassName will automatically use this one. Be careful if you have other storage classes.
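
Before relying on storageClass.defaultClass=true, it is worth checking whether your cluster already has a default StorageClass:

kubectl get storageclass
# An existing default class is marked with "(default)" next to its name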

c. Verify the Provisioner Deployment:

Check if the provisioner pod is running in the namespace you specified.

kubectl get pods -n nfs-provisioner
# Look for a pod named like nfs-provisioner-nfs-subdir-external-provisioner-... in the Running state

Check if the StorageClass was created:

kubectl get storageclass ${STORAGE_CLASS_NAME}
# The StorageClass should be listed, with the nfs-subdir-external-provisioner shown in the PROVISIONER column
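
To see exactly what the chart created (the provisioner string, reclaim policy, and parameters such as archiveOnDelete), dump the full object:

kubectl get storageclass ${STORAGE_CLASS_NAME} -o yaml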

Part 4: Test Dynamic Provisioning

a. Create a PersistentVolumeClaim (PVC):

Now, create a PVC that requests storage using the StorageClass name you defined (nfs-client in our example). Create test-dynamic-pvc.yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-nfs-dynamic-pvc
  # namespace: default # Specify namespace if needed
spec:
  storageClassName: nfs-client # Must match the StorageClass name
  accessModes:
    - ReadWriteOnce # RWO is typically fine, RWX is also supported by NFS
  resources:
    requests:
      storage: 1Gi # Request some storage

Apply it:

kubectl apply -f test-dynamic-pvc.yaml

b. Check PVC and PV:

Watch the PVC status. It should quickly go from Pending to Bound.

kubectl get pvc test-nfs-dynamic-pvc
# STATUS should become Bound

Check that a corresponding PV was automatically created. The PV name will be dynamically generated (e.g., pvc-xxxxxxxx-xxxx-…).

kubectl get pv
# Look for a PV bound to default/test-nfs-dynamic-pvc with the nfs-client StorageClass
# Note its STATUS is Bound and RECLAIM POLICY is Delete (or Retain)
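
If the PVC stays Pending, the events usually point at the cause (provisioner pod not running, a mismatched StorageClass name, or the provisioner failing to mount the NFS export):

kubectl describe pvc test-nfs-dynamic-pvc
# The Events section at the bottom shows provisioning progress or errors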

c. Inspect the NFS Server:

Check the base directory on your NFS server. You should see a new subdirectory created for this PVC. The name usually includes the namespace and PVC name.

# On the NFS Server Node
ls -l /srv/nfs/kubedata/
# You should see a directory like 'default-test-nfs-dynamic-pvc-pvc-xxxx...'

d. Use the Dynamically Provisioned PVC in a Pod:

Create a pod that uses the test-nfs-dynamic-pvc. Create nginx-dynamic-nfs-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: nginx-dynamic-nfs-test
  # namespace: default # Use the same namespace as the PVC
spec:
  containers:
  - name: nginx
    image: nginx:alpine
    ports:
    - containerPort: 80
    volumeMounts:
    - name: dynamic-nfs-storage
      mountPath: /usr/share/nginx/html
  volumes:
  - name: dynamic-nfs-storage
    persistentVolumeClaim:
      claimName: test-nfs-dynamic-pvc # Use the PVC name created above

Apply it:

kubectl apply -f nginx-dynamic-nfs-pod.yaml

Part 5: Verify Data Persistence

a. Check the pod:

kubectl get pod nginx-dynamic-nfs-test
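
If the pod sits in ContainerCreating, the volume mount is usually the culprit (most often a missing nfs-common package on that node):

kubectl describe pod nginx-dynamic-nfs-test
# Mount failures show up under Events at the bottom of the output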

b. Write data from the pod:

kubectl exec -it nginx-dynamic-nfs-test -- /bin/sh
# Inside the pod:
echo "Hello from Dynamic NFS! $(date)" > /usr/share/nginx/html/index.html
cat /usr/share/nginx/html/index.html
exit

c. Verify the data exists on the NFS Server Node inside the specific subdirectory created for the PVC:

# On the NFS Server Node
# Find the exact directory name first
ls /srv/nfs/kubedata/
# Then cat the file inside that directory, e.g.:
cat /srv/nfs/kubedata/default-test-nfs-dynamic-pvc-pvc-xxxx.../index.html
# Should show "Hello from Dynamic NFS! ..."
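
As an extra end-to-end check, you can confirm nginx is serving the file straight off the NFS-backed volume (a quick sketch using kubectl port-forward):

kubectl port-forward pod/nginx-dynamic-nfs-test 8080:80
# In another terminal:
curl http://localhost:8080/
# Should return "Hello from Dynamic NFS! ..."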

You now have dynamic NFS volume provisioning set up in your Kubernetes cluster! Any PVC requesting the nfs-client StorageClass will automatically get a dedicated directory on your NFS server and a corresponding PV.
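
To watch the reclaim policy in action, you can tear down the test resources; with reclaimPolicy=Delete and archiveOnDelete=false, the PV and its subdirectory on the NFS server should be removed automatically:

kubectl delete pod nginx-dynamic-nfs-test
kubectl delete pvc test-nfs-dynamic-pvc
kubectl get pv
# On the NFS server: ls /srv/nfs/kubedata/ -- the subdirectory should be gone
# (or renamed to archived-... if you set archiveOnDelete=true)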
