This guide provides detailed steps to resolve issues where Longhorn fails to mount Persistent Volume Claims (PVCs) to pods in a Kubernetes cluster. The error typically results from a conflict with the multipathd service, which can incorrectly identify Longhorn volumes as being in use, preventing the filesystem from being created.

Longhorn, a distributed block storage system for Kubernetes, can fail to mount Persistent Volume Claims (PVCs) to pods when the multipathd service, which manages multiple paths to the same storage device, mistakenly claims the block device backing a Longhorn volume. Because the device then appears to be in use, the filesystem cannot be created and the mount operation fails.

Problem

If you encounter the following error in your Kubernetes environment, Longhorn has failed to mount a PVC to a pod.

Error Message

Warning  FailedMount             12s (x6 over 28s)  kubelet                  MountVolume.MountDevice failed for volume "pvc-87285c92-26c4-40bd-842d-7f608d9db2d8" : rpc error: code = Internal desc = format of disk "/dev/longhorn/pvc-87285c92-26c4-40bd-842d-7f608d9db2d8" failed: type:("ext4") target:("/var/lib/kubelet/plugins/kubernetes.io/csi/driver.longhorn.io/1e70ad7ff7c1222b1d656429fcc03679fdfa8ed3d9ae0739e656b2e161bfc08d/globalmount") options:("defaults") errcode:(exit status 1) output:(mke2fs 1.46.4 (18-Aug-2021)
/dev/longhorn/pvc-87285c92-26c4-40bd-842d-7f608d9db2d8 is apparently in use by the system; will not make a filesystem here!
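
Before applying the fix, you can confirm that multipathd has claimed the block device named in the error. The commands below are a diagnostic sketch to run on the node where the pod is scheduled; they assume the multipath-tools package is installed, and the exact device names will differ in your environment.

  # Check whether multipathd is running on the node
  systemctl status multipathd.service

  # List the device-mapper maps multipathd has created; a map on top of the
  # SCSI device backing the Longhorn volume indicates the conflict
  multipath -ll

  # Show the block device tree; a dm-* device stacked on the sd* device that
  # backs the Longhorn volume means the device is held by device-mapper
  lsblk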

Solution

  1. Edit the /etc/multipath.conf file using the following command:
    vi /etc/multipath.conf
  2. Add the required configuration.
    • Add the following blacklist entry to /etc/multipath.conf on all nodes in the cluster (see the first sketch after this list for one way to apply it to every node):
      blacklist {
          devnode "^sd[a-z0-9]+"
      }
    • After adding the entry, the file should look like this:
      defaults {
          user_friendly_names yes
      }
      blacklist {
          devnode "^sd[a-z0-9]+"
      }
  3. After updating the configuration, restart multipathd on all nodes and confirm that the blacklist is active (see the verification sketch after this list):
    systemctl restart multipathd.service
  4. Delete the affected pods so that Kubernetes recreates them and retries the mount with the corrected configuration (see the final sketch after this list to verify the result):
    kubectl delete pod nextgen-gw-0 nextgen-gw-redis-master-0
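
The blacklist entry from step 2 must be present on every node. The loop below is a hypothetical sketch of one way to script that: the node names node1, node2, and node3 are placeholders, it assumes passwordless SSH as root, and it blindly appends the block, so first check that /etc/multipath.conf does not already contain a blacklist section.

  # Hypothetical node names; replace with the hostnames of your cluster nodes.
  for node in node1 node2 node3; do
      ssh root@"$node" "printf 'blacklist {\n    devnode \"^sd[a-z0-9]+\"\n}\n' >> /etc/multipath.conf"
  done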
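
After restarting multipathd on each node (step 3), you can confirm that the service came back up and that the blacklist is part of the configuration it is actually using; the devnode pattern added in step 2 should appear in its blacklist section.

  # Confirm the service restarted cleanly
  systemctl status multipathd.service

  # Print the effective multipath configuration and look for the blacklist
  # entry added in step 2
  multipathd show config | grep -A 2 'blacklist {'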
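
After deleting the pods in step 4, their controllers recreate them and the mount is retried. The commands below sketch how to confirm the fix; adjust the pod name, and add a namespace flag if the pods do not run in the default namespace.

  # Watch the pods come back up
  kubectl get pods -w

  # Inspect the recreated pod's events; a successful mount shows no new
  # FailedMount warnings for the Longhorn PVC
  kubectl describe pod nextgen-gw-0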