K8s Tips: Resize Persistent Volumes

Your application keeps growing and its PV is running out of space?

Don’t worry: with Kubernetes >= 1.11 this is not an issue. Previously, on Kubernetes < 1.11, you had to go through some painful operations that required administrative access.

Kubernetes >= 1.11

Since Kubernetes 1.8 this feature has been available in Alpha, but it required you to enable the ExpandPersistentVolumes feature gate and the PersistentVolumeClaimResize admission controller.
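For reference, on those Alpha releases you had to pass something along these lines to the kube-apiserver (the exact admission flag name changed across versions, so treat this as a sketch):

--feature-gates=ExpandPersistentVolumes=true
--admission-control=<EXISTING_PLUGINS>,PersistentVolumeClaimResize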

It has now been promoted to Beta and is available on any k8s cluster >= 1.11.

This functionality is available only on supported PV plugins: AWS-EBS, GCE-PD, Azure Disk, Azure File, Glusterfs, Cinder, Portworx and Ceph RBD.

You just have to add the property allowVolumeExpansion: true to the StorageClass:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
parameters:
  type: pd-standard
provisioner: kubernetes.io/gce-pd
allowVolumeExpansion: true
reclaimPolicy: Delete

From now on, any PVC created from this StorageClass can be edited to request more space, and Kubernetes will take care of the storage expansion.
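For example, assuming a hypothetical PVC named data-pvc, a patch bumping the requested size is enough to trigger the expansion:

kubectl patch pvc data-pvc -p '{"spec":{"resources":{"requests":{"storage":"30Gi"}}}}'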

FileSystem Expansion

If the PV is backed by block storage, the file system will also need to be expanded.

You can check whether a FileSystemResizePending condition is present by inspecting the PVC:

kubectl get pvc <PVC_NAME> -o yaml

status:
  capacity:
    storage: 30G
  conditions:
  - lastProbeTime: null
    message: Waiting for user to (re-)start a pod to finish file system resize of
      volume on node.
    status: "True"
    type: FileSystemResizePending

In this case, Kubernetes is simply waiting for the Pod to be restarted before finishing the file system expansion.
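If the Pod is managed by a controller (e.g. a Deployment), deleting it is enough; the controller will recreate it and the resize will complete at startup:

kubectl delete pod <POD_NAME>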

Errors

Any errors during the PV expansion should appear as events on the Pod.
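You can inspect them with:

kubectl describe pod <POD_NAME>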

Automatic FileSystem Expansion

The automatic FileSystem Expansion is currently in Alpha under the ExpandInUsePersistentVolumes feature gate and is supported only by these plugins: GCE-PD, AWS-EBS, Cinder, and Ceph RBD.

In this case, the FileSystem expansion is executed automatically, without requiring a Pod restart.
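If you want to try it, the feature gate has to be enabled on the relevant components (at least kube-apiserver and kubelet), e.g.:

--feature-gates=ExpandInUsePersistentVolumes=true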

Kubernetes < 1.11

If you are still on Kubernetes < 1.11, you have to manually expand the storage and then the file system. As an example, I’ll use AWS Elastic Block Storage.

You’ll need root access to the host nodes, with disk utilities installed (e2fsprogs/e2fsprogs-extra).

1) Storage Expansion

  • Find the volume on AWS, using these tags to identify the correct one, and take note of the PV name (see the CLI sketch after this list):
    • kubernetes.io/created-for/pvc/name
    • kubernetes.io/created-for/pvc/namespace
    • kubernetes.io/created-for/pv/name
  • (Extra) This is the right moment to take a snapshot as a backup
  • Using the AWS console/API, increase the volume size to the desired capacity
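Here is a sketch of the same steps with the AWS CLI; <VOLUME_ID> and <NEW_SIZE_GIB> are placeholders to fill in:

# find the volume via the kubernetes.io/created-for/* tags
aws ec2 describe-volumes --filters "Name=tag:kubernetes.io/created-for/pvc/name,Values=<PVC_NAME>"

# (extra) take a snapshot as a backup
aws ec2 create-snapshot --volume-id <VOLUME_ID> --description "pre-resize backup"

# grow the volume to the desired capacity (size is in GiB)
aws ec2 modify-volume --volume-id <VOLUME_ID> --size <NEW_SIZE_GIB>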

2) FileSystem Expansion

  • Identify the node mounting the volume
    • kubectl get node -o wide | grep $(kubectl -n <NAMESPACE> describe $(kubectl -n <NAMESPACE> get po -o name | grep <POD_NAME>) | grep Node: | awk -F"[\t/]" '{print $2}')
  • Log in to the node with SSH
  • Identify the name of the block device
    • BD_NAME=$(lsblk | grep "<PV_NAME>" | awk '{print $1}')
  • Resize the file system on the block device
    • resize2fs "/dev/$BD_NAME"
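You can then verify from the node that the file system picked up the new size:

df -h | grep "$BD_NAME"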

3) Kubernetes Expansion

Resize PV

kubectl patch pv "<PV_NAME>" -p '{"spec":{"capacity":{"storage":"<NEW_SIZE>"}}}'

Resize PVC (Optional)

If you are a perfectionist like me and want your PVC to reflect the PV size exactly, you’ll have to take these extra steps:

  • Set persistentVolumeReclaimPolicy: Retain on the PV, so it survives the PVC deletion
  • Delete the Pod and the PVC
  • Manually recreate the PVC, pointing it to the existing PV by name (see the sketch below)
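A minimal sketch of the recreated PVC, assuming the original StorageClass was standard; spec.volumeName pins the claim to the existing PV:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: <PVC_NAME>
  namespace: <NAMESPACE>
spec:
  storageClassName: standard
  volumeName: <PV_NAME>
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: <NEW_SIZE>

Note that a released PV still holds a claimRef to the old PVC; you may have to clear it (kubectl edit pv <PV_NAME>) before the new claim can bind.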