Persistent Volumes

Host Mounted Volume

apiVersion: v1
kind: Pod
metadata:
  name: random-number-generator
spec:
  containers:
  - image: alpine
    name: alpine
    command: ["/bin/sh", "-c"]
    args: ["shuf -i 0-100 -n 1 >> /opt/number.out;"]
    volumeMounts:
    - mountPath: /opt       # mounted at /opt in the container
      name: data-volume
  volumes:
  - name: data-volume
    hostPath:
      path: /data           # /data directory on the host
      type: Directory

The pod defined in the YAML above mounts the host directory /data for persistence. Note: this is not advised on a multi-node cluster, because the pod can be scheduled on any node, and each node's directory structure and file contents would differ.
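If a hostPath volume must be used anyway, the pod can be pinned to a single node so it always sees the same directory. A minimal sketch, assuming a hypothetical node named worker-1:

apiVersion: v1
kind: Pod
metadata:
  name: pinned-generator      # hypothetical pod name
spec:
  nodeName: worker-1          # hypothetical node; pins the pod to this node only
  containers:
  - image: alpine
    name: alpine
    command: ["/bin/sh", "-c"]
    args: ["shuf -i 0-100 -n 1 >> /opt/number.out;"]
    volumeMounts:
    - mountPath: /opt
      name: data-volume
  volumes:
  - name: data-volume
    hostPath:
      path: /data
      type: DirectoryOrCreate # creates /data on the node if it is missing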

NFS Mounted Volume

thomas-pk@tom-raspberry:~ $ cat /etc/exports
# /etc/exports: the access control list for filesystems which may be exported
#               to NFS clients.  See exports(5).
#
/store/drv-movies  192.168.1.0/24(rw,async,no_subtree_check)

Shown above is the /etc/exports file on the host tom-raspberry.bigtom.local
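After editing /etc/exports, the export table has to be reloaded before clients can mount the share; showmount can then confirm it is visible. These are standard NFS server commands, assuming the nfs-kernel-server package is installed:

thomas-pk@tom-raspberry:~ $ sudo exportfs -ra        # re-export everything in /etc/exports
thomas-pk@tom-raspberry:~ $ showmount -e localhost   # list the shares the server exports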

apiVersion: v1
kind: Pod
metadata:
  name: nfs-poc
spec:
  volumes:
  - name: app-data
    nfs:
      server: tom-raspberry
      path: "/store/drv-movies"
  containers:
  - name: nfs-poc
    image: thomaspk/tom-blog
    securityContext:
      privileged: true
    volumeMounts:
    - mountPath: /data
      name: app-data
  • The above pod definition shows a working NFS mount in a pod. The container image (thomaspk/tom-blog) is based on nginx:stable (Debian).

  • Note that the ports shown above do not have to be opened unless this pod is itself accessed as an NFS service by other pods.

  • Also note that the container is able to perform DNS resolution for the host tom-raspberry. The mount itself can be checked from inside the pod, as shown below.
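To confirm the mount from inside the pod, a quick check with kubectl exec works (assuming the pod name nfs-poc from the manifest above):

$ kubectl exec nfs-poc -- df -h /data        # the NFS share should be listed here
$ kubectl exec nfs-poc -- touch /data/poc    # confirms the rw option on the export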

Persistent Volumes

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-raspberry
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 500Mi
  nfs:
    server: tom-raspberry
    path: "/store/drv-movies"

A second PV definition, spelling out the reclaim policy, storage class, and mount options:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 2Gi
  persistentVolumeReclaimPolicy: Retain   # optional; the default is Retain
  accessModes:
  - ReadOnlyMany
  storageClassName: regular
  nfs:
    server: 192.168.1.6
    path: /store/drv-sync/k8-volumes/bravo
  mountOptions:
  - hard
  - nfsvers=4.1
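Once applied, the PVs stay in the Available phase until a claim binds to them. A sketch of the workflow, with hypothetical file names:

$ kubectl apply -f pv-raspberry.yaml -f my-pv.yaml   # hypothetical file names
$ kubectl get pv                                     # STATUS reads Available until a PVC binds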

Persistent Volume Claim

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-tom
spec:
  resources:
    requests:
      storage: 500Mi
  accessModes:
  - ReadWriteOnce
  storageClassName: ""      # empty string binds only to PVs that have no storage class

The YAML above creates a PVC, which is a separate object from both the PV and the Pod; Kubernetes binds the claim to a PV whose capacity, access modes, and storage class satisfy the request.
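A pod then refers to the claim by name rather than to the underlying PV. A minimal sketch using the pvc-tom claim from above (pod and volume names are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: pvc-consumer
spec:
  containers:
  - name: app
    image: alpine
    command: ["/bin/sh", "-c", "sleep 3600"]
    volumeMounts:
    - mountPath: /data
      name: app-data
  volumes:
  - name: app-data
    persistentVolumeClaim:
      claimName: pvc-tom      # Kubernetes resolves this to the bound PV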