3. Using ROBIN Storage in Kubernetes

The Container Storage Interface (CSI) is a standard for exposing storage to workloads on Kubernetes. To enable automatic creation and deletion of volumes for CSI storage, a Kubernetes resource called a StorageClass must be created and registered with the Kubernetes cluster. Associated with the StorageClass is a CSI provisioner plugin that does the heavy lifting at the disk and storage management layers to provision storage volumes based on the attributes defined in the StorageClass. Kubernetes CSI was introduced in the Kubernetes v1.9 release, promoted to beta in the Kubernetes v1.10 release as CSI v0.3, and reached GA in Kubernetes v1.13 as CSI v1.0.

Kubernetes CSI broke compatibility between CSI v0.3 and CSI v1.0, so two different StorageClasses must be implemented, one for each version of the spec. To facilitate this, ROBIN ships with two StorageClasses:

  1. robin-0-3: The StorageClass that is compatible with Kubernetes versions less than v1.13

  2. robin: The StorageClass that is compatible with Kubernetes versions v1.13 and above

Both StorageClasses accept the same parameters, described below:

Definition of ROBIN CSI Storage Class:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
    name: robin
provisioner: robin
reclaimPolicy: Delete
parameters:
    # media: SSD|HDD             // default is media of first available drive in the cluster
    # blocksize: "512"|"4096"    // default is 4096
    # fstype: ext4|xfs           // default is ext4
    # replication: "2"|"3"       // default is no replication
    # faultdomain: disk|host     // default is disk
    # compression: LZ4           // default is no inline compression
    # encryption: CHACHA20|AES256|AES128 // default is no encryption
    # snapshot_space_limit: "50" // default 40% of Volume size.

media

The media type ROBIN should use to allocate PersistentVolumes.
Two values are supported: HDD for spinning disks and
SSD for solid-state devices. ROBIN automatically
discovers the media type of the underlying local disks. If not
provided, ROBIN chooses the type of the first discovered media.
For example, a GCE Standard Persistent Disk is treated as HDD media
and a GCE SSD Persistent Disk is treated as SSD media.

blocksize

By default ROBIN uses 4096 as the block size of the underlying
logical block device it creates. You can override it by setting it
to 512 for workloads that require it. The value is exposed
via cat /sys/block/<DEVNAME>/queue/physical_block_size.
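
For example, once a volume is attached to a node you can confirm the block size of its logical block device through the sysfs path above (the device name sdb below is purely illustrative; substitute the device that actually backs your volume):

$ cat /sys/block/sdb/queue/physical_block_size
4096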

fstype

By default the logical block device created by ROBIN is formatted
with the ext4 filesystem. It can also be changed to xfs.

replication

By default ROBIN does not enable replication for the logical block device.
It can be set to 2 or 3 to set up 2-way or 3-way replication.
ROBIN implements a strictly consistent data replication guarantee, which
means that a write IO is NOT acknowledged back to the client until it
has been made durable on all replicas.

faultdomain

The fault domain to be used when “replication” is turned on. Setting the
right fault domain maximizes data safety. Setting it to disk ensures
that ROBIN picks two different disks to keep the replica
copies. ROBIN also tries to pick disks on different nodes to ensure
higher availability in the event of node failures. However, on a very busy
cluster with no spare disks on different nodes, setting the
fault domain to disk can result in disks from the same node being
picked to store the replicated copies of the volume. To prevent
this, and to ensure that your application can tolerate an entire node
going down, set the fault domain to host. Doing so
guarantees that ROBIN never picks disks from the same node when storing
replicated data of a volume. If disks across different nodes are not
available, volume creation fails rather than degrading
to a disk-level fault domain.

compression

By default inline data compression is disabled. It can be enabled by
setting this to LZ4, which turns on inline block-level data compression
using the LZ4 compression algorithm. Support for other compression
algorithms is on the roadmap.

encryption

By default data-at-rest encryption is not enabled. To enable it, set it
to CHACHA20, AES128 or AES256; the chosen algorithm is
used to perform block-level encryption of data for that
PersistentVolume.

snapshot_space_limit

The amount of space set aside for snapshots of
this volume, expressed as a percentage of the volume size. For example,
if the volume size is 100GB, a value of “30” reserves 30GB of space for
snapshots. New snapshot creation fails once this limit is reached.
The default is 40% of the volume size.

Note

Make sure that the values for blocksize and replication are passed as quoted strings to adhere to the CSI spec. That is, blocksize should be passed as “4096” (quoted) and NOT as 4096 (unquoted).
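
For example, a custom StorageClass that provisions 3-way replicated, LZ4-compressed volumes on SSD media with a host-level fault domain could be defined as follows (the name robin-repl-ssd is only illustrative; any name may be used):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
    name: robin-repl-ssd
provisioner: robin
reclaimPolicy: Delete
parameters:
    media: SSD
    replication: "3"
    faultdomain: host
    compression: LZ4
    blocksize: "4096"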

3.1. Using ROBIN StorageClass to Provision Storage

3.1.1. Basic Use Case

STEP 1: Create a PersistentVolumeClaim (PVC) using ROBIN StorageClass:

$ cat mypvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
   name: mypvc
   annotations:
      volume.beta.kubernetes.io/storage-class: robin

spec:
   accessModes:
       - ReadWriteOnce
   resources:
       requests:
          storage: 10Gi


$ kubectl create -f mypvc.yaml
persistentvolumeclaim/mypvc created

Note

Notice that under metadata/annotations we have specified the storage class as volume.beta.kubernetes.io/storage-class: robin. This results in the ROBIN StorageClass being picked up. For Kubernetes versions older than v1.13, use volume.beta.kubernetes.io/storage-class: robin-0-3 instead.

STEP 2: Confirm that the PersistentVolumeClaim and the corresponding PersistentVolume are created:

$ kubectl get pvc
NAME       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mypvc      Bound    pvc-1b37154c-4764-11e9-bac1-00155d61160d   10Gi       RWO            robin          2s


$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM              STORAGECLASS   REASON   AGE
pvc-b91ee150-4790-11e9-bac1-00155d61160d   10Gi       RWO            Delete           Bound    default/mypvc      robin                   24s

STEP 3: Attach this PersistentVolumeClaim to a simple Pod:

$ cat mypod.yaml
kind: Pod
apiVersion: v1
metadata:
   name: myweb
spec:
   volumes:
      - name: htdocs
        persistentVolumeClaim:
          claimName: mypvc
   containers:
      - name: myweb0
        image: nginx
        ports:
           - containerPort: 80
             name: "http-server"
        volumeMounts:
           - mountPath: "/usr/share/nginx/html"
             name: htdocs

$ kubectl create -f mypod.yaml
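
Once the Pod reaches the Running state, the ROBIN-provisioned volume is mounted at /usr/share/nginx/html inside the container. A quick way to verify this (output omitted here) is:

$ kubectl get pod myweb
$ kubectl exec myweb -c myweb0 -- df -h /usr/share/nginx/html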

3.1.2. Customizing Volume Provisioning

Let’s say that we’d like to create a PVC which meets the following requirements:

  • Data is replicated 3-ways

  • The Pod should continue to have access to data even if 2 of the 3 disks or the nodes on which these disks are hosted go down

  • The data must be compressed

  • The data should only reside on SSD media

This is accomplished by specifying these requirements under the metadata/annotations section of the PVC spec as described below. Notice that each annotation is prefixed with robin.io/. Annotations accept exactly the same parameters as the ROBIN StorageClass and override the corresponding parameters specified in the StorageClass.

$ cat newpvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
   name: protected-compressed-pvc
   annotations:
      volume.beta.kubernetes.io/storage-class: robin
      robin.io/replication: "3"
      robin.io/faultdomain: host
      robin.io/compression: LZ4
      robin.io/media: SSD

spec:
   accessModes:
       - ReadWriteOnce
   resources:
       requests:
          storage: 1Gi

$ kubectl create -f newpvc.yaml
persistentvolumeclaim/protected-compressed-pvc created

Note

Note that the number 3 is quoted as "3" when specifying the robin.io/replication: annotation. This is per the Kubernetes spec. Not doing so would result in kubectl throwing an error.

3.1.3. Using ROBIN Storage in a StatefulSet

In a StatefulSet a PVC is not referenced directly as in the above examples; instead, a volumeClaimTemplate is used to describe the type of PVC that should be created as part of the StatefulSet resource. This is accomplished as follows:

$ cat myweb.yaml

     apiVersion: v1
     kind: Service
     metadata:
       name: nginx
       labels:
           app: nginx
     spec:
       ports:
       - port: 80
         name: web
       clusterIP: None
       selector:
         app: nginx
     ---
     apiVersion: apps/v1
     kind: StatefulSet
     metadata:
       name: web
     spec:
       serviceName: "nginx"
       replicas: 2
       selector:
         matchLabels:
           app: nginx
       template:
         metadata:
           labels:
             app: nginx
         spec:
           containers:
           - name: nginx
             image: k8s.gcr.io/nginx-slim:0.8
             ports:
             - containerPort: 80
               name: web
             volumeMounts:
             - name: www
               mountPath: /usr/share/nginx/html
       volumeClaimTemplates:
       - metadata:
           name: www
           annotations:
             volume.beta.kubernetes.io/storage-class: robin
             robin.io/replication: "2"
             robin.io/media: HDD
         spec:
           accessModes: [ "ReadWriteOnce" ]
           resources:
             requests:
               storage: 1Gi
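
Assuming the manifest above is saved as myweb.yaml, create the Service and StatefulSet, and then verify that a PersistentVolumeClaim was provisioned for each replica:

 $ kubectl create -f myweb.yaml
 service/nginx created
 statefulset.apps/web created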

 $ kubectl get statefulset
 NAME   READY   AGE
 web    2/2     12s

 $ kubectl get pvc
 NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
 www-web-0   Bound    pvc-2b97d8fc-479d-11e9-bac1-00155d61160d   1Gi        RWO            robin          8s
 www-web-1   Bound    pvc-436536e6-479d-11e9-bac1-00155d61160d   1Gi        RWO            robin          8s

 $ kubectl get pv
 NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   REASON   AGE
 pvc-2b97d8fc-479d-11e9-bac1-00155d61160d   1Gi        RWO            Delete           Bound    default/www-web-0   robin                   10s
 pvc-436536e6-479d-11e9-bac1-00155d61160d   1Gi        RWO            Delete           Bound    default/www-web-1   robin                   10s

3.1.4. Provisioning Storage for Helm Charts

Helm charts are a popular way to deploy an entire stack of Kubernetes resources in one shot. A Helm chart is installed using the helm install command. To use ROBIN for persistent storage, pass the --set persistence.storageClass=robin command line option as shown below:

$ helm install stable/mysql --set persistence.storageClass=robin

This would result in ROBIN being used as the storage provisioner for PersistentVolumeClaims created by this helm chart.
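
After the release is deployed, you can confirm that the chart's PersistentVolumeClaims were provisioned through the robin StorageClass (PVC names vary with the chart and release name):

$ kubectl get pvc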

3.2. Protecting PVCs using ROBIN’s Volume Replication

ROBIN uses storage volume-level replication to ensure that data is always available in the event of node and disk failures. When replication is set to 2, at least 2 copies of the volume are maintained on different disks; if set to 3, at least 3 copies are maintained. This ensures that the volume’s data remains available in the event of 1 or 2 disk/node failures. Replication is configured by annotating the PVC spec with robin.io/replication: "<count>" and optionally robin.io/faultdomain: disk|host as shown below:

$ cat replicated-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
   name: replicated-pvc
   annotations:
      volume.beta.kubernetes.io/storage-class: robin
      robin.io/replication: "3"
      robin.io/faultdomain: host

spec:
   accessModes:
       - ReadWriteOnce
   resources:
       requests:
          storage: 1Gi
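
As with the earlier examples, create the PVC with kubectl; ROBIN then provisions a volume with three replicas spread across different hosts:

$ kubectl create -f replicated-pvc.yaml
persistentvolumeclaim/replicated-pvc created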

Setting robin.io/faultdomain to either disk or host determines whether this PVC’s data remains available only in the event of disk failures, or also in the event of node failures.

How are faults handled?

ROBIN uses strict-consistency semantics to guarantee correctness for your mission-critical stateful applications, which means that a “write” IO is not acknowledged back to the application until it has been made durable on all the healthy replica disks. One or more replica disks for a volume can go down for short periods of time (a node going through a reboot cycle) or for longer periods of time (a node has a hardware fault and can’t be brought online until the part is replaced). ROBIN handles both cases gracefully. When a replica disk becomes unavailable during IO, ROBIN automatically evicts it from the replication group, and IOs continue to go to the remaining healthy replicas. When the faulted disk becomes available again, ROBIN automatically brings it up to the same state as the other healthy disks before adding it back into the replication group. This is handled automatically and is transparent to the application.

A disk can also suffer a more serious error, for example an IO error returned during a read or write operation. In this case ROBIN marks the disk as faulted and generates an alert for the storage admin to investigate. The storage admin can then determine the nature of the error and mark the disk as healthy, in which case ROBIN adds it back into the replication group and initiates a data resync to bring it up to the same level as the other healthy disks. If the error is serious (e.g., SMART counters report corruption), or if the node has a motherboard or IO card fault that needs to be replaced, the storage admin can permanently decommission that disk or node from the Kubernetes cluster. Doing so also automatically evicts that disk from the replication group of the PVC. The storage admin can then add a new healthy disk to the replication group so that the PVC is brought back to the same level of availability as before.

There is a practical reason why ROBIN doesn’t automatically trigger rebuilds of faulted disks. ROBIN is currently used for mission-critical workloads with multiple petabytes under management by the ROBIN storage stack. We have seen scenarios where an IO controller card failed while serving 12 disks of 10TiB each. That is 120 TiB of storage capacity under a single IO controller card. Rebuilding 120 TiB of data takes more time than replacing a faulted IO controller card with a healthy one. Moving 120 TiB of data over the network from healthy disks on other nodes also puts a significant load on the network switches and on the applications running on the nodes from which the data is pulled, resulting in noticeable performance degradation. With our experience managing storage in large-scale deployments, and with feedback from the admins managing those clusters, we have determined that it is best to inform an admin of a failure and let them decide, based on cost and time, whether they want to replace the faulty hardware or have ROBIN initiate a rebuild.

3.3. Making ROBIN the default StorageClass

To avoid typing the name of the StorageClass each time a new chart is deployed, it is highly recommended to make ROBIN the default Kubernetes StorageClass. This can be done as follows:

STEP 1: Inspect if there is already a different StorageClass marked as default:

$ kubectl get storageclass
NAME                 PROVISIONER               AGE
standard (default)   kubernetes.io/gce-pd      11d
robin                robin                     1d

STEP 2: Mark the current, non ROBIN StorageClass as “non-default” before proceeding to the next step:

$ kubectl patch storageclass standard \
   -p '{"metadata": {"annotations":{"storageclass.beta.kubernetes.io/is-default-class":"false"}}}'

Note

Before patching the storage class ensure that the annotation specified is correct. The above example is specific to a GKE cluster running version 1.12 of Kubernetes.

STEP 3: Now mark ROBIN as the new default StorageClass:

$ kubectl patch storageclass robin \
   -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

Note

Before patching the ROBIN storage class, ensure that the name specified is correct: for Kubernetes v1.13 and above it appears as robin, while for older versions it is displayed as robin-0-3.

STEP 4: Confirm that ROBIN is now the default StorageClass:

$ kubectl get storageclass
NAME                 PROVISIONER               AGE
standard             kubernetes.io/gce-pd      11d
robin (default)      robin                     1d

To learn more, see the official Kubernetes documentation on how to Change the default StorageClass.

3.4. Snapshot Volumes

Just like storage management, which is done by an external storage provisioner such as ROBIN, taking snapshots of a volume is done by a snapshotting provisioner registered with Kubernetes. See the official documentation on Volume Snapshots for more details. ROBIN supports Kubernetes snapshots for Kubernetes v1.13 and beyond.

STEP 1: Register a SnapshotClass with Kubernetes:

$ cat robin-snapshot-class.yaml
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshotClass
metadata:
  name: robin-snapshotclass
snapshotter: robin

$ kubectl create -f robin-snapshot-class.yaml
volumesnapshotclass.snapshot.storage.k8s.io/robin-snapshotclass created

STEP 2: Confirm that a SnapshotClass is registered:

$ kubectl get volumesnapshotclass
NAME                      AGE
robin-snapshotclass       12h

STEP 3: Take a snapshot of a PersistentVolumeClaim:

$ cat take-snapshot.yaml
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshot
metadata:
  name: mypvc-snapshot
spec:
  snapshotClassName: robin-snapshotclass
  source:
    name: mypvc
    kind: PersistentVolumeClaim

$ kubectl create -f take-snapshot.yaml
volumesnapshot.snapshot.storage.k8s.io/mypvc-snapshot created

STEP 4: Confirm that the VolumeSnapshot for the PersistentVolumeClaim is created:

$ kubectl get volumesnapshot
NAME            AGE
mypvc-snapshot  1s

$ kubectl get volumesnapshotcontent
NAME                                               AGE
snapcontent-415d53bc-481a-11e9-bac1-00155d61160d   1s
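
To inspect further details of the snapshot, such as whether it is ready to use and which VolumeSnapshotContent it is bound to, you can describe the snapshot object (the exact fields shown vary slightly across snapshot API versions):

$ kubectl describe volumesnapshot mypvc-snapshot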

3.5. Clone Volumes

ROBIN can create a clone from a snapshot of a volume. Clones are read-write, so old data can be read from the parent snapshot while new data is written to the newly provisioned cloned volume. See the official Kubernetes documentation on Volume Snapshot Restores and Clones for more details. ROBIN supports Kubernetes clones for Kubernetes v1.13 and beyond.

Note

Clone is still an Alpha feature in Kubernetes, so it requires the VolumeSnapshotDataSource feature gate to be enabled on the apiserver and controller-manager.
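
On clusters where you manage the control plane yourself, this typically means passing a flag like the following to both the kube-apiserver and kube-controller-manager (the exact mechanism for setting component flags depends on how the cluster was deployed):

--feature-gates=VolumeSnapshotDataSource=true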

STEP 1: Take a Clone of a VolumeSnapshot:

$ cat take-clone.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc-clone-snap1
#annotations:
  # robin.io/media: <SSD, HDD>
  # robin.io/replication: <"2", "3">
  # robin.io/faultdomain: <disk, host>          // default disk
  # robin.io/encryption: <CHACHA20, AES256, AES128>
  # robin.io/snapshot_space_limit: "50"         // default 40%. Percentage of Vol size.

spec:
  storageClassName: robin
  dataSource:
    name: mypvc-snap1
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

$ kubectl create -f take-clone.yaml
persistentvolumeclaim/mypvc-clone-snap1 created

STEP 2: Confirm that the PersistentVolumeClaim for the clone is created:

$ kubectl get pvc
NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mypvc               Bound    pvc-83ed719a-5500-11e9-a0b7-00155d320462   1Gi        RWO            robin          49m
mypvc-clone-snap1   Bound    pvc-6dd554d1-5506-11e9-a0b7-00155d320462   1Gi        RWO            robin          7m19s
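
The cloned PVC behaves like any other PersistentVolumeClaim and can be mounted into a Pod. A minimal sketch, reusing the Pod pattern from the earlier example (the Pod and volume names are only illustrative):

$ cat myclone-pod.yaml
kind: Pod
apiVersion: v1
metadata:
   name: myclone-pod
spec:
   volumes:
      - name: clonedata
        persistentVolumeClaim:
          claimName: mypvc-clone-snap1
   containers:
      - name: myapp
        image: nginx
        volumeMounts:
           - mountPath: "/usr/share/nginx/html"
             name: clonedata

$ kubectl create -f myclone-pod.yaml
pod/myclone-pod created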

3.6. Handling Disruptions

With ROBIN, highly available applications can be deployed on Kubernetes because ROBIN handles failures of drives, racks, or hosts automatically. On a bare-metal setup, volumes can be set up with a replication factor of 2 or 3 to ensure that storage remains available even if a drive fails. Users can also choose the fault domain to be ‘host’ to protect against node reboots or loss.

However, in a public cloud environment cloud disks can be detached from one cloud node and reattached to another. For example, in AWS an EBS volume can be detached from one EC2 host and reattached to a different EC2 host; similarly, in GCP a PD can be moved across GCE nodes. If a cloud node (EC2, GCE, Azure VM) is terminated or rebooted, one would want any cloud drives attached to it (EBS, PD, Block) to be moved to one or more of the remaining healthy nodes automatically. This is not limited to cloud disks: SAN LUNs offered to ROBIN as disks can also be multi-mounted onto multiple nodes or moved from node to node. Users can still choose to replicate volumes on public cloud, since detaching and reattaching drives on cloud platforms takes some time.

Just having the storage available during a disruption will not help if Kubernetes can not access it from the Pod. For example a Kubernetes StatefulSet serializes the mounting and unmounting of a volume to protect against possible corruptions. ROBIN utilizes smart detection techniques to ensure that even if a volume is mounted on multiple nodes, it can differentiate the IOs issued from the previous stale mount and the new mount. With this consistency guarantees, ROBIN enables the Kubernetes StatefulSet to unmount a volume from a dead node and remount it on a healthy node where the Pod is scheduled to run. ROBIN actively monitors these events to allow for the fast failover of the Pods without user intervention and consequently enables users to reliably deploy highly available stateful applications on Kubernetes.