Encryption of Kubernetes Persistent Local Volumes
Data can be encrypted at rest at the filesystem level or at the block device level. At the filesystem level, e.g., with eCryptfs, a stacked file system appends metadata to each file that is used to decrypt it. At the block level, the encryption is transparent to the filesystem and generally faster.
Kubernetes Local Persistent Volumes let us manage local block storage the same way we would manage AWS EBS disks in Kubernetes. We will create multiple virtual block devices backed by files in the existing file system, encrypt these volumes with LUKS using aes-xts-plain64 and a sha256 password hash, and make them available as persistent storage volumes.
function create_disk {
  # $1 is the disk number, e.g. create_disk 1
  # generate a random 2KB key file readable only by root
  dd bs=512 count=4 if=/dev/urandom of=/disk$1-key
  chmod 600 /disk$1-key
  # allocate a 4GB file as backing storage
  fallocate -l $((4*1024*1024*1024)) /disk$1
  # format the backing file as a LUKS container using the key file
  cat /disk$1-key | cryptsetup -q luksFormat --hash sha256 /disk$1 -d -
  # open the LUKS container as /dev/mapper/device-disk-<n>
  cryptsetup luksOpen /disk$1 device-disk-$1 --key-file /disk$1-key
  # create a filesystem and mount it
  mkfs.ext4 /dev/mapper/device-disk-$1
  mkdir -p /data/disk$1
  mount /dev/mapper/device-disk-$1 /data/disk$1
}
Called with the disk number 1, this snippet stores a random key in /disk1-key, creates a 4GB file at /disk1, creates the device-mapper device /dev/mapper/device-disk-1 and mounts it at /data/disk1. Everything written to this device is stored encrypted in the backing file. Note, however, that the data is accessible in plaintext while the device is mounted, so you still need access control to prevent access to the mount.
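To provision several disks, the function can be called in a loop. This is a minimal sketch; the count of three disks is an assumption:
# create and mount three encrypted disks under /data/disk1 .. /data/disk3
for i in 1 2 3; do
  create_disk $i
done
# sanity check: confirm the backing file is a LUKS container
# and that the mapped devices are mounted
cryptsetup luksDump /disk1 | head -n 5
mount | grep device-disk-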
The next step is to create a StorageClass for the local volumes:
# Only create this for K8s 1.9+
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-disk
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
# Supported policies: Delete, Retain
reclaimPolicy: Delete
We can create the persistent volumes manually or use a provisioner.
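For the manual route, a PersistentVolume for /data/disk1 might look like this minimal sketch (the node name worker-1 is an assumption):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-disk-1
spec:
  capacity:
    storage: 4Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: local-disk
  local:
    path: /data/disk1
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-1   # assumed node name
To automate this for every disk mounted under /data, we instead deploy the local volume provisioner: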
---
# Source: provisioner/templates/provisioner.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-provisioner-config
  namespace: kube-system
data:
  storageClassMap: |
    local-disk:
      hostDir: /data
      mountDir: /data
      blockCleanerCommand:
        - "/scripts/shred.sh"
        - "2"
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: local-volume-provisioner
  namespace: kube-system
  labels:
    app: local-volume-provisioner
spec:
  selector:
    matchLabels:
      app: local-volume-provisioner
  template:
    metadata:
      labels:
        app: local-volume-provisioner
    spec:
      serviceAccountName: local-storage-admin
      containers:
        - image: "quay.io/external_storage/local-volume-provisioner:v2.1.0"
          imagePullPolicy: "Always"
          name: provisioner
          securityContext:
            privileged: true
          env:
            - name: MY_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: MY_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: JOB_CONTAINER_IMAGE
              value: "quay.io/external_storage/local-volume-provisioner:v2.1.0"
          volumeMounts:
            - mountPath: /etc/provisioner/config
              name: provisioner-config
              readOnly: true
            - mountPath: /data
              name: local-disk
              mountPropagation: "HostToContainer"
      volumes:
        - name: provisioner-config
          configMap:
            name: local-provisioner-config
        - name: local-disk
          hostPath:
            path: /data
---
# Source: provisioner/templates/provisioner-service-account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: local-storage-admin
  namespace: kube-system
---
# Source: provisioner/templates/provisioner-cluster-role-binding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: local-storage-provisioner-pv-binding
  namespace: kube-system
subjects:
  - kind: ServiceAccount
    name: local-storage-admin
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: system:persistent-volume-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: local-storage-provisioner-node-clusterrole
  namespace: kube-system
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: local-storage-provisioner-node-binding
  namespace: kube-system
subjects:
  - kind: ServiceAccount
    name: local-storage-admin
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: local-storage-provisioner-node-clusterrole
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: jobsrole
  namespace: kube-system
rules:
  - apiGroups:
      - 'batch'
    resources:
      - jobs
    verbs:
      - '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: local-storage-provisioner-job-binding
  namespace: kube-system
subjects:
  - kind: ServiceAccount
    name: local-storage-admin
    namespace: kube-system
roleRef:
  kind: Role
  name: jobsrole
  apiGroup: rbac.authorization.k8s.io
The provisioner will create a PersistentVolume for each filesystem mounted under the directory /data on every node.
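A workload consumes one of these volumes through a PersistentVolumeClaim against the local-disk StorageClass. This is a minimal sketch; the claim and pod names and the 4Gi request are assumptions:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: encrypted-local-claim   # assumed name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-disk
  resources:
    requests:
      storage: 4Gi              # assumed size, fits the 4GB disks
---
apiVersion: v1
kind: Pod
metadata:
  name: encrypted-local-test    # assumed name
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "echo hello > /mnt/data/hello && sleep 3600"]
      volumeMounts:
        - mountPath: /mnt/data
          name: data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: encrypted-local-claim
Because the StorageClass uses volumeBindingMode: WaitForFirstConsumer, the claim stays Pending until the pod is scheduled, and it is then bound to a PersistentVolume on the pod's node.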
I expect the next step to be for the kubelet to mount and unmount local persistent volumes with a password.