Nick's Blog

Make the Deep Security Smart Check db pod backed by a managed Azure disk

I was working with Trend Micro Deep Security Smart Check for container scanning. The client I'm working with has a big container registry, around 10k images. I used the default Helm deployment without customisation, which means it only allocates 8Gi of storage for the db and leaves storageClassName blank.

The storage quickly ran out and crashed the db. Since the storage class is blank, you cannot extend it. Running into this is fine if you are just testing, but for production use you want either a managed PostgreSQL or a PV backed by a managed disk. In this case, I went with a managed-disk-backed PV and made the PVC claim that disk.
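
For reference, the relevant chart defaults look roughly like this (reconstructed from memory of the default deployment, so double-check against your chart version):

persistence:
  enabled: true
  storageClassName: ""   # blank, so there is nothing to resize against
  db:
    size: 8Gi            # far too small for a registry with ~10k images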

Because this client is on Azure, I created a managed disk in Azure, copied the resource ID from the disk properties, and created the following files.
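
If you prefer the Azure CLI to the portal, something like this should do it (the resource group and disk name are just the values used in the rest of this post):

# create a 20GiB standard SSD managed disk in the existing resource group
az disk create --resource-group myResources --name dssc-db --size-gb 20 --sku StandardSSD_LRS

# print the resource ID to paste into values.yaml as diskURI
az disk show --resource-group myResources --name dssc-db --query id --output tsv

If the cluster is AKS, the cluster's service principal or managed identity also needs permission on the disk (or its resource group) to attach it to a node.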

## values.yaml

persistence:
  enabled: true
  storageClassName: "azure-disk-static-volume"
  db:
    size: 20Gi # also used as the PV capacity in templates/pv.yaml; should match the Azure disk size
    storageClassName: "azure-disk-static-volume"

azureDisk:
  kind: Managed
  diskName: dssc-db
  diskURI: /subscriptions/{subid}/resourcegroups/myResources/providers/Microsoft.Compute/disks/dssc-db

## templates/storageClass-{name}.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azure-disk-static-volume
provisioner: kubernetes.io/azure-disk
parameters:
  kind: Managed
  # storageAccount only applies to unmanaged (blob-backed) disks, so it is left out for a managed disk
  storageaccounttype: Standard_LRS
reclaimPolicy: Retain

## templates/pv.yaml
{{- if and .Values.persistence.enabled (not .Values.persistence.existingClaim) }}
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-{{ template "smartcheck.name" . }}-data
  labels:
    app: {{ template "smartcheck.name" . }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    release: "{{ .Release.Name }}"
    heritage: "{{ .Release.Service }}"
spec:
  capacity:
    storage: {{ .Values.persistence.db.size }}
  storageClassName: {{ .Values.persistence.storageClassName | quote }}
  azureDisk:
    kind: {{ .Values.azureDisk.kind | quote }}
    diskName: {{ .Values.azureDisk.diskName | quote }}
    diskURI: {{ .Values.azureDisk.diskURI | quote }}
  accessModes:
    - ReadWriteOnce
{{- end }}
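
The chart renders the db PVC itself, so there is nothing more to add here; for the static binding to work, the PVC it produces just has to end up looking roughly like this (same storageClassName, a request no larger than the PV's capacity, and ReadWriteOnce access):

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: <db PVC name rendered by the chart>
spec:
  storageClassName: azure-disk-static-volume
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi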

So I ran helm install after changing/adding the above files. Just be aware that you cannot helm upgrade an existing default deployment into this; it just doesn't work that way. You need a fresh install.
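
For the record, the install itself is just an ordinary install run from the chart directory (the release name and namespace here are my own choices):

# fresh install (Helm 3); the edited values.yaml above is part of the chart itself, so no extra --values flag is needed
helm install deepsecurity-smartcheck . --namespace smartcheck --create-namespace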

After the deployment, you will see the PV created and the PVC bound successfully. Navigate to the disk in the Azure portal and you should see it attached to one of the worker nodes in the Kubernetes cluster. You may not see the status showing as attached if you use a premium SSD; I don't know the reason, but I was just using a standard SSD.
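
You can confirm the same thing from the cluster side (the namespace assumes the install command above):

# the PV should show STATUS Bound, with the db PVC listed as its claim
kubectl get pv
kubectl get pvc --namespace smartcheck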