ConfigMap and Secret Volumes
Your application needs a configuration file at startup. The naive approach is to bake the file into the image. But then every environment change requires a new image build, a new push, and a new deployment. Configuration and code are now coupled in the worst possible way.
Kubernetes separates them. A ConfigMap stores arbitrary key-value pairs or file contents as a cluster object. A Secret stores the same but for sensitive data. Both can be mounted into a container as files, appearing at a path of your choice. The application reads its config file normally, with no awareness that Kubernetes put it there.
ConfigMap as a Volume
Start by creating a ConfigMap that holds a configuration file:
```shell
nano app-config.yaml
```

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  config.yaml: |
    log_level: info
    max_connections: 100
    timeout_seconds: 30
```

```shell
kubectl apply -f app-config.yaml
```

The key `config.yaml` becomes the filename. The value is the file content.
Now mount it into a Pod. Build the spec step by step.
First, declare the volume in spec.volumes, referencing the ConfigMap by name:
```yaml
# illustrative only
spec:
  volumes:
    - name: config
      configMap:
        name: app-config
```

Then mount it in the container:
```yaml
# illustrative only
containers:
  - name: app
    image: nginx:1.28
    volumeMounts:
      - name: config
        mountPath: /etc/app
```

The full manifest:
```shell
nano config-pod.yaml
```

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: config-pod
spec:
  volumes:
    - name: config
      configMap:
        name: app-config
  containers:
    - name: app
      image: nginx:1.28
      volumeMounts:
        - name: config
          mountPath: /etc/app
```

```shell
kubectl apply -f config-pod.yaml
```

Once the Pod is running, verify the file is there:
```shell
kubectl exec config-pod -- ls /etc/app
kubectl exec config-pod -- cat /etc/app/config.yaml
```

You should see `config.yaml` with the content from the ConfigMap. The container needs no special code to read it; it is just a file on the filesystem.
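By default, every key in the ConfigMap appears as a file under the mount path. If you want only some keys, or a different filename, the volume's `items` field remaps them. A sketch (the target filename `app.conf` is an arbitrary choice for illustration):

```yaml
# illustrative only -- mounts only the config.yaml key, renamed to app.conf
spec:
  volumes:
    - name: config
      configMap:
        name: app-config
        items:
          - key: config.yaml   # key in the ConfigMap
            path: app.conf     # filename under the mountPath
```

With this volume definition, the container sees `/etc/app/app.conf` and nothing else, even if the ConfigMap gains more keys later.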
You update the ConfigMap with a new log_level: debug. Does the running Pod see the change immediately?
Reveal answer
Eventually, yes, with a short delay. Kubernetes syncs ConfigMap-backed volume mounts periodically (default: around 60 seconds). The file on disk is updated automatically without restarting the container. However, whether the application picks up the change depends on how it reads config: applications that re-read config files on a signal or interval will see it; applications that only read config at startup will not see it until the container restarts.
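One caveat worth knowing: the automatic update applies only to whole-volume mounts. A container that mounts a single file with `subPath` gets a one-time copy at startup and never sees ConfigMap updates. A sketch of the pattern to avoid when you rely on live updates:

```yaml
# illustrative only -- a subPath mount does NOT receive ConfigMap updates
volumeMounts:
  - name: config
    mountPath: /etc/app/config.yaml
    subPath: config.yaml
```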
Secret as a Volume
Secrets work identically to ConfigMaps from a volume mount perspective. The difference is intent: Secret values are stored base64-encoded in the API object (an encoding, not encryption), and Kubernetes handles their distribution more carefully than plain ConfigMaps, for example by backing Secret volumes with tmpfs so the values never touch the node's disk.
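To make the encoding point concrete, here is what base64 does to the password used below. Anyone can reverse it, which is why base64 is not a security measure:

```shell
# Encode, the way the value appears inside the Secret object
echo -n 's3cr3tpassword' | base64
# → czNjcjN0cGFzc3dvcmQ=

# Decode, which anyone with read access to the Secret can do
echo -n 'czNjcjN0cGFzc3dvcmQ=' | base64 --decode
# → s3cr3tpassword
```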
Create a Secret with a database password:
```shell
kubectl create secret generic db-credentials \
  --from-literal=password=s3cr3tpassword \
  --from-literal=username=admin
```

Mount it as a volume:
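If you prefer a manifest over the imperative command, the same Secret can be written declaratively. The `stringData` field accepts plain text and Kubernetes performs the base64 encoding for you; a sketch equivalent to the command above:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:               # plain-text convenience field
  username: admin
  password: s3cr3tpassword
```

Keep in mind that a manifest like this contains the credentials in clear text, so it should not be committed to version control as-is.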
```shell
nano secret-pod.yaml
```

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-pod
spec:
  volumes:
    - name: creds
      secret:
        secretName: db-credentials
  containers:
    - name: app
      image: nginx:1.28
      volumeMounts:
        - name: creds
          mountPath: /etc/credentials
          readOnly: true
```

```shell
kubectl apply -f secret-pod.yaml
kubectl exec secret-pod -- ls /etc/credentials
kubectl exec secret-pod -- cat /etc/credentials/password
```

Each key in the Secret becomes a file. The value is the decoded content. The `readOnly: true` field prevents the container from modifying the mounted Secret, which is a good default for credentials.
Mounting a Secret as a volume does not make it invisible inside the container. Any process running as root can read /etc/credentials/password. The readOnly flag prevents the container from writing to the mount, not from reading it. Proper security requires also restricting which containers have access to the Secret and running containers as non-root users.
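Two knobs help with both points. This is a sketch, not a complete hardening guide: `defaultMode` tightens the permissions on the mounted files, and a Pod-level `securityContext` runs the container as a non-root user (the UID/GID 1000 here is an arbitrary example):

```yaml
# illustrative only
spec:
  securityContext:
    runAsUser: 1000      # run the container process as a non-root UID
    runAsNonRoot: true   # refuse to start if the image would run as root
    fsGroup: 1000        # volume files are group-owned by this GID
  volumes:
    - name: creds
      secret:
        secretName: db-credentials
        defaultMode: 0440  # owner and group read-only, no access for others
```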
Clean up:
```shell
kubectl delete pod config-pod secret-pod
kubectl delete configmap app-config
kubectl delete secret db-credentials
```

ConfigMap and Secret volumes decouple configuration from images. You update the cluster object, and the mounted files update in running Pods without a rebuild. The next lesson steps up to a different class of storage problem: data that must outlive the Pod entirely, across deletions, restarts, and rescheduling events.