Docker secrets
Introduction
Secrets in Docker are sensitive data that should not travel unencrypted over the network, such as user accounts and passwords, TLS certificates, SSH keys, and other confidential data. A secret can be explicitly mapped to containers under the following conditions:
• The container must be part of a service (secrets do not work for standalone containers)
• If a node that hosts a container with a mapped secret temporarily loses its connection to the Docker swarm, the secret remains mapped to the container
• A secret cannot be removed while it is mapped to a service
• When the service stops, the in-memory filesystem on which the secrets are stored is unmounted and the secrets are flushed from the host's memory
• You can add additional secrets to a service at any time
When a secret is created, it is sent over mutual TLS to a swarm manager, which writes it to the Raft log; that log is replicated to all managers, so every manager holds the secret. When a secret is mapped to a container, it is mounted as an in-memory filesystem inside the container at /run/secrets/<secret-name>. It is important to note that an existing secret cannot be changed: to rotate it, the old secret must be removed from the service and a new one added, as in the sketch below.
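A minimal rotation sketch following from the rule above. The secret name mysecret2 and the service name centos are assumptions for illustration; the long form of --secret-add keeps the mount path /run/secrets/mysecret stable, so the application does not need to be reconfigured:
[root@swmanager ~]# printf "My new secret" | docker secret create mysecret2 -
[root@swmanager ~]# docker service update --secret-rm mysecret --secret-add source=mysecret2,target=mysecret centos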
Example (Container with secret)
Create your secret:
[root@swmanager ~]# printf "My secret" | docker secret create mysecret -
ohb9ntjkk67cwegvbublgbjbi
The secret value is produced with the printf command and piped to docker secret create; the trailing - tells the command to read the secret from standard input.
Create a CentOS service with your secret:
[root@swmanager ~]# docker service create --name centos --secret mysecret centos ping docker.com
ooowt7ba6lk05cx1is1cgy4sw
overall progress: 1 out of 1 tasks
1/1: running
verify: Service converged
Check that the service is in the running state:
[root@swmanager ~]# docker service ps centos
ID             NAME       IMAGE           NODE        DESIRED STATE   CURRENT STATE                ERROR   PORTS
pfzxzlv12xkf   centos.1   centos:latest   swworker2   Running         Running about a minute ago
To get the container name, run the following command:
[root@swmanager ~]# docker ps --filter name=centos
CONTAINER ID   IMAGE           COMMAND             CREATED          STATUS         PORTS   NAMES
21667354ba69   centos:latest   "ping docker.com"   16 seconds ago   Up 7 seconds           centos.1.s25fxotqimrjs1u8f0hkne5b8
Print your secret:
[root@swmanager ~]# docker exec centos.1.s25fxotqimrjs1u8f0hkne5b8 cat /run/secrets/mysecret
My secret
When the docker commit command is executed, a new image is created from the container, i.e. an image that captures all changes made to the container in the meantime. Secrets are not part of this process and are not transferred to the new image:
[root@swmanager ~]# docker commit $(docker ps --filter name=centos -q) new_centos
sha256:9c95a3fe8344f9e599a778d3be4a3b35269dba9e5d643acbc7456f95c4128278
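To confirm that the secret did not end up in the committed image, you can start a standalone container from it and try to read the file. A sketch; the exact error text may vary:
[root@swmanager ~]# docker run --rm new_centos cat /run/secrets/mysecret
cat: /run/secrets/mysecret: No such file or directory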
Check the list of secrets:
[root@swmanager ~]# docker secret ls
ID                          NAME       DRIVER   CREATED       UPDATED
ohb9ntjkk67cwegvbublgbjbi   mysecret            8 hours ago   8 hours ago
You can't remove a secret while it is mapped to a running service:
[root@swmanager ~]# docker secret rm mysecret
Error response from daemon: rpc error: code = InvalidArgument desc = secret 'mysecret' is in use by the following service: centos
First remove the secret from the service:
[root@swmanager ~]# docker service update --secret-rm mysecret centos
The secret is no longer part of the container (the update redeployed the task, which is why the container name has changed):
[root@swmanager ~]# docker exec centos.1.4fxvi29jjzr5nh9ht1kdvn9f8 cat /run/secrets/mysecret
cat: /run/secrets/mysecret: No such file or directory
Now secret can be removed:
[root@swmanager ~]# docker secret rm mysecret
mysecret
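By default a secret is mounted at /run/secrets/<secret-name>, but the long form of the --secret flag also lets you choose the file name, owner and permissions. A sketch, assuming the secret mysecret still exists; the service name centos2 and target name app_secret are chosen purely for illustration:
[root@swmanager ~]# docker service create --name centos2 --secret source=mysecret,target=app_secret,mode=0400 centos ping docker.com
The secret is then available inside the container at /run/secrets/app_secret, readable only by its owner.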
Mapping configuration files
While sensitive data is mapped as secrets, it is also possible to map data that does not need to be encrypted, such as configuration files. The goal of mapping configuration data is to keep images generic, without baking specific configurations and environment variables into them. The following conditions and rules apply:
• Configuration files can be added to or removed from a service at any time
• Multiple services may share the same configuration
• Services can use environment variables together with configuration files
• Configs are only available to services, not to standalone containers
• Supported on both Linux and Windows
When a configuration file is defined, it is sent to a swarm manager and written to the Raft log, which is replicated to all managers over TLS. When a configuration file is mapped to a container, it is mounted directly into the container's filesystem (unlike secrets, no in-memory filesystem is used) at /<config-name> by default. The file is owned by the user who runs the commands in the container; if explicit permissions are not set, they default to 0444. A configuration can be upgraded by changing the compose file and running the docker stack deploy -c <new-compose-file> <stack-name> command. Note that a new config name must be used, because configs are immutable.
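Outside of a stack, a config can also be rotated directly on a service with docker service update; since configs are immutable, a new config object has to be created first. A sketch, with myconfig_v2 as an assumed name:
[root@swmanager ~]# printf "new configuration" | docker config create myconfig_v2 -
[root@swmanager ~]# docker service update --config-rm myconfig --config-add source=myconfig_v2,target=/myconfig centos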
Example (Container with configuration files)
Create your configuration file:
[root@swmanager ~]# printf "configuration file" | docker config create myconfig -
t0p96dgfx9cu3280wvin20b6p
The configuration content is produced with the printf command and piped to docker config create; the trailing - reads it from standard input.
Create a CentOS service with your configuration file:
[root@swmanager ~]# docker service create --name centos --config myconfig centos ping docker.com
53g5bo4gwf3vie3qcga2ab16w
overall progress: 1 out of 1 tasks
1/1: running
verify: Service converged
Check that the service is in the running state:
[root@swmanager ~]# docker service ps centos
ID             NAME       IMAGE           NODE        DESIRED STATE   CURRENT STATE            ERROR   PORTS
lqkribml9bfn   centos.1   centos:latest   swmanager   Running         Running 47 seconds ago
To get the container name, run the following command:
[root@swmanager ~]# docker ps --filter name=centos
CONTAINER ID   IMAGE           COMMAND             CREATED         STATUS         PORTS   NAMES
765b22fb97f9   centos:latest   "ping docker.com"   2 minutes ago   Up 2 minutes           centos.1.lqkribml9bfnr6m5uerwczu2e
Print your configuration file:
[root@swmanager ~]# docker exec -it centos.1.lqkribml9bfnr6m5uerwczu2e cat /myconfig
configuration file
List the configuration files:
[root@swmanager ~]# docker config ls
ID                          NAME       CREATED         UPDATED
t0p96dgfx9cu3280wvin20b6p   myconfig   9 minutes ago   9 minutes ago
You can't remove a configuration while it is in use by a running service:
[root@swmanager ~]# docker config rm myconfig
Error response from daemon: rpc error: code = InvalidArgument desc = config 'myconfig' is in use by the following service: centos
First remove the configuration from the service:
[root@swmanager ~]# docker service update --config-rm myconfig centos
centos
overall progress: 1 out of 1 tasks
1/1: running
verify: Service converged
The configuration file is no longer mapped to the container (again, the update redeployed the task, so the container name has changed):
[root@swmanager ~]# docker exec -it centos.1.icgcvln3ppxoo6o8m77fkde24 cat /myconfig
cat: /myconfig: No such file or directory
Now it is safe to remove configuration file:
[root@swmanager ~]# docker config rm myconfig
myconfig
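As with secrets, the long form of the --config flag lets you choose the mount path, owner and permissions instead of the default /<config-name>. A sketch; the service name centos2 and the target path are chosen purely for illustration:
[root@swmanager ~]# docker service create --name centos2 --config source=myconfig,target=/etc/myapp/app.conf,mode=0440 centos ping docker.com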
Raft consensus
A consensus algorithm is required to maintain a consistent state in the cluster: all Docker swarm managers should hold the same cluster state at the same time. In the event of a leader manager node failure, another manager should be able to take over leadership and continue running the Docker swarm cluster. This is achieved through a distributed, encrypted Raft log that is identical on every manager node. Docker swarm uses the Raft consensus algorithm to keep the cluster state up to date.
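The current Raft roles can be inspected with docker node ls: the MANAGER STATUS column shows Leader for the current leader and Reachable for the other healthy managers. A sketch of the output for this article's manager node (the layout is illustrative):
[root@swmanager ~]# docker node ls
ID                            HOSTNAME    STATUS   AVAILABILITY   MANAGER STATUS
hxg0zpxc8jy6acu8qwfl8gx7d *   swmanager   Ready    Active         Leader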
Administrator guide
Manager node
The number of manager nodes in a swarm is unlimited. For a swarm to function, a critical number of manager nodes must be operational, i.e. the quorum must be maintained. The size of the quorum depends on the number of managers. The more managers you have, the more resistant the environment is to outages, but system performance suffers. Why? To approve each change in the cluster (scaling, updates, etc.), a green light is required from the quorum, which can consist of anywhere from 2 to an unlimited number of manager nodes. It is always advisable to have an odd number of managers, because that gives a better chance of holding the quorum if an outage occurs. The number of manager nodes that can fail is given by the formula (n-1)/2, as illustrated in the table below. In the event of a quorum loss, the swarm continues to work with its worker nodes, but updating, adding or deleting anything is not possible until the quorum is restored.
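To make the (n-1)/2 formula concrete, here is the fault tolerance for common manager counts:
Managers (n)   Quorum (majority)   Tolerated failures ((n-1)/2)
1              1                   0
3              2                   1
5              3                   2
7              4                   3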
Procedures during quorum loss
- The best way to recover is to restore the lost manager nodes
- If this is not possible, re-initialize the cluster with the --force-new-cluster parameter; all remaining manager nodes are removed except the one on which the command was run. The quorum holds because there is only one manager left, and it has all the swarm information (see the example below)
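A sketch of the recovery command, run on the surviving manager (the address is this article's example manager address):
[root@swmanager ~]# docker swarm init --force-new-cluster --advertise-addr 192.168.78.3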
Static or DHCP addresses?
For manager nodes that are a stable part of the swarm infrastructure, it is always advisable to use static IP addresses, while workers can use DHCP.
Manager and worker on the same node?
This is the default behaviour (manager nodes also run tasks), but it is always recommended that a node serve only one role.
Resource allocation in cluster
The swarm cluster ensures that resources, i.e. containers, are allocated optimally to maintain fairness across nodes. The exceptions are placement rules such as CPU and memory requirements, host metadata matches and so on. It is important to note that the swarm cluster does not automatically rebalance when a new worker node is added: swarm avoids the performance degradation of moving containers around the cluster. If you want to force a rebalance, use the --force parameter with docker service update, as shown below.
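A sketch of forcing a rebalance of the centos service from the earlier examples; --force redeploys the tasks even though nothing in the service definition has changed, letting the scheduler place them across the enlarged cluster:
[root@swmanager ~]# docker service update --force centos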
Example 1 (Monitor swarm health)
Managers can be in a reachable or unreachable state; workers can be ready or down. If a manager is unreachable, try the following:
- Restart the Docker daemon
- Restart the host
- Add a new manager node
Check the availability of the manager node:
[root@swmanager ~]# docker node inspect swmanager --format "{{ .ManagerStatus.Reachability }}"
reachable
Check if manager node can receive new API requests:
[root@swmanager ~]# docker node inspect swmanager --format "{{ .Status.State }}"
ready
Example 2 (Remove manager node by force)
In case a node becomes inaccessible or corrupted, it can be removed from the cluster by force:
[root@swmanager ~]# docker node remove --force swworker2
swworker2
Docker swarm lock
The Docker swarm cluster makes a compromise in protecting itself. By default there is no manual locking and unlocking of the swarm cluster: the TLS and encryption keys used for communication between nodes and for encrypting the Raft logs are stored on disk. An attacker could theoretically access these keys and compromise the security of the swarm cluster. It is therefore recommended to lock the swarm cluster and unlock it manually after each restart. The unlock key should be stored outside the cluster in a safe place, such as a password manager.
Example (Locking the cluster)
Initialize the cluster with the --autolock parameter:
[root@swmanager ~]# docker swarm init --autolock --advertise-addr 192.168.78.3
Swarm initialized: current node (hxg0zpxc8jy6acu8qwfl8gx7d) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-51kk996trzlr72i52dgg8l92fmk2acdliuzf7rdgri89cw9uaq-dhq3o6xpiqzhs5pi60r3adow4 192.168.78.3:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

To unlock a swarm manager after it restarts, run the `docker swarm unlock` command and provide the following key:

    SWMKEY-1-qW4Jl3wIf3DVTSw/7xmRsSh4n57WVfBuA78BOAIDTRg

Please remember to store this key in a password manager, since without it you will not be able to restart the manager.
Store the unlock key in a safe place: SWMKEY-1-qW4Jl3wIf3DVTSw/7xmRsSh4n57WVfBuA78BOAIDTRg
Restart the docker service:
[root@swmanager ~]# service docker restart
Redirecting to /bin/systemctl restart docker.service
We can see that the docker swarm is locked:
[root@swmanager ~]# docker service ls
Error response from daemon: Swarm is encrypted and needs to be unlocked before it can be used. Please use "docker swarm unlock" to unlock it.
Unlock swarm cluster:
[root@swmanager ~]# docker swarm unlock
Please enter unlock key:
[root@swmanager ~]# docker service ls
ID   NAME   MODE   REPLICAS   IMAGE   PORTS
[root@swmanager ~]#
You can enable or disable autolocking on the fly:
[root@swmanager ~]# docker swarm update --autolock=false
Swarm updated. |
To list the current unlock key:
[root@swmanager ~]# docker swarm unlock-key
To unlock a swarm manager after it restarts, run the `docker swarm unlock` command and provide the following key:

    SWMKEY-1-nEO5vHEWQaEXPrxBtgc1BlpB6oLmxNAkZlJweUln+7g

Please remember to store this key in a password manager, since without it you will not be able to restart the manager.
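The unlock key can also be rotated; after a rotation the old key no longer works, so store the new one immediately. A sketch:
[root@swmanager ~]# docker swarm unlock-key --rotate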
Security
Docker implements security with a Public Key Infrastructure (PKI). Nodes in a swarm cluster communicate over mutual TLS. When you first initialize the swarm with the docker swarm init command, the node becomes the leading manager node and the Certificate Authority (CA). The CA issues manager and worker join tokens. A token consists of the CA digest and a random password: with the CA digest, new candidate nodes validate the CA certificate, while the CA authenticates new members with the password. Every new node that becomes a member of the swarm cluster receives a certificate from the CA.
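The join tokens issued by the CA can be displayed, and rotated so that the old token stops working without affecting members that have already joined; the swarm root CA itself can be rotated as well. A sketch of the relevant commands:
[root@swmanager ~]# docker swarm join-token worker
[root@swmanager ~]# docker swarm join-token --rotate worker
[root@swmanager ~]# docker swarm ca --rotate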