Why Is Container Security Important? What Are The Top 10 Docker Projects According To OWASP?

Why is Container Security Important?

Container security is a crucial component of any thorough security evaluation. It is the practice of protecting containerized applications against potential threats using a combination of security technologies and policies. Docker is the most widely used containerization technology. Used properly, it can raise the bar for security compared to running applications directly on the host. On the other hand, configuration errors can compromise security or even introduce new vulnerabilities.

What are the Top 10 Docker Projects according to OWASP?

OWASP Docker Top 10

The OWASP Docker Top 10 project provides ten bullet points that help you plan and implement a secure Docker-based container infrastructure. The ten points are ordered by importance. Unlike the OWASP Top 10, where each individual point describes a risk, they describe security controls.

D01 – Secure User Mapping

Most often the application within the container runs with default administrative privileges: root. This violates the principle of least privilege and gives an attacker a better chance of extending his activities further if he manages to break out of the application into the container. From the host's perspective, the application should never run as root.

D02 – Patch Management Strategy

The host, the container technology, the orchestration solution and the minimal operating system images in the containers will have security bugs. Once these are publicly known, it is vital for your security posture to address them in a timely fashion. For all of the components mentioned, you need to decide when to apply regular and emergency patches before putting them into production.

D03 – Network Segmentation and Firewalling

You need to design your network properly upfront. Management interfaces of the orchestration tool, and especially network services of the host, are crucial and need to be protected at the network level. Also make sure that every other network-based microservice is exposed only to its legitimate consumers, not to the whole network.
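This kind of segmentation can be sketched with Docker's own network commands; the network, container and image names below are hypothetical:

```shell
# create an internal network with no outbound connectivity
docker network create --internal backend

# attach only the services that legitimately talk to each other
docker run -d --network backend --name db postgres
docker run -d --network backend --name api my-api-image
```

A frontend service that must be reachable from outside would additionally be attached to a non-internal network and have its port published with -p.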

D04 – Secure Defaults and Hardening

Depending on your choice of host operating system, container operating system and orchestration tool, you have to ensure that no unneeded components are installed or started. All needed components must also be properly configured and locked down.

D05 – Maintain Security Contexts

Mixing production containers on one host with containers from other, less secure or undefined stages may open a backdoor into your production environment. Likewise, mixing e.g. frontend and backend services on one host may have negative security impacts.

D06 – Protect Secrets

Authentication and authorization of a microservice against a peer or a third party requires secrets to be provided. Those secrets potentially enable an attacker to access more of your data or services. Thus any passwords, tokens, private keys or certificates need to be protected as well as possible.

D07 – Resource Protection

All containers share the same physical CPUs, disks, memory and networks. These physical resources need to be protected so that a single container running out of control, deliberately or not, doesn't affect any other container's resources.
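Docker exposes such limits as run-time flags; a minimal sketch for constraining a single container (the image name is hypothetical):

```shell
# cap memory, CPU time and process count for one container:
#   --memory      hard memory limit
#   --cpus        fraction of CPU cores the container may use
#   --pids-limit  guards against fork bombs
docker run -d --memory=256m --cpus=0.5 --pids-limit=100 my-service-image
```

Without such limits, a runaway container can starve every other container on the host.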

D08 – Container Image Integrity and Origin

The minimal operating system in the container runs your code and needs to be trustworthy, starting from the origin up until the deployment. You need to make sure that all transfers and images at rest haven’t been tampered with.
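Docker Content Trust can enforce signature verification when pulling; a minimal sketch (registry and image name are hypothetical):

```shell
# with content trust enabled, docker pull refuses unsigned images
export DOCKER_CONTENT_TRUST=1
docker pull example.com/my-team/my-service:1.2.3
```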

D09 – Follow Immutable Paradigm

Once set up and deployed, containers often don't need to write to their own filesystem or to a mounted filesystem. In those cases you gain an extra security benefit by starting the containers in read-only mode.
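A sketch of this read-only pattern, with the image name assumed for illustration; a small tmpfs is mounted for the few paths that genuinely need writes:

```shell
# root filesystem mounted read-only; /tmp is an in-memory scratch area
docker run -d --read-only --tmpfs /tmp:rw,noexec,nosuid,size=64m my-service-image
```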

D10 – Logging

For your container image, orchestration tool and host you need to log all security-relevant events at the system and API level. All logs should be shipped to a remote system, carry a common timestamp, and be tamper-proof. Your application should also provide remote logging.
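Remote logging for a container's stdout/stderr can be sketched with the syslog logging driver; the collector address and image name below are hypothetical:

```shell
# send this container's output to a remote syslog endpoint
docker run -d \
  --log-driver=syslog \
  --log-opt syslog-address=udp://logs.example.com:514 \
  my-service-image
```

The same can be configured daemon-wide via the log-driver setting in /etc/docker/daemon.json.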

D01 - Secure User Mapping

How can I find out?


Depending on how you start your containers, the first place to look is the configuration / build file of your container, to see whether it specifies a user.


Then have a look at the process list on the host, or use docker top or docker inspect:

```shell
# processes on the host
ps auxwf

# processes of a single container
docker top <containerID>

# processes of all running containers
for d in $(docker ps -q); do docker top $d; done
```

Determine the value of the key Config/User in the output of docker inspect:

```shell
# user of a single container
docker inspect <containerID> --format='{{.Config.User}}'

# user of all running containers
docker inspect $(docker ps -q) --format='{{.Config.User}}'
```

User namespaces

The files /etc/subuid and /etc/subgid do the UID mapping for all containers. If they don't exist and /var/lib/docker/ contains no entries owned by users other than root:root, you're not using any UID remapping. If, on the other hand, those files exist and there are such entries in that directory, you still need to check whether your Docker daemon was started with --userns-remap or configured via /etc/docker/daemon.json.
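The format of those files can be read as follows; this is a minimal sketch using an example entry, not one read from your system:

```shell
# an /etc/subuid entry has the form <user>:<first host UID>:<count>,
# e.g. container UID 0 mapping to host UID 100000 for 65536 IDs
line="dockremap:100000:65536"
user=${line%%:*}
rest=${line#*:}
start=${rest%%:*}
count=${rest#*:}
echo "container UID 0 runs as host UID $start ($count subordinate IDs for $user)"
```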

How do I prevent it?

It is important to run your microservice with the least privilege possible.

First of all: never use the --privileged flag. It gives the container all so-called capabilities (see D04), access to host devices (/dev) including disks, and access to the /sys and /proc filesystems. With a little work the container can even load kernel modules on the host [2]. The good news is that containers are unprivileged by default; you would have to configure them explicitly to run privileged.

However, running your microservice under a user other than root still requires configuration. You need to configure the mini distribution of your container to contain a user (and possibly a group), and your service needs to make use of this user and group.

Basically there are two choices.

In a simple container scenario, when you build your container you add RUN useradd <username> or RUN adduser <username> with the appropriate parameters (the same applies to group IDs). Then, before the microservice is started, USER <username> [3] switches to this user. Note that a standard web server wants to use a port like 80 or 443, and a configured non-root user cannot bind to any port below 1024. There is no need at all for any service to bind to a low port: configure a higher port instead and map it accordingly when starting the container (EXPOSE only documents the port; the actual mapping is done with -p) [4]. If a binary needs root for other reasons, you can grant it just the required capability with setcap instead of full root privileges [5].
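This first choice can be sketched in a Dockerfile; the base image, user name and server binary are assumptions for illustration:

```dockerfile
FROM alpine:3.19
# create an unprivileged user and group for the service
RUN addgroup -S app && adduser -S -G app app
COPY server /usr/local/bin/server
# document the unprivileged port the service listens on
EXPOSE 8080
# everything from here on runs as the unprivileged user
USER app
CMD ["/usr/local/bin/server"]
```

At runtime the high port is mapped to the desired low port, e.g. docker run -p 80:8080 <image>.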

The second choice is using Linux user namespaces. Namespaces are a general means of providing a container with a different (faked) view of Linux kernel resources. Several resources can be namespaced, such as User, Network, PID and IPC; see namespaces(7). In the case of user namespaces, a container can be given the perspective of a standard root user while the host kernel maps this to a different user ID. For more, see [6], cgroup_namespaces(7) and user_namespaces(7).

User namespaces do come with some limitations [7]. If you run user namespacing, you can't, for example, share the network or PID namespace with the host (--network=host, --pid=host). Also, all containers on the host will use the remapping by default, unless you explicitly configure this differently per container.
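Enabling the remapping daemon-wide can be sketched in /etc/docker/daemon.json; the value "default" makes Docker create and use a dedicated dockremap user:

```json
{
  "userns-remap": "default"
}
```

After restarting the daemon, check /etc/subuid and /etc/subgid for the generated ranges.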

In any case, use user IDs that haven't been taken yet. If, for example, the user in your container maps on the host to an existing system user (e.g. one used by systemd), this is not necessarily an improvement.

Your mileage may vary if you're using an orchestration tool. In an orchestrated environment, make sure that you have a proper pod security policy.

