By Jeff Nickoloff
Containers are a crosscutting concern. There are more reasons and ways that people could use them than I could ever enumerate. So, it’s important that when you use Docker to build containers to serve your own purposes, you take the time to do so in a way that’s appropriate for the software you’re running.
The most secure tactic would be to start with the most isolated container you can build and justify each weakening of those restrictions. In reality, people tend to be a bit more reactive than proactive. For that reason, I think Docker hits a sweet spot with its default container construction. It provides reasonable defaults without hindering the productivity of users.
Docker containers are not the most isolated by default. Docker does not require that you enhance those defaults. It will let you do silly things in production if you want to. This makes Docker seem much more like a tool than a burden and something people generally want to use rather than something they feel like they have to use. For those who would rather not do silly things in production, Docker provides a simple interface to enhance container isolation.
Applications
Applications are the whole reason we use computers. Most applications are programs that other people wrote and work with potentially malicious data. Consider your web browser.
A web browser is a type of application that’s installed on almost every computer. It interacts with web pages, images, scripts, embedded video, Flash™ documents, Java applications, and anything else out there. You certainly didn’t create all that content, and few people have ever contributed to a web browser project. How can you trust your web browser to handle all that content correctly?
Some more cavalier readers might just ignore the problem. After all, what’s the worst thing that could happen? Well, if an attacker gains control of your web browser (or other application), they will gain all of the capabilities of that application and the permissions of the user that it is running as. They could trash your computer, delete your files, install other malware, or even launch attacks against other computers from yours. So, this isn’t a good thing to ignore. The question remains, how do you protect yourself when this is a risk that you need to take?
The best approach is to isolate the risk. First, make sure that the application is running as a user with limited permissions. This way, if there’s a problem it won’t be able to change the files on your computer. Second, limit the system capabilities of the browser. In doing so you make sure that your system configuration is safer. Third, set limits on how much of the CPU and memory the application can use. Limits will help reserve resources to keep the system responsive. Last, it’s a good idea to specifically whitelist devices that it can access. This will keep snoops off your webcam, USB, and the like.
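As a sketch, each of these four steps maps onto a `docker run` flag. The image name `example/browser`, the UID, and the device path here are hypothetical stand-ins, not values from the text:

```shell
# Each flag corresponds to one of the four isolation steps above.
run_flags=(
  --user 1000:1000          # 1. run as a user with limited permissions
  --cap-drop ALL            # 2. limit the system capabilities of the app
  --cpus 2 --memory 512m    # 3. cap how much CPU and memory it can use
  --device /dev/video0      # 4. whitelist only the devices it may access
)

# Only attempt the run when a Docker client is actually available.
if command -v docker >/dev/null 2>&1; then
  docker run -d "${run_flags[@]}" example/browser
fi
```

Starting from `--cap-drop ALL` and whitelisting devices one by one follows the "start isolated, then justify each weakening" tactic described earlier.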
High-level system services
High-level system services are a bit different from applications. They’re not part of the operating system, but your computer makes sure they’re started and kept running. Although they sit alongside applications, they often require privileged access to the operating system to operate correctly. They provide important functionality to users and other software on a system. Examples include cron, syslogd, dbus, sshd, and docker.
If you’re unfamiliar with these tools (hopefully not all of them), that’s all right. They keep system logs (syslogd), run scheduled commands (cron), and provide a way to get a secure shell on the system from the network (sshd); docker, of course, manages containers.
Although running these services as root is common, few of them actually need full privileged access. Use capabilities to tune their access to the specific features they need.
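For example, a service that needs root only to bind a privileged port can drop every other capability. This is a sketch under that assumption; the image name `example/service` is hypothetical:

```shell
# Drop all capabilities, then add back only what the service needs.
cap_flags=(
  --cap-drop ALL              # start from zero capabilities
  --cap-add NET_BIND_SERVICE  # allow binding ports below 1024, nothing more
)

# Only attempt the run when a Docker client is actually available.
if command -v docker >/dev/null 2>&1; then
  docker run -d "${cap_flags[@]}" example/service
fi
```

With this shape of invocation, a compromise of the service yields far less than full root on the host.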
Low-level system services
Low-level services control things like devices or the system’s network stack. They require privileged access to the components of the system they manage (for example, firewall software needs administrative access to the network stack).
It’s rare to see these run inside containers. Tasks like file system management, device management, and network management are core host concerns. Most software run in containers is expected to be portable. So machine-specific tasks like these are a poor fit for general container use cases.
The clearest exceptions are short-running configuration containers. For example, in an environment where all deployments happen with Docker images and containers, you’d want to push network stack changes the same way you push software. In this case, you might push an image with the configuration to the host and make the changes with a privileged container. The risk in this case is reduced because you authored the configuration being pushed, the container is not long-running, and changes like these are simple to audit.
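Such a push might look like the following sketch. The image name `example/net-config` and its entrypoint are hypothetical; the flags are the real `docker run` options that make the container privileged, host-networked, and short-lived:

```shell
# A short-lived, privileged container that reconfigures the host network stack.
config_flags=(
  --rm          # remove the container as soon as the change is applied
  --privileged  # grant the access a low-level configuration task requires
  --net host    # operate directly on the host's network stack
)

# Only attempt the run when a Docker client is actually available.
if command -v docker >/dev/null 2>&1; then
  docker run "${config_flags[@]}" example/net-config
fi
```

`--rm` keeps the container from lingering after the change, which is what makes this pattern easier to audit than a long-running privileged service.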
Docker: Build use-case-appropriate containers
Excerpted from Docker in Action.