phusion/baseimage-docker
A minimal Ubuntu base image modified for Docker-friendliness
https://github.com/phusion/baseimage-docker
- Host: GitHub
- URL: https://github.com/phusion/baseimage-docker
- Owner: phusion
- License: mit
- Created: 2013-11-12T10:41:02.000Z (almost 11 years ago)
- Default Branch: master
- Last Pushed: 2024-04-26T11:52:52.000Z (6 months ago)
- Last Synced: 2024-10-23T05:47:19.901Z (12 days ago)
- Language: Shell
- Homepage: http://phusion.github.io/baseimage-docker/
- Size: 2.04 MB
- Stars: 8,953
- Watchers: 238
- Forks: 1,091
- Open Issues: 4
Metadata Files:
- Readme: README.md
- Changelog: Changelog.md
- Contributing: CONTRIBUTING.md
- Funding: .github/FUNDING.yml
- License: LICENSE.txt
- Code of conduct: CODE_OF_CONDUCT.md
Awesome Lists containing this project
- starred-awesome - baseimage-docker - A minimal Ubuntu base image modified for Docker-friendliness (Python)
- awesome-starred - phusion/baseimage-docker - A minimal Ubuntu base image modified for Docker-friendliness (others)
README
# A minimal Ubuntu base image modified for Docker-friendliness
[![Release](https://github.com/phusion/baseimage-docker/actions/workflows/main.yml/badge.svg)](https://github.com/phusion/baseimage-docker/actions/workflows/main.yml)
_Baseimage-docker only consumes 8.3 MB RAM and is much more powerful than Busybox or Alpine. See why below._
Baseimage-docker is a special [Docker](https://www.docker.com) image that is configured for correct use within Docker containers. It is Ubuntu, plus:
* Modifications for Docker-friendliness.
* Administration tools that are especially useful in the context of Docker.
* Mechanisms for easily running multiple processes, [without violating the Docker philosophy](#docker_single_process).

You can use it as a base for your own Docker images.
Baseimage-docker is available for pulling from [the Docker registry](https://hub.docker.com/r/phusion/baseimage) and [GHCR (GitHub Container Registry)](https://github.com/phusion/baseimage-docker/pkgs/container/baseimage)!
### What are the problems with the stock Ubuntu base image?
Ubuntu is not designed to be run inside Docker. Its init system, Upstart, assumes that it's running on either real hardware or virtualized hardware, but not inside a Docker container. But inside a container you don't want a full system; you want a minimal system. Configuring that minimal system for use within a container has many strange corner cases that are hard to get right if you are not intimately familiar with the Unix system model. This can cause a lot of strange problems.
Baseimage-docker gets everything right. The "Contents" section describes all the things that it modifies.
You can configure the stock `ubuntu` image yourself from your Dockerfile, so why bother using baseimage-docker?
* Configuring the base system for Docker-friendliness is no easy task. As stated before, there are many corner cases. By the time that you've gotten all that right, you've reinvented baseimage-docker. Using baseimage-docker will save you from this effort.
* It reduces the time needed to write a correct Dockerfile. You won't have to worry about the base system and you can focus on the stack and the app.
* It reduces the time needed to run `docker build`, allowing you to iterate your Dockerfile more quickly.
* It reduces download time during redeploys. Docker only needs to download the base image once, during the first deploy. On every subsequent deploy, only the changes you make on top of the base image are downloaded.

-----------------------------------------
**Related resources**:
[Website](http://phusion.github.io/baseimage-docker/) |
[Github](https://github.com/phusion/baseimage-docker) |
[Docker registry](https://registry.hub.docker.com/r/phusion/baseimage/) |
[Discussion forum](https://groups.google.com/d/forum/passenger-docker) |
[Twitter](https://twitter.com/phusion_nl) |
[Blog](http://blog.phusion.nl/)

**Table of contents**
* [What's inside the image?](#whats_inside)
* [Overview](#whats_inside_overview)
* [Wait, I thought Docker is about running a single process in a container?](#docker_single_process)
* [Does Baseimage-docker advocate "fat containers" or "treating containers as VMs"?](#fat_containers)
* [Inspecting baseimage-docker](#inspecting)
* [Using baseimage-docker as base image](#using)
* [Getting started](#getting_started)
* [Adding additional daemons](#adding_additional_daemons)
* [Running scripts during container startup](#running_startup_scripts)
* [Environment variables](#environment_variables)
* [Centrally defining your own environment variables](#envvar_central_definition)
* [Environment variable dumps](#envvar_dumps)
* [Modifying environment variables](#modifying_envvars)
* [Security](#envvar_security)
* [System logging](#logging)
* [Upgrading the operating system inside the container](#upgrading_os)
* [Container administration](#container_administration)
* [Running a one-shot command in a new container](#oneshot)
* [Running a command in an existing, running container](#run_inside_existing_container)
* [Login to the container via `docker exec`](#login_docker_exec)
* [Usage](#docker_exec)
* [Login to the container via SSH](#login_ssh)
* [Enabling SSH](#enabling_ssh)
* [About SSH keys](#ssh_keys)
* [Using the insecure key for one container only](#using_the_insecure_key_for_one_container_only)
* [Enabling the insecure key permanently](#enabling_the_insecure_key_permanently)
* [Using your own key](#using_your_own_key)
* [The `docker-ssh` tool](#docker_ssh)
* [Building the image yourself](#building)
* [Removing optional services](#removing_optional_services)
* [Conclusion](#conclusion)

-----------------------------------------

## What's inside the image?

### Overview
*Looking for a more complete base image, one that is ideal for Ruby, Python, Node.js and Meteor web apps? Take a look at [passenger-docker](https://github.com/phusion/passenger-docker).*
| Component | Why is it included? / Remarks |
| ---------------- | ------------------- |
| Ubuntu 24.04 LTS | The base system. |
| A **correct** init process | _Main article: [Docker and the PID 1 zombie reaping problem](http://blog.phusion.nl/2015/01/20/docker-and-the-pid-1-zombie-reaping-problem/)._ According to the Unix process model, [the init process](https://en.wikipedia.org/wiki/Init) -- PID 1 -- inherits all [orphaned child processes](https://en.wikipedia.org/wiki/Orphan_process) and must [reap them](https://en.wikipedia.org/wiki/Wait_(system_call)). Most Docker containers do not have an init process that does this correctly, so they become filled with [zombie processes](https://en.wikipedia.org/wiki/Zombie_process) over time. Furthermore, `docker stop` sends SIGTERM to the init process, which is then supposed to stop all services. Unfortunately most init systems don't do this correctly within Docker, since they're built for hardware shutdowns instead. This causes processes to be hard killed with SIGKILL, which doesn't give them a chance to correctly deinitialize and can cause file corruption. Baseimage-docker comes with an init process, `/sbin/my_init`, that performs both of these tasks correctly. |
| Fixes APT incompatibilities with Docker | See https://github.com/dotcloud/docker/issues/1024. |
| syslog-ng | A syslog daemon is necessary so that many services - including the kernel itself - can correctly log to /var/log/syslog. If no syslog daemon is running, a lot of important messages are silently swallowed. It only listens locally; all syslog messages are forwarded to "docker logs". Why syslog-ng? I've had bad experiences with rsyslog: I regularly run into bugs, and once in a while it takes my log host down by entering a 100% CPU loop in which it can't do anything. Syslog-ng seems to be much more stable. |
| logrotate | Rotates and compresses logs on a regular basis. |
| SSH server | Allows you to easily login to your container to [inspect or administer](#login_ssh) things. _SSH is **disabled by default** and is only one of the methods provided by baseimage-docker for this purpose; the other is [docker exec](#login_docker_exec). SSH is provided as an alternative because `docker exec` comes with several caveats._ Password and challenge-response authentication are disabled by default; only key authentication is allowed. |
| cron | The cron daemon must be running for cron jobs to work. |
| [runit](http://smarden.org/runit/) | Replaces Ubuntu's Upstart. Used for service supervision and management. Much easier to use and more lightweight than SysV init and Upstart, and supports restarting daemons when they crash. |
| `setuser` | A tool for running a command as another user. Easier to use than `su`, has a smaller attack surface than `sudo`, and unlike `chpst` it sets `$HOME` correctly. Available as `/sbin/setuser`. |
| `install_clean` | A tool for installing `apt` packages that automatically cleans up after itself. All arguments are passed to `apt-get -y install --no-install-recommends`, and after installation the apt caches are cleared. To include recommended packages, add `--install-recommends`. |

Baseimage-docker is very lightweight: it only consumes 8.3 MB of memory.
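As an illustration of the `install_clean` tool described in the table above, a Dockerfile instruction might look like this (the package names are arbitrary examples):

```Dockerfile
# Installs the packages via apt-get -y install --no-install-recommends
# and clears the apt caches afterwards.
RUN install_clean curl ca-certificates
```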
### Wait, I thought Docker is about running a single process in a container?

The Docker developers advocate the philosophy of running a single *logical service* per container. A logical service can consist of multiple OS processes.
Baseimage-docker only advocates running multiple OS processes inside a single container. We believe this makes sense because at the very least it would solve [the PID 1 problem](http://blog.phusion.nl/2015/01/20/docker-and-the-pid-1-zombie-reaping-problem/) and the "syslog blackhole" problem. By running multiple processes, we solve very real Unix OS-level problems, with minimal overhead and without turning the container into multiple logical services.
Splitting your logical service into multiple OS processes also makes sense from a security standpoint. By running processes as different users, you can limit the impact of vulnerabilities. Baseimage-docker provides tools to encourage running processes as different users, e.g. the `setuser` tool.
Do we advocate running multiple *logical services* in a single container? Not necessarily, but we do not prohibit it either. While the Docker developers are very opinionated and have very rigid philosophies about how containers *should* be built, Baseimage-docker is completely unopinionated. We believe in freedom: sometimes it makes sense to run multiple services in a single container, and sometimes it doesn't. It is up to you to decide what makes sense, not the Docker developers.
### Does Baseimage-docker advocate "fat containers" or "treating containers as VMs"?

There are people who think that Baseimage-docker advocates treating containers as VMs because Baseimage-docker advocates the use of multiple processes. Therefore, they also think that Baseimage-docker does not follow the Docker philosophy. Neither of these impressions is true.
The Docker developers advocate running a single *logical service* inside a single container. But we are not disputing that. Baseimage-docker advocates running multiple *OS processes* inside a single container, and a single logical service can consist of multiple OS processes.
It follows that Baseimage-docker also does not deny the Docker philosophy. In fact, many of the modifications we introduce are explicitly in line with it. For example, using environment variables to pass parameters to containers is very much the "Docker way", which is why Baseimage-docker provides [a mechanism to easily work with environment variables](#environment_variables) even in the presence of multiple processes that may run as different users.
## Inspecting baseimage-docker

To look around in the image, run:
```bash
docker run --rm -t -i phusion/baseimage:<VERSION> /sbin/my_init -- bash -l
```
where `<VERSION>` is [one of the baseimage-docker version numbers](https://github.com/phusion/baseimage-docker/blob/master/Changelog.md).
You don't have to download anything manually. The above command will automatically pull the baseimage-docker image from the Docker registry.
## Using baseimage-docker as base image

The image is called `phusion/baseimage`, and is available on the Docker registry.
### Getting started

```Dockerfile
# Use phusion/baseimage as base image. To make your builds reproducible, make
# sure you lock down to a specific version, not to `latest`!
# See https://github.com/phusion/baseimage-docker/blob/master/Changelog.md for
# a list of version numbers.
FROM phusion/baseimage:<VERSION>

# Use baseimage-docker's init system.
CMD ["/sbin/my_init"]

# ...put your own build instructions here...

# Clean up APT when done.
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
```

### Adding additional daemons

A daemon is a program which runs in the background of its system, such as a web server.

You can add additional daemons (for example, your own app) to the image by creating runit service directories. You only have to write a small shell script which runs your daemon; [`runsv`](http://smarden.org/runit/runsv.8.html) will start your script, and - by default - restart it upon its exit, after waiting one second.

The shell script must be called `run`, must be executable, and is to be placed in the directory `/etc/service/<NAME>`. `runsv` will switch to that directory and invoke `./run` after your container starts.

**Be certain that you do not start your container in interactive mode (`-it`) with another command, as `runit` must be the first process to run. If you do, your runit service directories won't be started. For instance, `docker run -it YOUR_IMAGE bash` will bring you to bash in your container, but you'll lose all your daemons.**

Here's an example showing how a `runit` service directory can be made for a `memcached` server.

In `memcached.sh`, or whatever you choose to name your file (make sure this file is chmod +x):
```bash
#!/bin/sh
# `/sbin/setuser memcache` runs the given command as the user `memcache`.
# If you omit that part, the command will be run as root.
exec /sbin/setuser memcache /usr/bin/memcached >>/var/log/memcached.log 2>&1
```
In an accompanying `Dockerfile`:

```Dockerfile
RUN mkdir /etc/service/memcached
COPY memcached.sh /etc/service/memcached/run
RUN chmod +x /etc/service/memcached/run
```
A given shell script must run **without daemonizing or forking itself**;
this is because `runit` will start and restart your script on its own.
Usually, daemons provide a command line flag or a config file option for
preventing such behavior - essentially, you just want your script to run
in the foreground, not the background.
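For instance, here is a sketch of a `run` script for a daemon that forks by default, forcing it into the foreground with the daemon's own flag (nginx is used purely as an illustration; the exact option depends on your daemon):

```bash
#!/bin/sh
# Hypothetical /etc/service/nginx/run script. "daemon off;" keeps nginx in
# the foreground so that runit can supervise it; adjust for your own daemon.
exec /usr/sbin/nginx -c /etc/nginx/nginx.conf -g "daemon off;"
```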
### Running scripts during container startup

The baseimage-docker init system, `/sbin/my_init`, runs the following scripts during startup, in the following order:
* All executable scripts in `/etc/my_init.d`, if this directory exists. The scripts are run in lexicographic order.
* The script `/etc/rc.local`, if this file exists.

All scripts must exit correctly, e.g. with exit code 0. If any script exits with a non-zero exit code, the booting will fail.
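For example, numeric filename prefixes are a convenient way to control that lexicographic order (the script names below are hypothetical):

```Dockerfile
# Hypothetical startup scripts: 10_prepare.sh runs before 20_migrate.sh
# because /etc/my_init.d scripts are executed in lexicographic order.
COPY 10_prepare.sh 20_migrate.sh /etc/my_init.d/
RUN chmod +x /etc/my_init.d/10_prepare.sh /etc/my_init.d/20_migrate.sh
```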
**Important note:** If you are running the container in interactive mode (i.e. with `-it`) rather than daemon mode, stdout is sent directly to the terminal (`-i` interactive, `-t` terminal). If you are not calling `/sbin/my_init` in your run command, `/sbin/my_init` will not be executed, and therefore your scripts will not be called during container startup.
The following example shows how you can add a startup script. This script simply logs the time of boot to the file `/tmp/boottime.txt`.

In `logtime.sh`:

```bash
#!/bin/sh
date > /tmp/boottime.txt
```

In `Dockerfile`:

```Dockerfile
RUN mkdir -p /etc/my_init.d
COPY logtime.sh /etc/my_init.d/logtime.sh
RUN chmod +x /etc/my_init.d/logtime.sh
```

#### Shutting down your process
`/sbin/my_init` handles termination of child processes at shutdown. When it receives a SIGTERM, it passes the signal on to the child processes so that they can shut down correctly. If your process is started with a shell script, make sure you `exec` the actual process, otherwise the shell will receive the signal and not your process.

`/sbin/my_init` will terminate processes after a timeout (see the note below for the default values). This can be adjusted by setting environment variables:

```Dockerfile
# Give child processes 5 minutes to time out
ENV KILL_PROCESS_TIMEOUT=300
# Give all other processes (such as those which have been forked) 5 minutes to time out
ENV KILL_ALL_PROCESSES_TIMEOUT=300
```

Note: Prior to 0.11.1, the default values for `KILL_PROCESS_TIMEOUT` and `KILL_ALL_PROCESSES_TIMEOUT` were 5 seconds. In version 0.11.1+ the default process timeout has been adjusted to 30 seconds to allow more time for containers to terminate gracefully. The default timeout of your container runtime may supersede this setting; for example, Docker currently applies a [10s timeout](https://docs.docker.com/engine/reference/commandline/stop/#options) by default before sending SIGKILL, upon `docker stop` or receiving SIGTERM.

### Environment variables
If you use `/sbin/my_init` as the main container command, then any environment variables set with `docker run --env` or with the `ENV` command in the Dockerfile, will be picked up by `my_init`. These variables will also be passed to all child processes, including `/etc/my_init.d` startup scripts, Runit and Runit-managed services. There are however a few caveats you should be aware of:
* Environment variables on Unix are inherited on a per-process basis. This means that it is generally not possible for a child process to change the environment variables of other processes.
* Because of the aforementioned point, there is no good central place for defining environment variables for all applications and services. Debian has the `/etc/environment` file but it only works in some situations.
* Some services change environment variables for child processes. Nginx is one such example: it removes all environment variables unless you explicitly instruct it to retain them through the `env` configuration option. If you host any applications on Nginx (e.g. using the [passenger-docker](https://github.com/phusion/passenger-docker) image, or using Phusion Passenger in your own image) then they will not see the environment variables that were originally passed by Docker.
* We ignore `HOME`, `SHELL`, `USER` and a bunch of other environment variables on purpose, because _not_ ignoring them would break multi-user containers. See https://github.com/phusion/baseimage-docker/pull/86. A workaround for setting the `HOME` environment variable looks like this: `RUN echo /root > /etc/container_environment/HOME` (see https://github.com/phusion/baseimage-docker/issues/119).

`my_init` provides a solution for all these caveats.
#### Centrally defining your own environment variables

During startup, before running any [startup scripts](#running_startup_scripts), `my_init` imports environment variables from the directory `/etc/container_environment`. This directory contains files named after the environment variable names; the file contents contain the environment variable values. This directory is therefore a good place to centrally define your own environment variables, which will be inherited by all startup scripts and Runit services.
For example, here's how you can define an environment variable from your Dockerfile:
RUN echo Apachai Hopachai > /etc/container_environment/MY_NAME
You can verify that it works, as follows:

```
$ docker run -t -i YOUR_IMAGE /sbin/my_init -- bash -l
...
*** Running bash -l...
# echo $MY_NAME
Apachai Hopachai
```

**Handling newlines**
If you've looked carefully, you'll notice that the `echo` command actually prints a newline. Why does `$MY_NAME` not contain a newline, then? It's because `my_init` strips the trailing newline. If you intend the value to have a trailing newline, you should add *another* newline, like this:
RUN echo -e "Apachai Hopachai\n" > /etc/container_environment/MY_NAME
#### Environment variable dumps

While the previously mentioned mechanism is good for centrally defining environment variables, it does not by itself prevent services (e.g. Nginx) from changing or resetting environment variables for their child processes. However, the `my_init` mechanism does make it easy for you to query what the original environment variables were.
During startup, right after importing environment variables from `/etc/container_environment`, `my_init` will dump all its environment variables (that is, all variables imported from `container_environment`, as well as all variables it picked up from `docker run --env`) to the following locations, in the following formats:
* `/etc/container_environment`
* `/etc/container_environment.sh` - a dump of the environment variables in Bash format. You can source the file directly from a Bash shell script.
* `/etc/container_environment.json` - a dump of the environment variables in JSON format.

The multiple formats make it easy for you to query the original environment variables no matter which language your scripts/apps are written in.
Here is an example shell session showing what the dumps look like:
```
$ docker run -t -i \
  --env FOO=bar --env HELLO='my beautiful world' \
  phusion/baseimage:<VERSION> /sbin/my_init -- \
  bash -l
...
*** Running bash -l...
# ls /etc/container_environment
FOO HELLO HOME HOSTNAME PATH TERM container
# cat /etc/container_environment/HELLO; echo
my beautiful world
# cat /etc/container_environment.json; echo
{"TERM": "xterm", "container": "lxc", "HOSTNAME": "f45449f06950", "HOME": "/root", "PATH": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "FOO": "bar", "HELLO": "my beautiful world"}
# source /etc/container_environment.sh
# echo $HELLO
my beautiful world
```
#### Modifying environment variables

It is even possible to modify the environment variables in `my_init` (and therefore the environment variables in all child processes that are spawned after that point in time), by altering the files in `/etc/container_environment`. After each time `my_init` runs a [startup script](#running_startup_scripts), it resets its own environment variables to the state in `/etc/container_environment`, and re-dumps the new environment variables to `container_environment.sh` and `container_environment.json`.
But note that:
* modifying `container_environment.sh` and `container_environment.json` has no effect.
* Runit services cannot modify the environment like that. `my_init` only activates changes in `/etc/container_environment` when running startup scripts.

#### Security

Because environment variables can potentially contain sensitive information, `/etc/container_environment` and its Bash and JSON dumps are by default owned by root, and accessible only to the `docker_env` group (so that any user added to this group will have these variables automatically loaded).
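For example, here is a sketch of giving a dedicated application user access to these files by adding it to the `docker_env` group (the `app` user is hypothetical):

```Dockerfile
# Create a hypothetical "app" user and add it to the docker_env group so it
# can read /etc/container_environment and the Bash/JSON dumps.
RUN adduser --disabled-password --gecos "" app && \
    adduser app docker_env
```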
If you are sure that your environment variables don't contain sensitive data, then you can also relax the permissions on that directory and those files by making them world-readable:
```Dockerfile
RUN chmod 755 /etc/container_environment
RUN chmod 644 /etc/container_environment.sh /etc/container_environment.json
```

### System logging

Baseimage-docker uses syslog-ng to provide a syslog facility to the container. Syslog-ng is not managed as a runit service (see below). Syslog messages are forwarded to the console.
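A quick way to check the logging pipeline, assuming the standard `logger` utility is available in the container:

```bash
# Inside the container: write a test message to syslog.
logger -t myapp "hello from syslog"

# On the Docker host: the message should appear in the container's log output.
docker logs YOUR-CONTAINER-ID
```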
#### Log startup/shutdown sequence
In order to ensure that all application log messages are captured by syslog-ng, syslog-ng is started separately, before the runit supervisor process, and shut down after runit exits. This uses the [startup script facility](#running_startup_scripts) provided by this image. It avoids a race condition that would exist if syslog-ng were managed as a runit service: runit would kill syslog-ng in parallel with the container's other services, causing log messages to be dropped during a graceful shutdown if syslog-ng exits while other services are still producing logs.
### Upgrading the operating system inside the container

Baseimage-docker images contain an Ubuntu operating system (see the OS version in the [Overview](#whats_inside_overview)). You may want to update this OS from time to time, for example to pull in the latest security updates. OpenSSL is a notorious example: vulnerabilities are discovered in OpenSSL on a regular basis, so you should keep it up-to-date as much as you can.
While we release Baseimage-docker images with the latest OS updates from time to time, you do not have to rely on us. You can update the OS inside Baseimage-docker images yourself, and it is recommended that you do this instead of waiting for us.
To upgrade the OS in the image, run this in your Dockerfile:
RUN apt-get update && apt-get upgrade -y -o Dpkg::Options::="--force-confold"
## Container administration

One of the ideas behind Docker is that containers should be stateless, easily restartable, and behave like a black box. However, you may occasionally encounter situations where you want to login to a container, or to run a command inside a container, for development, inspection and debugging purposes. This section describes how you can administer the container for those purposes.

### Running a one-shot command in a new container

_**Note:** This section describes how to run a command inside a *new* container. To run a command inside an existing running container, see [Running a command in an existing, running container](#run_inside_existing_container)._
Normally, when you want to create a new container in order to run a single command inside it, and immediately exit after the command exits, you invoke Docker like this:
docker run YOUR_IMAGE COMMAND ARGUMENTS...
However the downside of this approach is that the init system is not started. That is, while invoking `COMMAND`, important daemons such as cron and syslog are not running. Also, orphaned child processes are not properly reaped, because `COMMAND` is PID 1.
Baseimage-docker provides a facility to run a single one-shot command, while solving all of the aforementioned problems. Run a single command in the following manner:
docker run YOUR_IMAGE /sbin/my_init -- COMMAND ARGUMENTS ...
This will perform the following:
* Runs all system startup files, such as /etc/my_init.d/* and /etc/rc.local.
* Starts all runit services.
* Runs the specified command.
* When the specified command exits, stops all runit services.

For example:
```
$ docker run phusion/baseimage:<VERSION> /sbin/my_init -- ls
*** Running /etc/rc.local...
*** Booting runit daemon...
*** Runit started as PID 80
*** Running ls...
bin boot dev etc home image lib lib64 media mnt opt proc root run sbin selinux srv sys tmp usr var
*** ls exited with exit code 0.
*** Shutting down runit daemon (PID 80)...
*** Killing all processes...
```

You may find that the default invocation is too noisy. Or perhaps you don't want to run the startup files. You can customize all of this by passing arguments to `my_init`. Invoke `docker run YOUR_IMAGE /sbin/my_init --help` for more information.
The following example runs `ls` without running the startup files and with fewer messages, while still running all runit services:

```
$ docker run phusion/baseimage:<VERSION> /sbin/my_init --skip-startup-files --quiet -- ls
bin boot dev etc home image lib lib64 media mnt opt proc root run sbin selinux srv sys tmp usr var
```
### Running a command in an existing, running container

There are two ways to run a command inside an existing, running container.
* Through the `docker exec` tool. This is a built-in Docker tool, available since Docker 1.4. Internally, it uses Linux kernel system calls to execute a command within the context of a container. Learn more in [Login to the container, or running a command inside it, via `docker exec`](#login_docker_exec).
* Through SSH. This approach requires running an SSH daemon inside the container, and requires you to set up SSH keys. Learn more in [Login to the container, or running a command inside it, via SSH](#login_ssh).

Both ways have their own pros and cons, which you can learn about in their respective subsections.
### Login to the container, or running a command inside it, via `docker exec`

You can use the `docker exec` tool on the Docker host OS to login to any container that is based on baseimage-docker. You can also use it to run a command inside a running container. `docker exec` works by using Linux kernel system calls.
Here's how it compares to [using SSH to login to the container or to run a command inside it](#login_ssh):
* Pros
* Does not require running an SSH daemon inside the container.
* Does not require setting up SSH keys.
* Works on any container, even containers not based on baseimage-docker.
* Cons
* If the `docker exec` process on the host is terminated by a signal (e.g. with the `kill` command or even with Ctrl-C), then the command that is executed by `docker exec` is *not* killed and cleaned up. You will either have to do that manually, or you have to run `docker exec` with `-t -i`.
* Requires privileges on the Docker host to be able to access the Docker daemon. Note that anybody who can access the Docker daemon effectively has root access.
* Not possible to allow users to login to the container without also letting them login to the Docker host.

#### Usage

Start a container:
docker run YOUR_IMAGE
Find out the ID of the container that you just ran:
docker ps
Now that you have the ID, you can use `docker exec` to run arbitrary commands in the container. For example, to run `echo hello world`:
docker exec YOUR-CONTAINER-ID echo hello world
To open a bash session inside the container, you must pass `-t -i` so that a terminal is available:
docker exec -t -i YOUR-CONTAINER-ID bash -l
### Login to the container, or running a command inside it, via SSH

You can use SSH to login to any container that is based on baseimage-docker. You can also use it to run a command inside a running container.
Here's how it compares to [using `docker exec` to login to the container or to run a command inside it](#login_docker_exec):
* Pros
* Does not require root privileges on the Docker host.
* Allows you to let users login to the container, without letting them login to the Docker host. However, this is not enabled by default because baseimage-docker does not expose the SSH server to the public Internet by default.
* Cons
* Requires setting up SSH keys. However, baseimage-docker makes this easy for many cases through a pregenerated, insecure key. Read on to learn more.

#### Enabling SSH

Baseimage-docker disables the SSH server by default. Add the following to your Dockerfile to enable it:
```Dockerfile
RUN rm -f /etc/service/sshd/down

# Regenerate SSH host keys. baseimage-docker does not contain any, so you
# have to do that yourself. You may also comment out this instruction; the
# init system will auto-generate one during boot.
RUN /etc/my_init.d/00_regen_ssh_host_keys.sh
```

Alternatively, to enable sshd only for a single instance of your container, create a folder with a [startup script](#running_startup_scripts). The contents of that script should be:
```bash
### In myfolder/enable_ssh.sh (make sure this file is chmod +x):
#!/bin/sh
rm -f /etc/service/sshd/down
ssh-keygen -P "" -t dsa -f /etc/ssh/ssh_host_dsa_key
```

Then, you can start your container with:
```bash
docker run -d -v `pwd`/myfolder:/etc/my_init.d my/dockerimage
```

This will initialize sshd on container boot. You can then access it with the insecure key as described below, or using the methods for adding a secure key. Furthermore, you can publish the port to your machine with `-p 2222:22`, allowing you to ssh to 127.0.0.1:2222 instead of looking up the IP address of the container.
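For example, combining the startup-script volume from above with a published SSH port might look like this (host port 2222 and the image name are placeholders carried over from the previous command):

```bash
# Publish the container's SSH port 22 on host port 2222.
docker run -d -p 2222:22 -v `pwd`/myfolder:/etc/my_init.d my/dockerimage

# Connect through the published port instead of the container's IP address.
ssh -i insecure_key -p 2222 root@127.0.0.1
```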
#### About SSH keys

First, you must ensure that you have the right SSH keys installed inside the container. By default, no keys are installed, so nobody can login. For convenience reasons, we provide [a pregenerated, insecure key](https://github.com/phusion/baseimage-docker/blob/master/image/services/sshd/keys/insecure_key) [(PuTTY format)](https://github.com/phusion/baseimage-docker/blob/master/image/services/sshd/keys/insecure_key.ppk) that you can easily enable. However, please be aware that using this key is for convenience only. It does not provide any security, because this key (both the public and the private side) is publicly available. **In production environments, you should use your own keys**.
#### Using the insecure key for one container only

You can temporarily enable the insecure key for one container only. This means that the insecure key is installed at container boot. If you `docker stop` and `docker start` the container, the insecure key will still be there, but if you use `docker run` to start a new container then that container will not contain the insecure key.
Start a container with `--enable-insecure-key`:
docker run YOUR_IMAGE /sbin/my_init --enable-insecure-key
Find out the ID of the container that you just ran:
docker ps
Once you have the ID, look for its IP address with:
docker inspect -f "{{ .NetworkSettings.IPAddress }}" YOUR-CONTAINER-ID
Now that you have the IP address, you can use SSH to login to the container, or to execute a command inside it:
```bash
# Download the insecure private key
curl -o insecure_key -fSL https://github.com/phusion/baseimage-docker/raw/master/image/services/sshd/keys/insecure_key
chmod 600 insecure_key

# Login to the container
ssh -i insecure_key root@<IP address>

# Running a command inside the container
ssh -i insecure_key root@<IP address> echo hello world
```
#### Enabling the insecure key permanently

It is also possible to enable the insecure key in the image permanently. This is not generally recommended, but is suitable for e.g. temporary development or demo environments where security does not matter.
Edit your Dockerfile to install the insecure key permanently:
RUN /usr/sbin/enable_insecure_key
Instructions for logging into the container are the same as in the section [Using the insecure key for one container only](#using_the_insecure_key_for_one_container_only).

#### Using your own key

Edit your Dockerfile to install an SSH public key:
```Dockerfile
## Install an SSH key of your choice.
COPY your_key.pub /tmp/your_key.pub
RUN cat /tmp/your_key.pub >> /root/.ssh/authorized_keys && rm -f /tmp/your_key.pub
```

Then rebuild your image. Once you have that, start a container based on that image:
docker run your-image-name
Find out the ID of the container that you just ran:
docker ps
Once you have the ID, look for its IP address with:
docker inspect -f "{{ .NetworkSettings.IPAddress }}" YOUR-CONTAINER-ID
Now that you have the IP address, you can use SSH to login to the container, or to execute a command inside it:
```bash
# Login to the container
ssh -i /path-to/your_key root@<IP address>

# Running a command inside the container
ssh -i /path-to/your_key root@<IP address> echo hello world
```

#### The `docker-ssh` tool

Looking up the IP of a container and running an SSH command quickly becomes tedious. Luckily, we provide the `docker-ssh` tool, which automates this process. This tool is to be run on the *Docker host*, not inside a Docker container.
First, install the tool on the Docker host:
```bash
curl --fail -L -O https://github.com/phusion/baseimage-docker/archive/master.tar.gz && \
  tar xzf master.tar.gz && \
  sudo ./baseimage-docker-master/install-tools.sh
```

Then run the tool as follows to login to a container using SSH:
docker-ssh YOUR-CONTAINER-ID
You can look up `YOUR-CONTAINER-ID` by running `docker ps`.
By default, `docker-ssh` will open a Bash session. You can also tell it to run a command, and then exit:
docker-ssh YOUR-CONTAINER-ID echo hello world
## Building the image yourself

If for whatever reason you want to build the image yourself instead of downloading it from the Docker registry, follow these instructions.
Clone this repository:
```bash
git clone https://github.com/phusion/baseimage-docker.git
cd baseimage-docker
```

Start a virtual machine with Docker in it. You can use the Vagrantfile that we've already provided.
First, install the `vagrant-disksize` plug-in:

vagrant plugin install vagrant-disksize
Then, start the virtual machine:

```bash
vagrant up
vagrant ssh
cd /vagrant
```

Build the image:
make build
If you want to call the resulting image something else, pass the NAME variable, like this:
make build NAME=joe/baseimage
You can also change the `ubuntu` base image to `debian`, as these distributions are quite similar:
make build BASE_IMAGE=debian:stretch
The resulting image will be called `phusion/baseimage-debian-stretch`. Use the `NAME` variable in combination with `BASE_IMAGE` to call it `joe/stretch` instead:
make build BASE_IMAGE=debian:stretch NAME=joe/stretch
To verify that the various services are started when the image is run as a container, add `test` to the end of your make invocation, e.g.:
make build BASE_IMAGE=debian:stretch NAME=joe/stretch test
### Removing optional services

The default baseimage-docker installs the `syslog-ng`, `cron` and `sshd` services during the build process.
In case you don't need one or more of these services in your image, you can disable its installation through the `image/buildconfig` that is sourced within `image/system_services.sh`. Do this at build time by passing a variable in with `--build-arg` as in `docker build --build-arg DISABLE_SYSLOG=1 image/`, or you may set the variable in `image/Dockerfile` with an ENV setting above the RUN directive.
These represent build-time configuration, so setting them in the shell environment at build time [will not have any effect](https://github.com/phusion/baseimage-docker/issues/459#issuecomment-439177442). Setting them in child images' Dockerfiles will also not have any effect.
You can also set them directly in the build configuration, as shown in the following example. To prevent `sshd` from being installed into your image, set the `DISABLE_SSH` variable to `1` in the `./image/buildconfig` file:
```bash
### In ./image/buildconfig
# ...

# Default services
# Set 1 for each service you want to disable
export DISABLE_SYSLOG=0
export DISABLE_SSH=1
export DISABLE_CRON=0
```

Then you can proceed with the `make build` command.

## Conclusion
* Using baseimage-docker? [Tweet about us](https://twitter.com/share) or [follow us on Twitter](https://twitter.com/phusion_nl).
* Having problems? Want to participate in development? Please post a message at [the discussion forum](https://groups.google.com/d/forum/passenger-docker).
* Looking for a more complete base image, one that is ideal for Ruby, Python, Node.js and Meteor web apps? Take a look at [passenger-docker](https://github.com/phusion/passenger-docker).
* Need a helping hand? Phusion also offers [consulting](https://www.phusion.nl/consultancy) on a wide range of topics, including Web Development, UI/UX Research & Design, Technology Migration and Auditing.
Please enjoy baseimage-docker, a product by [Phusion](http://www.phusion.nl/). :-)