My Learning Journey: Doing more with containers in Docker 🐳

It's been a long time since my last post. Life has been busy. But let's quickly get down to the stuff that matters. This week I was learning more about containers and their different configurations.

Past Learnings

The week before last, I learnt more about Dockerfiles and images: multi-stage builds, and how to optimise a Dockerfile for maximum efficiency. This week, it's all about containers.

Publishing ports: Dictating where containers listen

Publishing ports in Docker is essentially port forwarding from the container to our host system, so that services running inside the container can be reached from outside. Port forwarding is a really cool thing, but it can be a bit of a hassle to set up the traditional way. Well, Docker makes it easy to publish ports in the docker run command simply with the -p flag:

docker run -p HOST_PORT:CONTAINER_PORT <image_name>

The above command takes CONTAINER_PORT inside the Docker container and maps it to HOST_PORT on the host. For example:

docker run -d -p 8080:80 nginx

This command takes port 80 of the container created from the nginx image and maps it to port 8080 on the host. Publishing ports is just a quick snap of the fingers in Docker.
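
To check that the mapping works, we can hit the published port on the host; the nginx welcome page should come back (assuming nothing else is already listening on 8080):

# Request the nginx welcome page through the published host port
curl http://localhost:8080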

Another thing we can do is publish to an ephemeral port. We do this when we don't care which port on the host machine the container port gets mapped to.

docker run -d -p 80 nginx
docker ps
CONTAINER ID   IMAGE         COMMAND                  CREATED          STATUS          PORTS                    NAMES
a527355c9c53   nginx         "/docker-entrypoint.…"   4 seconds ago    Up 3 seconds    0.0.0.0:54772->80/tcp    romantic_williamson

From the output above, we can see that port 80 of the Docker container has been mapped to the ephemeral port 54772 on the host machine.

We can also publish all of an image's exposed ports at once using the -P or --publish-all flag. This flag takes every port the image exposes and maps each one to a random ephemeral port on the host machine. This can be useful to prevent port conflicts during development.
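
As a quick sketch (the container name web here is just an example), we can publish all exposed ports and then ask Docker which host ports were picked:

# Publish every port the nginx image exposes to a random host port
docker run -d -P --name web nginx

# Show which host ports were assigned
docker port web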

Overriding containers: Get the default stuff out!

Every container we create comes with default settings and configuration, so the user can just start out and not worry too much. But these defaults can also be modified according to our needs.

We already saw an example of overriding defaults with port publishing. We can also use port publishing to run separate instances of the same image on different ports, as shown below.
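
For instance, two nginx containers can run side by side, each published on its own host port (the names web1 and web2 are just examples):

# First instance, reachable on host port 8080
docker run -d -p 8080:80 --name web1 nginx

# Second instance of the same image, reachable on host port 8081
docker run -d -p 8081:80 --name web2 nginx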

Another way of overriding container defaults is by setting environment variables. Containers such as database containers usually come with a set of environment variables, which might hold things like the username and password for the service. We can override the default values with our own using the -e flag:

docker run -e foo=bar -e POSTGRES_PASSWORD=secret postgres

Here we are creating a new environment variable foo inside the container (POSTGRES_PASSWORD is included because the postgres image refuses to start without it). Another way of passing environment variables is by using an env file, conventionally named .env, which contains all the environment variables. It can be passed to Docker with the following command:

docker run --env-file .env postgres
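
For reference, such a .env file is just plain KEY=VALUE lines, one per line; a minimal example might look like this:

cat .env
POSTGRES_PASSWORD=secret
foo=bar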

We can also restrict how many resources a container is allowed to use. This can be done in the following manner:

docker run -e POSTGRES_PASSWORD=secret --memory="512m" --cpus="0.5" postgres

Here, the --memory flag caps the container's memory at 512 MB, and the --cpus flag limits it to half of a single CPU core.
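
To confirm the limits are in effect, docker stats shows live memory and CPU usage per container (this assumes the postgres container from above is still running):

# One-off snapshot of CPU and memory usage for running containers
docker stats --no-stream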

Conclusion

It's really great that Docker gives the user so much control while simultaneously keeping everything simple to use. No wonder it is so popular in the development world; it totally deserves it. Although I had much more to share, especially about persisting container data, I feel I should split that into another blog post for readability. Until next time, then!