September 13, 2023 ~ 3 min read

Starting with Docker


Today I really dove into Docker for the first time. I've fiddled with it before, redeploying Portainer applications when needed, but without really understanding what was going on, despite being familiar with the concept of containerization from a CS course.

To start, I had a fun manual install process on Arch, as always. I really only had to wrangle with permissions: I needed to add my user to the docker group, and I kept restarting the service for each new terminal session until I figured out that the group change only takes effect on a new login. Once it was up and running, I enabled and started docker.service so that Docker launches on boot.
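For reference, here's roughly what that setup looks like on Arch (a sketch of the standard commands; your setup may differ):

```sh
# Install Docker from the official repos
sudo pacman -S docker

# Let the current user talk to the Docker daemon
# (takes effect on the next login; `newgrp docker` covers the current shell)
sudo usermod -aG docker "$USER"

# Start the daemon now and enable it on boot
sudo systemctl enable --now docker.service

# Sanity check: should work without sudo after re-logging
docker info
```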

My first project is a relatively simple app: I am working with a few others to connect Parsons, a Python library of connectors to various APIs, to a web interface that will interact with our organization's Google Calendar and Action Network accounts. It's a quick and relatively painless API plug; we're lucky to have Auth0 accounts and other standing web services that let us do projects like this relatively quickly. Eventually, the project will be hosted on our Portainer instance, so I am starting from a containerized state to make deployment virtually painless.
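The container setup doesn't need to be elaborate to start. Something like the following Dockerfile is my starting point (a sketch with assumed names: the base image tag, port, entrypoint module, and requirements.txt are placeholders for our actual project layout):

```dockerfile
# Build on a stable, tested Python base image
FROM python:3.11-slim

WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy in the application code
COPY . .

# Hypothetical entrypoint; the real app's module name will differ
EXPOSE 8000
CMD ["python", "-m", "app"]
```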

Overview

Here's the high-level picture of Docker: what containerization is, and how I aim to use it.

Containers involve far less code than virtual machines. Instead of being instances of a whole OS, they contain only the OS application layer: the libraries and binaries needed to run the application. This makes containers much smaller and faster to deploy than VMs, and they can run on any machine with the same OS kernel (or on other hosts with a lightweight VM or hypervisor in between).
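One quick way to see that kernel sharing in practice (assuming a Linux host with Docker running):

```sh
# The host's kernel version
uname -r

# The same command inside an Alpine container prints a matching version,
# because the container shares the host kernel rather than booting its own
docker run --rm alpine uname -r
```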

In contrast, a virtual machine carries an entire OS: the application layer, the kernel, and a hardware abstraction layer. While this lets VMs run on any machine regardless of the underlying OS, it also makes them much larger and slower to deploy than containers.

The goal of containerization is to make applications portable and easy to deploy, while also standardizing the installation of dependencies and libraries so they stay consistent across any number of deployments. In Docker, you just run docker pull node or docker pull python and you have a self-contained image of that software. These images can be shared and deployed anywhere, but they can also be stacked to build a full application: a new image drawn from stable, tested images of the dependencies and libraries the application needs. It's like someone built a CLI tool that wraps around every application on the internet.
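For example, grabbing and poking at one of those base images looks like this (standard Docker CLI usage; the tag is just one I'd reach for):

```sh
# Download an official Python image from Docker Hub
docker pull python:3.11-slim

# Run a throwaway container from it to confirm what's inside
docker run --rm python:3.11-slim python --version

# List the images now cached locally
docker images
```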


