Using Containers in Local Development
👋 I’d like to share a little bit about my first few months at American Airlines.
It Worked on My Machine 🤷
When I came to American Airlines, I tried to contribute to several InnerSource projects in our corporate VCS. One of the things I observed is that local development leaned heavily on local installations of the application framework. In the past, I’ve been a victim of “worked on my machine” in similar setups, so I wanted to get a better understanding of how pervasively containers were being used and to present options for using containers to help remove local dependency hell. There was plenty of opportunity to leverage this approach for local development 🙌
Using containers for a local development environment can help remove impediments in situations such as:
- My stack has several dependencies that are hard to emulate (see the sketch after this list):
  - Databases, and associated volumes for (short-lived) storage
  - Caches
  - Connectivity to the above, with declarative service names
- I have a bunch of application dependencies (`requirements.txt`, `package.json`, `Gemfile`)
- There are many moving parts to reproduce a production-like environment
- My team uses different operating systems, or versions of operating systems
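As a hypothetical illustration of that first point (the image tags, container names, and password below are just examples), a throwaway database and cache can run on a user-defined Docker network and be reached by name, with nothing installed locally:

```shell
# Hypothetical example: containers on the same user-defined network resolve each other by name
docker network create devnet
docker run -d --rm --name db    --network devnet -e POSTGRES_PASSWORD=local postgres:13
docker run -d --rm --name cache --network devnet redis:6

# Another container on the same network reaches them simply as "db" and "cache"
docker run -it --rm --network devnet postgres:13 psql -h db -U postgres
```

When the containers are stopped, the dependencies (and their short-lived storage) disappear with them.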
The American engineers and architects (my new friends) were all intrigued by using containers in new ways and looked forward to learning how to consume this tech as part of local development. Generally speaking, application development and application workloads hadn’t used a containerized approach at American. This is understandable: the airline has gone through several years of change with a merger, a delivery transformation, a pandemic, and simply running an airline!
The best way to demo to other engineers is through code. To illustrate the approach, we’ll use this blog as an example of how we do local development and support both Windows and macOS. As our engineers and architects write content for this blog, update the aesthetics, or add new features, local development using containers will be the documented approach.
Preparing the Cabin for Takeoff
Since we have a couple of main operating systems for local development and our developers have some choices, we need to lay out some basic minimums for the two primary developer rigs and establish a baseline for each:
Windows
- Install git-scm. The primary component, besides `git` (:awesome:), is `git-bash`. With this terminal/shell option, we can set up local development in an opinionated way. This doesn’t mandate `git-bash`, but it allows us to code in a consistent way. For my PowerShell friends out there, they can probably adjust accordingly.
- Install make from ezwinports. The `without-guile` option works for these needs.
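With both in place, a quick sanity check from a `git-bash` prompt confirms the baseline (output is illustrative; versions will vary):

```shell
# Run inside git-bash; the exact versions don't matter much for this setup
$ git --version
git version 2.28.0.windows.1
$ make --version
GNU Make 4.3
```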
macOS
Good enough. Moving on.
Windows and macOS
- Install Docker - `stable` or `edge` is fine. For the purposes of this write-up, we’re using basic features.
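On either OS, a quick way to confirm the client can reach the container engine (the version shown is just an example):

```shell
# Ask the Docker engine for its version; any response means we're ready to build
$ docker version --format '{{.Server.Version}}'
19.03.13
```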
I’m leveraging `make` for our setup so that the macros in a local development environment can follow a consistent approach as well. Instead of `.sh` and/or `.bat` files and the like, we can just use a `Makefile` with some conditionals.
The Makefile … File?
Let’s go through the Makefile for our Tech Blog. At first, our developers all run on macOS, so we can simplify our approach.
```makefile
.PHONY: build serve

IMAGE = "local/aa-techblog"
HUGO_VERSION = "0.76.3"

build:
	@docker build -t $(IMAGE) -f docker/Dockerfile . --build-arg HUGO_VERSION=$(HUGO_VERSION)

serve: build
	@docker run -it --rm -p 1313:1313 -v $(PWD):/app -w /app $(IMAGE)
```
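The `build` target references `docker/Dockerfile`, which isn’t shown here. A minimal sketch of what such a Hugo image could look like (the base image, download URL, and commands below are assumptions, not the blog’s actual Dockerfile):

```dockerfile
# Hypothetical sketch of docker/Dockerfile -- not the actual file
FROM alpine:3.12
ARG HUGO_VERSION=0.76.3

# Pull the Hugo release matching the version passed in from the Makefile
ADD https://github.com/gohugoio/hugo/releases/download/v${HUGO_VERSION}/hugo_${HUGO_VERSION}_Linux-64bit.tar.gz /tmp/hugo.tar.gz
RUN tar -xzf /tmp/hugo.tar.gz -C /usr/local/bin hugo && rm /tmp/hugo.tar.gz

WORKDIR /app
EXPOSE 1313
# Bind to all interfaces so the dev server is reachable through the published port
CMD ["hugo", "server", "--bind", "0.0.0.0"]
```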
But wait … 🛑

These `volume` mappings (using Docker’s `-v` command switch) don’t work very well on Windows.
Our `build` macro in the `Makefile` is pretty safe across container hosts, so we’ll skip changes to that section. However, the `serve` macro will cause problems, because Windows doesn’t understand how to mount the directory into the container run-time the same way.

We add a conditional to the `Makefile` and use a `git-bash` environment variable we know doesn’t natively exist in macOS: `MSYSTEM`.
```makefile
.PHONY: build serve

IMAGE = "local/aa-techblog"
HUGO_VERSION = "0.76.3"
WINDOWS_MESSAGE = "Container host is Windows"
NWINDOWS_MESSAGE = "Container host is not Windows"

build:
	@docker build -t $(IMAGE) -f docker/Dockerfile . --build-arg HUGO_VERSION=$(HUGO_VERSION)

serve: build
ifeq ($(origin MSYSTEM), undefined)
	@echo $(NWINDOWS_MESSAGE)
	@docker run -it --rm -p 1313:1313 -v $(PWD):/app -w /app $(IMAGE)
else
	@echo $(WINDOWS_MESSAGE)
	@winpty docker run -it --rm -p 1313:1313 -v "//$(shell PWD)":/app $(IMAGE)
endif
```
Let’s break down the Windows section:

```makefile
@winpty docker run -it --rm -p 1313:1313 -v "//$(shell PWD)":/app $(IMAGE)
```
- `@winpty`: the `@` keeps `make` from echoing the rest of the command; `winpty` emulates a `TTY` for us to view the container output. The `winpty` binary comes with `git-bash`.
- `-v "//$(shell PWD)"`: the `$(PWD)` variable in macOS has very different output on Windows, where it resolves to `C:/Program Files/...` or something similar. This causes problems for the container engine. We need a more *nix-y path, like `/c/our/current/directory`. However, our container engine and the Windows file system API are really looking for `//` in front of the path. So, we concatenate the output of the `shell PWD` command with a couple of extra `/` slashes and voila!
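For a concrete picture (the paths are hypothetical), here’s roughly what that looks like from a `git-bash` prompt:

```shell
# Inside git-bash on Windows -- paths are hypothetical
$ pwd
/c/Users/some-dev/code/tech-blog

# What the Makefile ends up handing to Docker for the mount:
#   -v "//c/Users/some-dev/code/tech-blog":/app
```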
A couple of gotchas:
- `ifeq` conditions have to be at column 0, so left justify all the way over
- `MSYSTEM` is the name of the environment variable, not a value
- the `//` in the host volume mount section is needed so that the container engine mapping doesn’t fail
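To see what the conditional actually keys on, a throwaway target like this (not part of the blog’s Makefile) prints where `make` thinks `MSYSTEM` came from:

```makefile
# Hypothetical debugging target -- $(origin VAR) reports where a variable was defined:
# "environment" inside git-bash (which exports MSYSTEM), "undefined" on macOS
where-am-i:
	@echo MSYSTEM origin is: $(origin MSYSTEM)
	@echo MSYSTEM value is: $(MSYSTEM)
```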
Ready for Takeoff
We now have a consistent way to build, test, and verify our code on macOS or Windows. We can make changes to our blog (layout, content, and images) and see those changes reload automatically when we run a local instance. We leverage a git-based workflow, so new articles have their own issue and are written under a distinct branch.
By running `make serve`, we can get a local instance of the blog up and running without other dependencies that may get introduced in the future.
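A typical session looks something like this (output abbreviated and approximate); the site is then reachable at http://localhost:1313 thanks to the `-p 1313:1313` port mapping in the `serve` target:

```shell
$ make serve
Container host is not Windows
...
Web Server is available at http://localhost:1313/
Press Ctrl+C to stop
```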
With this little bit of `Makefile` glue, our Windows and macOS developers, authors, and editors can stand up an environment to validate the run-time without unnecessary files, installations, or configuration requirements. This reduces the `#WorksOnMyMachine` results and enables a new delivery mechanism to reduce cycle time from idea to production.
The final setup allows for live reloads of the site from a developer’s workstation on every save, which shortens the development cycle. We reduce the setup and dependency hell that comes with updating the code base and decouple every contributor from having to know every detail of the framework. If we change to another blog content management system or static site generator, we can abstract those changes away from our community and keep the pipeline consistent.
🏆 all around … Until next time