tl;dr
Visual Studio Code Remote - Containers is a tight integration between VSCode and Docker. It enables VSCode to launch within an isolated, repeatable, stateless Docker container.
Stateless development environments make jumping into an old project easy, because the runtime is standardized and requires minimal setup.
Use Docker Compose to make launching and managing Docker containers a breeze.
VSCode Remote Starter
I've put together a starter repo called VSCode Remote Starter.
It's on GitHub, and I'll reference it throughout this article. It has everything you need to get started with VSCode Remote - Containers and Dockerized development.
Why Docker for local development?
I've got hundreds of dev projects. I set some up on a Mac, some on Windows and a few use Linux on Windows via WSL.
Stepping into an old project is a nightmare. I often spend an hour or more just getting the project up and running.
Docker creates standardized, perfectly repeatable runtimes for your apps.
Create your Dockerized dev environment once and you can always boot it up, on any new machine that runs Docker—Linux, Mac, Windows, Chrome OS?—with perfect repeatability.
What does VSCode have to do with Docker?
VSCode recently released the Visual Studio Code Remote - Containers extension.
Remote - Containers tightly integrates VSCode with Docker containers to launch your VSCode instance within your Docker container. This solves the trickiest part of developing inside a Docker container: how on Earth do you edit the files?
You can use shared Docker volumes to link your Docker container out to the host OS, but then you have to traverse the boundary between the container and your OS for all sorts of mundane tasks like running git, which is NOT within your Docker container.
The Remote - Containers extension also lets you standardize your VSCode install within the Docker container. You can spec exactly the extensions that you want and VSCode will install and expose only those extensions within your new VSCode instance.
Alternatives: [Remote - SSH, Remote - WSL]
VSCode also has extensions for developing across an SSH connection on a remote machine—Remote - SSH—as well as within the Windows Subsystem for Linux—Remote - WSL.
I haven't used either of these, but they both look extremely useful for specific use cases.
I'd use Remote - SSH if I needed a stateful runtime for my app. Docker containers are stateless; you get a fresh install every time. If my app needed a ton of setup that didn't work well with Docker, or if I wanted an always-running dev environment, I'd pay for a server and shell into it with Remote - SSH. You could run a Raspberry Pi in your office as an always-on dev environment!
I'd use Remote - WSL if I were developing on Windows and NOT using Docker. Maybe your development project needs access to the Windows OS or you don't like Docker. Remote - WSL will give you a tighter VSCode integration with your Linux Subsystem.
Docker Compose FTW
Docker containers require a bunch of configuration to run, usually with command line flags.
I hate remembering command line flags, and I hate trying to remember exactly how I last launched my Docker containers.
Docker Compose is a command-line tool, much like Docker itself. I use Docker Compose to launch Docker containers programmatically. You simply specify your container options in `docker-compose.yaml` and you're in business.
Pro Tip: Alias `docker-compose` to `dc`. You'll get sick of typing `docker-compose up -d`. Instead, type `dc up -d`. It just rolls off of my fingertips.
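Setting up that alias is one line in your shell config. A sketch, assuming bash or zsh with `~/.bashrc` or `~/.zshrc`:

```shell
# Add to ~/.bashrc or ~/.zshrc
alias dc='docker-compose'

# Now `dc up -d` runs `docker-compose up -d`.
```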
Here's an example from my VSCode Remote Starter project on GitHub.
# docker-compose.yaml
version: '3'
services:
  app:
    container_name: app
    build: ./dev/workspace
    env_file: ./dev/workspace/env.list
    volumes:
      - ./app:/app
    ports:
      - 8080:8080
  vault:
    container_name: vault
    build: ./dev/vault
    env_file: ./dev/vault/env.list
    volumes:
      - ./dev/vault:/dev/vault
      - ./app/vault:/app/vault
    ports:
      - 8200:8200
    cap_add:
      - IPC_LOCK
This file is much simpler than it looks.

It specifies two Docker containers under the `services` attribute: `app` and `vault`.

It then provides the configuration for each container, or "service".

- `container_name` is how Docker Compose will refer to the container. So I can run `docker-compose up app` and Docker Compose will boot up the `app` container.
- `build` is a path to the folder containing the `Dockerfile` for this container.
- `env_file` is optional. It's the path to a file with environment variables to be injected into the container.
- `volumes` maps local volumes to container volumes. In this case, I want my local `./app` folder to be mounted within the Docker container at `/app`.
- `ports` maps local ports to container ports. In this case, I'm mapping my local port `8080` to the container's port `8080`. The pattern is `localPort:containerPort`. You can map lots of ports.

I've got a similar configuration for the `vault` container.
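The `env.list` files referenced by `env_file` use Docker's plain env-file format: one `KEY=value` pair per line, with no quoting and no `export` keyword. A hypothetical example (these names and values are made up for illustration, not taken from the starter repo):

```
# ./dev/workspace/env.list
NODE_ENV=development
PORT=8080
```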
Productivity with Docker Compose
Check out VSCode Remote Starter for all of the details.
The file tree below shows how I've created a `dev` folder at the top level of my project. `docker-compose.yaml` belongs at the root of the project; however, you can put your Docker container folders wherever you like. I prefer to nest them under `dev`.

.
├── app/
├── dev/
│   ├── vault/
│   │   ├── Dockerfile
│   │   └── env.list
│   └── workspace/
│       ├── Dockerfile
│       └── env.list
└── docker-compose.yaml

Each folder has a `Dockerfile` and an `env.list`. Only the `Dockerfile` is required. Everything that the `Dockerfile` needs to build must be within its folder, so you'll notice that I've got some extra files hanging around in the repo.

The `Dockerfile` for the `app` container lives at `./dev/workspace`. Check out how simple it is.
# ./dev/workspace/Dockerfile
FROM ubuntu:latest
CMD tail -f /dev/null
The `FROM` line tells Docker which base image to use. In this case, I'm using `ubuntu:latest`.
I don't have any other setup at this point. It's just a bare Ubuntu image.
`CMD` tells Docker what command to run inside the container when I launch it. In this case, I'm running `tail -f /dev/null` because I want the container to stay alive, but I don't want it to do anything.
Docker is meant to run jobs. These jobs can be long-lived, like running a Node.js server, or they can do something discrete, like encoding an image or building an application. The container will exit as soon as its `CMD` command is complete. I want this container to stay up indefinitely, and I only run it headlessly, so `tail -f /dev/null` is an inexpensive way to create a long-running process that keeps the container up and running.
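You can see why `tail -f /dev/null` makes a good keep-alive outside of Docker, too. A quick sketch in any POSIX shell:

```shell
# tail -f /dev/null follows a file that never grows, so it produces no
# output and never exits on its own -- a cheap long-running process.
tail -f /dev/null &
KEEPALIVE_PID=$!

sleep 1  # give it a moment

# kill -0 checks whether the process is alive without signaling it.
if kill -0 "$KEEPALIVE_PID" 2>/dev/null; then
  STATUS="still running"
else
  STATUS="exited"
fi
echo "$STATUS"

kill "$KEEPALIVE_PID"  # clean up the background process
```

Inside a container, that background process would be the container's main process, so the container stays up until you bring it down.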
There are two ways to launch a Docker container. You can run `docker-compose up app` to launch the container in the foreground with its output attached to your terminal. Or you can run it headless with the `-d` flag for "detached": `docker-compose up -d app`. This will run it in the background without a shell. You can still connect to the running instance, but it will run in the background until you shut down Docker or run `docker-compose stop app` to bring it down.
If you run `docker-compose up app`, you'll see the output of `tail -f /dev/null`, which is just an empty terminal. Hit Ctrl + C to kill the process and the `app` container will automatically shut down.
Run `docker-compose up -d app` to launch it headlessly. Run `docker ps` to get the container's `CONTAINER ID` and run `docker exec -it <container id> sh` to shell into the running container. Run `docker-compose down` to bring down all of the containers in `docker-compose.yaml`.
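If you type these constantly, tiny shell functions can wrap the workflow. This is my own convenience sketch, not part of the starter repo; it assumes the `app` service and `container_name: app` from the `docker-compose.yaml` above:

```shell
# Launch the app service in the background (detached).
dev_up() {
  docker-compose up -d app
}

# Shell into the running container by its container_name,
# skipping the docker ps / CONTAINER ID lookup.
dev_shell() {
  docker exec -it app sh
}

# Bring down every container defined in docker-compose.yaml.
dev_down() {
  docker-compose down
}
```

Because `container_name: app` is pinned in `docker-compose.yaml`, `docker exec` can target the container by name instead of by ID.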
Run `docker-compose run app sh` to launch `app` and get a shell instance. The `sh` at the end could be any command that you want to override `CMD` from the `Dockerfile`. We're using `sh` so that we can get a shell rather than see the empty output of `tail -f /dev/null`.
What is Vault?
Hashicorp Vault is purely optional for this setup.
Vault is an enterprise-grade tool for managing application secrets.
I hate migrating my service accounts and environment variables across projects. It's an annoying chore, and it makes booting up my app in a repeatable runtime just a little bit harder.
How to run Vault?
You don't need to run Vault, but if you're interested, see the README for VSCode Remote Starter.
I've set up Vault to run within a Docker container as specified by `docker-compose.yaml`. I've backed Vault with a GCP Storage bucket, but you can back it with any of Vault's many storage backends.
The glory of Vault is that a single GCP Storage bucket can securely hold all of the secrets for all of my projects. All of my development environments can run statelessly within Docker. The only stateful part of my dev environment is the GCP Storage bucket.
I have some shell commands inside `./bin/vault` that use the `vault` Dockerfile to manage secrets automatically. They make setting up a new environment a breeze.
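Under the hood, the Vault CLI reads and writes secrets with `vault kv put` and `vault kv get`. A hypothetical sketch of what such wrappers might look like (the `secret/` path, function names, and arguments are illustrative, not taken from the starter repo, and a running, unsealed Vault server with `VAULT_ADDR` and `VAULT_TOKEN` set is assumed):

```shell
# Store a secret for a project, e.g.: store_secret myapp API_KEY=abc123
store_secret() {
  vault kv put "secret/$1" "$2"
}

# Read it back, e.g.: read_secret myapp
read_secret() {
  vault kv get "secret/$1"
}
```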
Putting it all together
Clone `vscode-remote-starter` with `git clone https://github.com/how-to-firebase/vscode-remote-starter.git`.
The README has all of the details that you'll need to get it up and running. You may need to learn a bit of Docker.
Also see the VSCode docs for Developing inside a container. There are so many options. You're bound to find something you like.
Still need help?
Find me online at firebaseconsulting.com or chrisesplin.com
Email me at chris@chrisesplin.com or find me on Twitter @ChrisEsplin. My DMs are always open!