In our frontend project, we started facing several common issues that many teams can relate to:
- Every developer had a different Node.js version, which sometimes led to inconsistent behavior.
- Some teammates didn’t have the right VS Code extensions or settings, making the codebase feel messy and disorganized.
- Backend developers who occasionally worked on the frontend lacked a Node environment entirely, and setting it up—especially when dealing with tricky package issues—was often time-consuming.
- New developers had to go through a manual and error-prone onboarding process, involving checking versions, installing dependencies, and sometimes even going through Azure documentation just to get authorized.
So we asked ourselves: How can we make this faster and the same for everyone?
That’s when we decided on VS Code DevContainers, a simple yet powerful way to provide a preconfigured, portable development environment. With DevContainers, we could:
- Ensure everyone uses the same Node.js version, extensions, and tools.
- Help backend developers work on frontend code with zero manual setup.
- Create an easy onboarding experience for newcomers: just clone the repo and open it in a DevContainer.
Of course, there were some minor limitations, such as relying on the VS Code terminal inside the container, but the benefits outweighed the trade-offs. And in this article, I will show you the process.
How DevContainers Work Behind the Scenes
DevContainers (short for Development Containers) are a method for defining and configuring a consistent development environment using Docker.
- When you click “Reopen in Container” in VS Code, the editor uses Docker to build and start a development container defined by your .devcontainer configuration.
- Once the container is running, VS Code installs a headless backend process called the VS Code Server inside the container. This server handles all development features such as file editing, terminals, and debugging, while the UI remains on your host machine.
- A persistent Docker volume—often named vscode—is automatically created to store this server and any installed extensions, ensuring faster startup and reuse across sessions.
- When you open a terminal in the containerized environment, it launches a shell (e.g., bash) inside the container, typically under the default user defined in the Dockerfile. As a result, you may see the terminal start in a Linux path like /root or /home/vscode.
This architecture allows developers to work in isolated, reproducible environments, with all tools and dependencies encapsulated inside the container while maintaining a seamless experience within VS Code.
A typical DevContainer setup includes the following files:
devcontainer.json: Defines the container’s name and references the Docker Compose file. This is also where you add VS Code extensions and settings.
docker-compose.yml: Defines the container setup: volumes, ports, environment variables, and build arguments (NPM token, Git credentials, etc.).
Dockerfile: Customizes the environment, like installing specific Node.js versions, package managers, or CLI tools.
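Putting this together, the .devcontainer folder in our setup looks roughly like this (the init script and the generated .env file are described later in the article):

```text
.devcontainer/
├── devcontainer.json    # container name, Compose reference, VS Code customizations
├── docker-compose.yml   # services, volumes, ports, build args
├── Dockerfile           # Node.js image, system libraries, dependency install
├── init-git.sh          # post-create Git setup script
└── .env                 # generated credentials (kept out of source control)
```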
Implementing Dev Containers in VS Code
devcontainer.json
{
  "name": "frontendDev",
  "dockerComposeFile": [
    "docker-compose.yml"
  ],
  "postCreateCommand": "/workspace/.devcontainer/init-git.sh",
  "service": "master",
  "runServices": [
    "master"
  ],
  "forwardPorts": [
    3000
  ],
  "shutdownAction": "stopCompose",
  "overrideCommand": true,
  "workspaceFolder": "/workspace",
  "customizations": {
    "vscode": {
      "extensions": [
        "github.vscode-pull-request-github" // GitHub interaction
      ],
      "settings": {
        "prettier-eslint.eslintIntegration": true
      }
    }
  }
}
This configuration ensures that the correct services are initialized, the Docker Compose file is linked, essential ports like 3000 for the development server are exposed, and the extensions and settings are readily available in a consistent environment for all team members.
docker-compose.yml
This file orchestrates the containerized development environment by specifying how the container should be built and run, which volumes it should mount, and how it handles ports and environment variables.
version: "3.7"
services:
  master:
    build:
      context: ..
      dockerfile: .devcontainer/Dockerfile
      args:
        NPM_TOKEN: ${NPM_TOKEN}
        GIT_EMAIL: ${GIT_EMAIL}
        GIT_NAME: ${GIT_NAME}
        GIT_USER: ${GIT_USER}
        GIT_CREDENTIAL: ${GIT_CREDENTIAL}
    volumes:
      - node_modules:/workspace/node_modules
      - build:/workspace/build
      - ../:/workspace
    ports:
      - "3000:3000"
    env_file: .env
volumes:
  node_modules:
  build:
- services:
Defines individual container services. In this case, we have a single service called master, which represents our frontend development environment.
Service: master
- build:
- context: ..
The build context points to the parent directory of .devcontainer, allowing Docker to access all project files.
- dockerfile: .devcontainer/Dockerfile
Specifies the Dockerfile used to build the container image.
- args:
Passes secret build-time arguments (such as Git credentials and npm tokens) from environment variables. These values are injected via .env and are not hardcoded.
- volumes:
- node_modules:/workspace/node_modules
Isolates node_modules from the project folder to avoid conflicts between host and container installations.
- build:/workspace/build
Shares build artifacts (e.g., compiled files) between the container and host.
- ../:/workspace
Mounts the entire project directory into the container, making the source code available inside the container at /workspace.
Named Volumes
- node_modules
Keeps dependency installations isolated from the host system.
- build
Shares build outputs between the container and host.
Dockerfile
To ensure that our development container has everything it needs to work out of the box, we created a Dockerfile inside the .devcontainer folder. This file defines a minimal Node.js-based image preconfigured for our frontend project.
Here’s a breakdown of what it does:
1. Base Image and Working Directory
FROM node:20.15.1
EXPOSE 3000
WORKDIR /workspace
- The base image node:20.15.1 pins the exact Node.js version, so every developer gets the same runtime.
- Port 3000 is exposed, aligning with the port used by our development server.
- The working directory inside the container is set to /workspace, which will contain our project source code.
2. Cypress Integration
RUN apt-get update && \
export DEBIAN_FRONTEND=noninteractive && \
apt-get -y install --no-install-recommends \
libgtk2.0-0 \
libgtk-3-0 \
libgbm-dev \
libnotify-dev \
libgconf-2-4 \
libnss3 \
libxss1 \
libasound2 \
libxtst6 xauth xvfb
These system libraries are necessary for running Cypress in a headless browser environment (Electron/Chrome). xvfb creates a virtual display for GUI tools in headless containers.
3. Preparing for Dependency Installation
COPY package.json pnpm-lock.yaml* ./
COPY .husky .husky
- We copy the dependency manifest files (package.json, pnpm-lock.yaml) to the container. This enables us to install packages without copying the entire codebase—saving time during image builds.
- We also copy the .husky folder, which contains Git hooks for enforcing code quality checks.
4. Secure Private Package Registry Access
ARG NPM_TOKEN
RUN echo "...auth token config..." > .npmrc
- Using a build argument (NPM_TOKEN), we inject credentials securely during build time to authenticate with our private package registry.
- The .npmrc file is dynamically generated inside the image, allowing access to scoped internal packages during installation.
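Since the real token configuration is project-specific (and elided above), here is only a hedged sketch of what such a generated .npmrc could look like. The scope name and registry URL below are hypothetical placeholders, not our actual values:

```shell
# Hypothetical sketch: "@myscope" and the registry URL are placeholders,
# not real project values. NPM_TOKEN would arrive as a Docker build arg.
NPM_TOKEN="bXl0b2tlbg=="   # base64-encoded token (made-up example)

# Write an .npmrc that points the scoped packages at a private registry
# and attaches the auth token to that registry host.
cat > .npmrc <<EOF
@myscope:registry=https://pkgs.example.com/npm/registry/
//pkgs.example.com/npm/registry/:_auth=${NPM_TOKEN}
always-auth=true
EOF
```

Because the token is interpolated at build time only, it never needs to be committed to the repository; the .env file stays local to each developer.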
5. Installing Dependencies and Preparing Tools
RUN npm install -g pnpm
RUN pnpm config set store-dir /root/.local/share/pnpm/store/v10 --global
RUN pnpm install
RUN pnpm config set side-effects-cache false --location project
RUN pnpm --allow-build=cypress add --save-dev cypress
RUN pnpm prepare
- pnpm is globally installed to handle dependency management efficiently.
- pnpm install installs all project dependencies listed in the lockfile.
- pnpm prepare runs project-specific setup scripts (such as setting up Git hooks with Husky), ensuring the container is ready for development tasks.
- pnpm --allow-build=cypress add --save-dev cypress adds Cypress with explicit permission to build native modules in the container.
How to Start the Project
As part of our DevContainer setup, we created an initialization script that helps developers configure personal credentials and environment variables required for accessing private registries and Git repositories. This script simplifies onboarding while keeping secrets out of source control.
#!/bin/bash
set -e
echo "Welcome to ${project_name}"
echo ""
echo "Open the following URL and create a new token (full access, latest possible expiry date):"
echo " 👉 https://project_url/tokens"
read -p "Paste here your token: " TOKEN
TOKEN_BASE64=$(python3 -c "import base64; print(base64.b64encode(b'$TOKEN').decode())")
echo ""
echo "2️⃣ Now open this link, click on 'Clone' and then 'Generate Git Credentials':"
echo " 👉 https://project_url"
read -p "Paste here your credential: " CREDENTIAL
# Create the .env
cat > .devcontainer/.env <<EOF
NPM_TOKEN=$TOKEN_BASE64
GIT_CREDENTIAL=$CREDENTIAL
EOF
echo "✅ Setup successfully completed!"
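One step worth unpacking is the base64 encoding: the script encodes the token before writing it to .env, presumably because the registry expects it in that form (Azure Artifacts feeds, for instance, use base64-encoded personal access tokens in .npmrc). A minimal, standalone version of that step, with a made-up token, looks like this:

```shell
# Encode a token the same way the init script does.
# "mytoken" is a made-up placeholder, not a real credential.
TOKEN="mytoken"
TOKEN_BASE64=$(python3 -c "import base64; print(base64.b64encode(b'$TOKEN').decode())")
echo "$TOKEN_BASE64"   # prints bXl0b2tlbg==
```

Note that interpolating $TOKEN into the Python one-liner works for simple tokens but would break on quotes; piping the token through `base64` via stdin would be more robust where that CLI is available.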
To streamline the setup process and avoid manual command execution, we introduced a Makefile with a convenient target to run the credential initialization script. This allows any developer to initiate the environment setup with a single command:
make init
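A minimal version of that Makefile target might look like the following. The script filename here is an assumption; adjust the path to wherever your credential script lives:

```makefile
# Hypothetical sketch: adjust the script path to match your repository.
.PHONY: init
init:
	bash .devcontainer/init.sh
```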
- Follow the prompts to enter your personal access token, Git credentials, name, and email. This securely generates the .env file required for the container.
- Open the DevContainers extension in VS Code.
- Click on “Reopen in Container” or select “Rebuild and Reopen in Container” if changes were made to .devcontainer files.
- Once you are connected to the container, run the pnpm start command.
And voila! You have the project running on the dev container now!
TIP: When using DevContainers, you might occasionally need to clear the Docker build cache and rebuild the container to guarantee a clean environment. This is especially useful when:
- Dockerfile or dependencies change: If you’ve updated your Dockerfile, devcontainer.json, or added/remapped volumes, Docker may reuse cached layers and not reflect the latest changes. Removing the cache forces Docker to rebuild everything from scratch, applying all updates properly.
- Stale node_modules or environment issues: Mounted volumes like node_modules might cause conflicts between host and container dependencies, especially in JavaScript/Node projects. Rebuilding helps reset these to match the container’s environment.
- Corrupted or outdated extensions/config: The .vscode-server volume can sometimes have mismatched versions or broken extensions. Rebuilding clears that and reinstalls clean versions.
Conclusion
VS Code DevContainers redefine the way we approach local development. With DevContainers, you don’t need to install Node.js, pnpm, or even configure Git or NPM tokens on your host machine. Everything (tools, extensions, settings, and credentials) lives inside the container. For teams juggling multiple projects or working across various Git profiles, this isolation is a game-changer: your credentials stay scoped and clean.
And yes, the project runs. No local configuration hell. No mismatched environments. Just clone, run make init, and dive into a fully configured VS Code DevContainer workspace.
That said, DevContainers are not perfect. They’re tied to IDEs. If you rely on external terminals like Warp or other custom tooling outside of your editor, this might pose a limitation.
Still, for most modern frontend workflows, DevContainers offer a balance of speed, consistency, and portability, making them a solid investment for any team serious about scalable development.
If you are interested in this type of content, feel free to visit Apiumhub’s tech blog. Every month, new content about the latest trends and technologies is published.
Author
Graduated from Istanbul Technical University with a bachelor’s degree in computer engineering in June 2018. During her studies, she was involved in projects in various areas, including web development, computer vision, and computer networks.