Season 29 · 18 Episodes · 1h 5m · 2026

Docker Masterclass

2026 Edition. A comprehensive audio course on Docker, covering container basics, images, Dockerfile, networking, Compose, CI/CD, and the latest AI features like MCP Toolkit, Docker Sandboxes, and Docker Agent.

Containerization · DevOps
1
The Dev Equals Prod Promise
Discover why Docker fundamentally changed software development. This episode covers the core value proposition of separating applications from infrastructure and achieving perfect parity between development and production environments.
3m 20s
2
Containers vs Virtual Machines
Understand the architectural differences between containers and VMs. Learn how containers achieve isolation by sharing the host kernel, making them incredibly lightweight compared to traditional hypervisors.
4m 02s
3
The Anatomy of a Docker Image
Explore what a Docker image actually is. This episode explains the principles of image immutability and layer composition, showing how file system changes are stacked to create a container template.
3m 27s
4
The Dockerfile Blueprint
Learn how to write a Dockerfile to build custom images. We cover essential instructions like FROM, RUN, and CMD, and explain the crucial difference between shell and exec forms.
3m 19s
5
Mastering the Build Cache
Optimize your image builds using Docker's build cache. Learn why the order of instructions in your Dockerfile is critical to preventing unnecessary dependency installations.
3m 40s
6
Multi-Stage Builds
Keep your production images lean and secure. This episode introduces multi-stage builds, demonstrating how to separate your heavy compilation environment from your minimal runtime environment.
3m 58s
7
Running and Interacting
Learn the practical mechanics of running containers. We cover detached versus interactive modes, basic port publishing, and how to execute shell commands inside a running container.
4m 00s
8
Data Persistence Basics
Prevent catastrophic data loss when containers are deleted. This episode compares Bind Mounts for local development hot-reloading with Docker Volumes for secure database persistence.
3m 27s
9
Container Networking
Understand how Docker handles network traffic. Learn the basics of port publishing to the host and how containers securely communicate with each other over isolated bridge networks.
3m 38s
10
Introduction to Docker Compose
Move beyond single container commands. Learn how Docker Compose uses a declarative YAML file to define, network, and orchestrate multiple services simultaneously.
4m 06s
11
Docker in the CI/CD Pipeline
Eliminate flaky tests with containerized build environments. This episode covers how to use Docker in Continuous Integration pipelines to guarantee perfectly reproducible automated tests.
3m 27s
12
Multi-Platform Images
Solve the Apple Silicon vs. Cloud Server mismatch. Learn how Docker Buildx allows you to cross-compile and package applications for both ARM and AMD64 architectures simultaneously.
3m 26s
13
The Docker MCP Toolkit
Securely connect your AI agents to local tools. This episode introduces the Docker Model Context Protocol (MCP) Toolkit, explaining how to manage containerized MCP servers using catalogs and profiles.
3m 28s
14
Dynamic MCP Auto-Discovery
Explore Dynamic MCP, an experimental feature that allows AI clients to search the Docker MCP Catalog and dynamically install new tool servers during a conversation without manual setup.
3m 54s
15
Docker Sandboxes for AI
Understand the architecture of Docker Sandboxes. Learn why autonomous AI coding agents require isolated microVMs with dedicated Docker daemons instead of standard container namespaces.
3m 55s
16
Building AI Agent Teams
Stop relying on a single AI model for complex tasks. This episode introduces the Docker Agent framework, showing how to compose specialized teams of agents defined in YAML.
3m 51s
17
Agent Toolsets and Workflows
Make your AI agents actually useful by giving them the right constraints. Learn how to configure filesystem toolsets and enforce structured development workflows in Docker Agent.
3m 43s
18
AI Models in Compose
Treat your local LLMs just like any other application dependency. Learn how to declare, configure, and bind AI models directly inside your Docker Compose YAML file.
3m 02s

Episodes

1

The Dev Equals Prod Promise

3m 20s

Hi, this is Alex from DEV STORIES DOT EU. Docker Masterclass, episode 1 of 18. You just spent three days hunting down an error that only triggers on the staging server. The code executes flawlessly on your laptop, but the moment it hits the deployment pipeline, it breaks. The culprit is almost always a mismatched system library, a different runtime version, or a missing environment variable. This is the exact friction Docker was built to eliminate by delivering the dev equals prod promise. Docker is an open platform for developing, shipping, and running applications. Its core purpose is to separate your applications from your infrastructure. Historically, developers wrote code and operations teams provisioned servers. The developers would hand off the application, and the operations team would spend hours or days configuring the host machine to meet the software requirements. This manual alignment of environments is fragile and slow. Docker solves this by packaging the application together with its dependencies, system tools, libraries, and runtime into a standardized unit called a container. You might associate this concept with traditional virtual machines. While they share the goal of isolating applications, containers are vastly lighter because they do not require a full guest operating system. We will cover that architectural difference in the next episode. The focus right now is on what this packaging achieves. Here is the key insight. Because the container holds both the code and the precise environment needed to execute it, the underlying host machine becomes largely irrelevant. Docker ensures that if a container runs on your local development laptop, it will run exactly the same way on a quality assurance server, and exactly the same way in a production data center. You eliminate the phrase it works on my machine because your machine and the production machine are now providing the exact same execution environment. The workflow looks like this. A developer writes code locally and defines the required environment in a plain text configuration file. Docker reads that file and builds a static artifact called an image. That single, immutable image is what gets tested. When the tests pass, that exact same image is deployed to production. You are not copying code to a server and running a setup script. You are moving the entire working environment as one sealed unit. This portability changes how systems scale. Because containers are standardized and lightweight, spinning up new instances of an application in response to a traffic spike happens in seconds. You can easily shift workloads across different environments, moving an application from a local testing server to a cloud provider without changing a single line of code or reconfiguring the host. The ultimate takeaway is that Docker transforms infrastructure into a predictable commodity, creating a rigid boundary where developers own the complete environment inside the container, and operations simply provides the compute power to run it. If you want to help support the show, search for DevStoriesEU on Patreon. That is all for this one. Thanks for listening, and keep building!
2

Containers vs Virtual Machines

4m 02s

Hi, this is Alex from DEV STORIES DOT EU. Docker Masterclass, episode 2 of 18. You do not need to boot an entire operating system just to run a single Python script. Yet, for years, developers accepted heavy overhead and slow boot times to keep their applications isolated from one another. Today, we fix that by looking at Containers vs Virtual Machines. Consider running a complex stack locally. You need a React frontend, a Python API, and a PostgreSQL database all operating simultaneously. If you install them directly onto your host machine, you invite dependency conflicts. The API might require a specific version of a system library that clashes with what your database needs. A container solves this by acting as a sandboxed process. Here is the key insight. A container is not a miniature computer. It is strictly a process running natively on your host machine. The magic lies in isolation. Through features built into the operating system, this process is given its own private filesystem, its own networking stack, and an isolated view of the system. To the Python API running inside, it appears to be the only software on the machine. To your host operating system, it is just another standard process, much like your web browser or text editor. Because a container is just a process, it shares the host operating system kernel. When the PostgreSQL database inside a container needs to allocate memory or write a record to disk, it talks directly to the host kernel. There is no middleman and no secondary operating system booting up in the background. This is why starting a container is nearly instantaneous. It takes exactly as long as starting the raw application itself. Now, contrast this directly with a Virtual Machine. A virtual machine approaches isolation by simulating hardware. It relies on a hypervisor, which is software that carves out a virtual CPU, virtual memory, and a virtual disk drive. On top of this fake hardware, you must install a complete guest operating system. If you want to run that same Python API in an isolated VM, you must boot an entire Linux distribution. The VM loads its own separate kernel, initializes device drivers, and starts background system services before it even thinks about executing your Python code. Every time the application needs to read a file, the request passes through the guest operating system, down to the hypervisor, and finally to the host hardware. This provides incredibly strong security isolation, but it comes at a steep cost in CPU cycles, memory usage, and boot time. Because of these differences, a common misconception is that you must choose one or the other. In reality, containers and virtual machines are not mutually exclusive. In modern cloud environments, they are almost always used together. When you provision a cloud instance, you are renting a virtual machine. That virtual machine provides a strong hardware-level boundary separating your workload from other customers on the same physical server. You then install a container runtime inside that virtual machine to run your React frontend, your API, and your database. The virtual machine isolates the infrastructure, while the containers isolate the individual applications. The distinction ultimately comes down to boundaries. Virtual machines virtualize the hardware to run multiple operating systems, while containers virtualize the operating system to run multiple isolated processes. Thanks for spending a few minutes with me. Until next time, take it easy.
3

The Anatomy of a Docker Image

3m 27s

Hi, this is Alex from DEV STORIES DOT EU. Docker Masterclass, episode 3 of 18. You deploy an application, and it works perfectly. Two weeks later, you restart the exact same application on the same machine, and it crashes because a system library updated itself in the background. That silent environmental drift is exactly what we eliminate by understanding the anatomy of a Docker image. First, clear up the most common point of confusion. An image is not a container. An image is a static template. It contains your application code, your libraries, your system tools, and your runtime. A container is simply a running instance of that image. You can start a thousand containers from one image, but the image itself just sits on disk, waiting to be read. The defining characteristic of a Docker image is immutability. Once an image is created, it is never modified. You cannot change a configuration file inside an existing image. If you want to change an image, you must build a completely new one. This immutability guarantees that an image tested on your laptop behaves identically in production. The template cannot drift over time. If you cannot modify an image, you must construct new ones. A Docker image is not a single, massive file. It is a composition of multiple, independent layers stacked on top of each other. Each layer represents a specific set of filesystem changes, which means adding, modifying, or removing files. Consider a Node dot js application. You rarely build the operating system yourself. Instead, you start with a base image. This base image contains a minimal Linux distribution and the Node runtime. That base is actually made of its own layers, but to you, it acts as the foundation. When you add your application to this foundation, Docker records your changes as new layers stacked on top. First, you bring in your dependency configuration file. That creates a new layer. Next, you instruct the system to download and install your dependencies. All those downloaded libraries are packaged into the next layer. Finally, you copy your application source code. That forms the top layer. When you start a container, Docker stacks these layers using a union filesystem. This makes all the independent layers look like one standard directory structure. If the exact same file path exists in two layers, the version in the upper layer hides the version in the lower layer. Here is the key insight. Because layers are immutable, they are heavily cached and shared. If you update your application source code and build a new image, Docker calculates what changed. It sees that the base Linux image, the Node runtime, and your dependency layer are identical to the previous build. It reuses those existing layers instantly and only creates a new layer for your updated code. This turns a deployment that could take minutes into an operation taking milliseconds. This architecture also saves disk space and network bandwidth. When you push your new image to a server, you only transmit the single layer containing the new source code. The server already has the base layers. By enforcing immutable layers, Docker guarantees that your application environment cannot silently change, while ensuring you only ever transmit or store the exact bytes you modified. That is your lot for this one. Catch you next time!
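For reference, the Node.js example from this episode maps onto a Dockerfile like the following minimal sketch; the base image tag and file names are illustrative, not from the episode:

```dockerfile
# Each instruction below records its own immutable layer.

# Base layers: a minimal Linux distribution plus the Node runtime
FROM node:20-slim
WORKDIR /app
# Next layer: the dependency configuration file
COPY package.json .
# Next layer: the downloaded dependency libraries
RUN npm install
# Top layer: the application source code
COPY . .
```

Running docker history against the built image lists each of these layers along with the instruction that created it.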
4

The Dockerfile Blueprint

3m 19s

Hi, this is Alex from DEV STORIES DOT EU. Docker Masterclass, episode 4 of 18. The entire operating environment for your complex application can be expressed in just ten lines of plain text. You hand a single file to a build system, and out comes a perfectly configured system ready to run anywhere. This is the Dockerfile Blueprint. A Dockerfile is a text document containing all the commands a user could call on the command line to assemble an image. The format is simple. Each line starts with an instruction, written in uppercase by convention, followed by the arguments for that instruction. Docker reads this file line by line, from top to bottom. Every valid Dockerfile must start with a foundation. You define this using the FROM instruction. If you write FROM ubuntu, Docker pulls the official Ubuntu image and uses it as the starting point. Every subsequent line in your file will modify this base environment. Once your base is set, you usually need to install dependencies. You do this using the RUN instruction. The RUN instruction executes any command inside the current image and commits the result. If you write RUN apt-get update followed by your package installation commands, Docker starts up the environment, runs the package manager, installs the software, and saves the new state. Next, you need your actual application. An operating system with installed dependencies is useless without your code. The COPY instruction handles this. You provide a source path from your local workspace and a destination path inside the image. Docker takes your files and copies them directly into the container filesystem. Building the image is only the first phase. You also have to tell Docker what application to launch when a container starts up. You define this default behavior using either the CMD or ENTRYPOINT instruction. Here is the key insight. There are two distinct ways to format these execution instructions, and mixing them up causes subtle bugs. The first approach is the shell form. You write the instruction followed by the command exactly as you would type it in a terminal. When Docker sees this, it wraps your command in a shell, executing it via bin slash sh. This is convenient because environment variables expand automatically. However, the shell process sits between Docker and your application. If Docker sends a signal to gracefully stop the container, the shell catches it and your application never receives it, leading to forced terminations. The second approach is the exec form. This is written as a JSON array. You format it with brackets, providing the executable as the first string and its arguments as the subsequent strings. When you use the exec form, Docker bypasses the shell entirely. It runs your executable directly. Your application becomes process ID one inside the container. This guarantees that system signals pass directly to your application, ensuring smooth and predictable shutdowns. If you want a stable production container, always use the exec form for your final commands so your application controls its own lifecycle. That is all for this one. Thanks for listening, and keep building!
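A minimal sketch of the blueprint described above; the package and file names are assumed for illustration:

```dockerfile
FROM ubuntu:24.04
RUN apt-get update && apt-get install -y python3
COPY app.py /app/app.py

# Shell form (avoid for the final command): Docker wraps the command
# in /bin/sh, so stop signals hit the shell instead of your app:
#   CMD python3 /app/app.py

# Exec form (preferred): a JSON array with no shell wrapper; the app
# runs directly as process ID 1 and receives signals itself.
CMD ["python3", "/app/app.py"]
```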
5

Mastering the Build Cache

3m 40s

Hi, this is Alex from DEV STORIES DOT EU. Docker Masterclass, episode 5 of 18. If your Docker build takes ten minutes every time you change one single line of source code, you are doing it wrong. The fix is entirely in how you structure your file, and that means mastering the build cache. When you trigger a Docker build, the builder processes your Dockerfile sequentially, from top to bottom. Every single instruction, whether it is copying a directory or running a script, generates a separate layer in the resulting image. Because executing these steps requires compute time and network bandwidth, Docker automatically saves the output of each step into a local build cache. On subsequent builds, the engine attempts to reuse these saved layers to skip redundant work. To determine if a cache hit is possible, Docker evaluates each instruction against the existing cache history. For instructions that run commands, it checks if the command string itself is identical to the one used in the previous build. For instructions that copy files from your host machine into the image, Docker goes a step further. It calculates a checksum for the contents and metadata of every file being copied. It then compares this new checksum to the checksum of the files in the previously cached layer. If the checksums match perfectly, Docker reuses the cached layer and proceeds to the next line. If even a single byte differs, the cache is invalidated. Pay attention to this bit. Cache invalidation is a strict chain reaction. The instant Docker detects a change and invalidates a layer, it stops looking at the cache for the rest of the build. Every single instruction that comes after the invalidated layer is forced to execute from scratch. This happens because each layer relies on the exact state of the layer before it. This chain reaction dictates how you must organize your Dockerfile. Consider a Node application where you manage external dependencies. A frequent mistake is using a single instruction to copy your entire project folder into the image, followed by an instruction to run your package installation command. If you modify a single line in a text file somewhere in your source code, the checksum for the copy instruction changes. The cache breaks at that step. Consequently, the next instruction is forced to run. You wait for hundreds of megabytes of dependencies to download again, even though your actual list of dependencies remained entirely untouched. The optimal approach isolates the dependencies from the application code. First, you add an instruction to copy only your dependency configuration file, specifically your package manifest, into the image. Second, you run the command to download the dependencies. Third, you add a separate instruction to copy the rest of your general source code. Now, when you modify that same text file and rebuild, Docker evaluates the first instruction. The dependency manifest has not changed, so the cache is used. It moves to the installation step. Since the preceding layer was a cache hit and the command string is identical, the cache is used here again, skipping the massive download. The cache only breaks at the final instruction, where the builder copies your updated source files. A ten-minute wait becomes a two-second update. The most effective way to speed up your pipeline is to order your instructions strictly from the least likely to change to the most likely to change. That is all for this one. Thanks for listening, and keep building!
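Here is a minimal sketch of the optimal ordering for the Node example above; the file names assume a standard npm project:

```dockerfile
FROM node:20-slim
WORKDIR /app

# Copied first: the dependency manifest changes rarely, so this layer
# and the install step below stay cached across most rebuilds
COPY package.json package-lock.json ./
RUN npm install

# Copied last: editing a source file only breaks the cache from here on
COPY . .
CMD ["node", "server.js"]
```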
6

Multi-Stage Builds

3m 58s

Hi, this is Alex from DEV STORIES DOT EU. Docker Masterclass, episode 6 of 18. Shipping your compiler to production is a massive security risk, and it inflates your container size by gigabytes. You write clean code, but your final artifact gets dragged down by all the heavy tools needed just to build it. The solution to this is multi-stage builds. When you build applications in compiled languages like Java, Go, or C++, the compilation process requires build tools, software development kits, and raw source code. Historically, developers used a standard approach where they installed all these dependencies into the container, compiled the code, and then ran the application. The problem is that all those build tools remain in the final production image. You end up deploying your compiler, your package manager, and intermediate files alongside your actual application. This makes your container enormous. Large containers take longer to pull across the network and consume more storage. Even worse, it creates a massive attack surface. If an attacker breaches your container, they suddenly have a fully equipped development environment at their disposal. A common misconception is that fixing this requires maintaining two separate files—one file to build the software, and a script to extract the result and pass it to a second file for deployment. That is not the case. Multi-stage builds handle this entire separation of concerns inside a single file. Here is the key insight. A multi-stage build allows you to define multiple distinct environments, or stages, sequentially. Each stage begins by defining its own base image. You start the first stage with a heavy base image that contains all your development tools. You assign this stage a name, such as builder. Inside this builder stage, you copy your source code from your local machine and execute your compile commands. The builder stage does the heavy lifting, generating the final executable file. Then, further down in that exact same file, you define a second base image. This initiates a brand new stage. For this stage, you choose a minimal runtime image. This environment only contains the exact dependencies needed to run the application, with zero build tools. Instead of copying files from your local machine again, you use a specialized copy instruction. This instruction tells the build engine to reach back into the builder stage, grab only the finished, compiled artifact, and drop it into your new, minimal stage. When the build engine finishes, it produces a container based solely on the final stage. Everything from the first stage—the compiler, the downloaded packages, the source code—is completely discarded. It never makes it into your production image. Consider a concrete scenario involving a Java Spring Boot application. In your file, your first stage uses a bulky Maven image. Inside this stage, you run the Maven command to package your application. Maven downloads all necessary project dependencies, compiles the Java code, and packages it into a finished JAR file. Next, you start the second stage using a lightweight Java Runtime Environment base image. You do not install Maven in this environment. You do not copy your Java source files here. Instead, you instruct the engine to copy only the compiled JAR file directly from the Maven stage into this minimal runtime environment. Finally, you set the default command to execute that JAR file. 
By strictly separating the build environment from the runtime environment, you guarantee that your production container is completely isolated from your build tools. The final image only ever sees the compiled artifact and the bare minimum runtime, keeping your deployment fast, lean, and highly secure. That is your lot for this one. Catch you next time!
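The Spring Boot scenario translates into a single Dockerfile along these lines; the image tags and JAR path are illustrative and depend on your project:

```dockerfile
# Stage 1: heavy build environment with Maven and a full JDK
FROM maven:3.9-eclipse-temurin-21 AS builder
WORKDIR /build
COPY . .
RUN mvn package

# Stage 2: minimal runtime with only a JRE; no Maven, no source code
FROM eclipse-temurin:21-jre
# Reach back into the builder stage for just the compiled artifact
COPY --from=builder /build/target/app.jar /app.jar
CMD ["java", "-jar", "/app.jar"]
```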
7

Running and Interacting

4m 00s

Hi, this is Alex from DEV STORIES DOT EU. Docker Masterclass, episode 7 of 18. Starting a process in the background is exactly what you want for a production web server. But when that server refuses to load your page, you need a way to break the glass, step into the environment, and see what is actually broken. This episode covers running and interacting with containers. The core command to start any container is docker run. By default, if you run a container, it attaches its output directly to your terminal screen. It takes over your prompt, and if you press control C, the container terminates. For a long-running service like an Nginx web server, this is entirely impractical. You want the server to run in the background. You achieve this using the detached mode flag, which you type as a single dash d. When you pass dash d, Docker starts the container, prints a long, unique container ID to your screen, and immediately gives you your terminal prompt back. The container continues to run quietly in the background. However, that running container is isolated. Even if Nginx is actively serving traffic on port 80 inside the container, your host machine cannot see it. You have to explicitly punch a hole through that network isolation. You do this with the publish flag, typed as dash p. This allows you to map a specific port on your host laptop to a specific port inside the container. If you specify dash p 8080 colon 80, Docker intercepts any web traffic hitting your laptop on port 8080 and routes it directly to port 80 inside the container. Now you have a detached web server you can successfully reach from your local browser. But what happens when you load the page and see a configuration error? Your Nginx server is running in the background, but you need to read the configuration files on its filesystem. Here is the key insight. You do not need to stop a container to look inside it. Instead, you use the docker exec command. While docker run creates a brand new container, docker exec runs a brand new command inside an already existing, running container. To get a useful, working terminal, you need to combine two specific flags into dash i t. The i stands for interactive. This keeps the standard input channel open, allowing you to actually type commands into the container. The t allocates a pseudo-TTY. This tricks the container into thinking it is connected to a physical terminal, which is necessary for command prompts and text formatting to display correctly. If you run docker exec dash i t, followed by the container name and the command slash bin slash bash, you instantly drop into a command prompt inside the running Nginx container. You are now inside the box. You can read configuration files, check error logs, and inspect the filesystem exactly as you would on a standard Linux server. When you are finished, typing exit closes your temporary shell session. The Nginx container itself remains completely unaffected, still running in the background. Eventually, you will need to clean up. Running docker stop with the container name sends a termination signal, giving the application time to shut down gracefully. However, stopping a container does not delete it from your system. The stopped container remains on your hard drive, retaining its logs and any internal filesystem changes. To delete it permanently and free up that disk space, you run the docker rm command. The most critical distinction to memorize is the difference between run and exec. 
Docker run boots a brand new isolated system, while docker exec allows you to step inside a system that is already breathing. Thanks for listening. Take care, everyone.
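The full lifecycle from this episode, condensed into commands; the container name is illustrative:

```bash
# Start Nginx detached (-d) and map host port 8080 to container port 80 (-p)
docker run -d -p 8080:80 --name web nginx

# Open an interactive shell (-it) inside the running container
docker exec -it web /bin/bash

# Gracefully stop the container, then delete it to reclaim disk space
docker stop web
docker rm web
```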
8

Data Persistence Basics

3m 27s

Hi, this is Alex from DEV STORIES DOT EU. Docker Masterclass, episode 8 of 18. You deploy a database inside a container, write thousands of rows, and everything runs perfectly. Then you delete the container to update the image, and your entire database vanishes forever. By default, container storage is strictly temporary. To prevent data loss, we need Data Persistence Basics. When a container starts, it creates a writable layer on top of its underlying image. Any files the container creates or modifies are stored in this specific layer. If the container is destroyed, that layer is destroyed right along with it. The data is entirely ephemeral. It does not exist outside the container's own lifecycle. To keep data safe, you have to route it out of the container and onto the host machine. Docker offers two primary mechanisms for this: bind mounts and managed volumes. A bind mount maps a specific, explicit path on your host machine directly to a path inside the container. You tell Docker exactly which folder on your laptop should appear inside the container environment. This is heavily dependent on your host operating system and local file structure. The host machine retains full control over the files. This approach is perfect for local development. You bind mount your local source code directory into your container's web application path. When you edit and save a script on your laptop, the container reads that updated file immediately. You get instant hot-reloading without rebuilding the container image every time you change a line of code. The second mechanism is a managed volume. Instead of pointing to a specific path you control on your hard drive, you ask Docker to create a storage entity. Docker provisions the space on the host machine and manages it completely. You do not need to know where Docker physically puts the files on your host system. You just give the volume a name and tell the container where to mount it internally. Volumes are the standard solution for database persistence. When running PostgreSQL, you create a volume by running a simple command and giving it an identifier, like db-data. Then, when starting your container, you pass a configuration flag linking that db-data volume to the internal path where Postgres writes its table records. If you stop and delete the database container, Docker leaves the volume completely alone. When you spin up a new container later, you simply attach that existing volume, and all your records are intact. Here is the key insight. The choice between these two methods comes down to who needs to access the files. Use bind mounts when your host machine needs to actively interact with the data, like a developer editing source code. Use managed volumes when the container owns the data, like a database engine writing records, and you just want Docker to keep those files safe across container restarts. Ephemeral containers are a design choice, not a flaw, because they force you to decouple your data from your runtime compute. Always assume your container will be destroyed immediately, and explicitly map your persistent state outside of it. If you find these episodes helpful, you can support the show by searching for DevStoriesEU on Patreon. That is all for this one. Thanks for listening, and keep building!
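Both mechanisms from this episode, sketched as commands; the host path and mount points are illustrative:

```bash
# Bind mount: an explicit host directory appears inside the container,
# giving instant hot-reloading for local development
docker run -d -v "$(pwd)/src:/usr/share/nginx/html" nginx

# Managed volume: Docker owns the storage and keeps it across container
# deletions, which is the standard pattern for database persistence
docker volume create db-data
docker run -d -e POSTGRES_PASSWORD=secret \
  -v db-data:/var/lib/postgresql/data postgres
```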
9

Container Networking

3m 38s

Hi, this is Alex from DEV STORIES DOT EU. Docker Masterclass, episode 9 of 18. By default, a running container is completely sealed off from the outside world. It sits in a private bubble, and if you want the internet to reach it, you have to intentionally punch holes through that isolation. Managing those holes and the connections between containers is the job of Container Networking. When a container starts, Docker assigns it an internal IP address. The container can usually reach out to the internet to download updates or make network calls, but nothing outside the host machine can reach in. To accept incoming traffic, you use port publishing. Publishing takes a port on your physical host machine and binds it directly to a port inside the container. If you have a web server container listening internally on port eighty, you can publish it to port eighty-eighty on your host. When a user sends a request to your host machine at port eighty-eighty, Docker intercepts it and forwards it straight through the firewall to the container on port eighty. You configure this mapping at startup using the publish flag. Without that flag, the container remains inaccessible to the outside network. That covers external traffic. Now, the second piece of this is internal communication. Applications rarely run as a single isolated process. You usually have multiple containers that need to share data. By default, Docker attaches every new container to a built-in network called the default bridge. A bridge is a software-based network switch running on your host machine. It connects containers so they can exchange packets, while isolating them from external networks. Here is the key insight. The default bridge allows containers to communicate using their internal IP addresses, but container IP addresses change every time a container restarts or updates. Hardcoding an IP address in your application configuration will break your system almost immediately. To solve this, you create a user-defined bridge network. When you attach multiple containers to a custom user-defined bridge, Docker provides automatic internal DNS resolution. This means containers can find each other using their exact container names. Consider a scenario where you have a backend application container and a database container. You create a single custom bridge network and attach both containers to it. Inside your backend application code, you do not write a database connection string using a fragile IP address. You simply use the database container's name as the host address. Docker intercepts the DNS query, finds the database container on that specific bridge, and routes the traffic to the correct internal IP address dynamically. This design gives you total control over application security. The backend and the database can talk to each other freely across the custom bridge, but no external traffic can reach the database. To run your application securely, you leave the database hidden on the private internal bridge with no ports published. Then, you publish only the backend container's port to your host machine. External users hit the public backend port, and the backend securely queries the database over the private bridge. The architecture of your application dictates your network topology: use published ports to invite external users in, and custom user-defined bridges to let your internal containers talk to each other securely by name. That is all for this one. Thanks for listening, and keep building!
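The backend-plus-database topology described above, as a command sketch; the backend image name is hypothetical:

```bash
# A user-defined bridge gives containers DNS resolution by name
docker network create app-net

# The database stays private: attached to the bridge, no ports published
docker run -d --name db --network app-net \
  -e POSTGRES_PASSWORD=secret postgres

# Only the backend is published to the host; it reaches the database by
# using the container name "db" as the hostname in its connection string
docker run -d --name backend --network app-net -p 8080:80 my-backend
```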
10

Introduction to Docker Compose

4m 06s

Hi, this is Alex from DEV STORIES DOT EU. Docker Masterclass, episode 10 of 18. You should not need a text document full of complex terminal commands just to start your local development environment. Relying on shell history to remember the exact flags, ports, and network names for multiple containers is a fragile way to work. Introduction to Docker Compose fixes this by turning your entire application stack into one declarative file. When you run an application, it rarely exists in isolation. You usually have a web server, a database, and maybe a caching layer. Starting them manually requires running multiple discrete commands. You have to create a custom network, attach each container to it, expose the right ports, and mount storage drives. If you make a typo in any of those steps, the containers cannot communicate and the application fails. Docker Compose replaces this imperative process with a declarative YAML file, normally named compose dot yaml. Instead of telling Docker exactly what to do step by step, you declare the desired final state of your entire system. Docker Compose figures out the necessary steps to achieve that state. The YAML file is divided into three main structural sections. The first and most prominent section is called services. A service is simply a definition for a specific container in your application. Take a scenario where you are running a Node application alongside a MySQL database. Under the services section, you define two entries. You name the first one web, specifying the Node image and the local ports you want to expose. You name the second one database, specifying the MySQL image and the required environment variables, like the root password. Here is the key insight. You do not need to link these containers manually. By default, Docker Compose automatically creates a single internal network for your application. It attaches all the defined services to this network and assigns each container a hostname that matches its service name. Your Node application code can connect to the database simply by targeting the hostname "database", and the internal DNS resolves it to the correct container IP. You can manually define custom networks in the networks section of the YAML file, but for most standard development setups, the default behavior does exactly what you need. The final structural piece is the volumes section. Databases require persistent storage. If the MySQL container shuts down, you do not want your data wiped out. At the bottom of your YAML file, you declare a named volume. Then, inside your database service definition, you map a specific path inside the container to that named volume. Docker Compose manages the creation and lifecycle of this storage for you. Once your file is written, you manage the entire stack with two commands. You type docker compose up. Compose reads the YAML file, creates the internal network, sets up the volumes, and starts the MySQL and Node containers. If you want to keep working in your terminal, you add the detach flag to run everything in the background. When you are done working, you do not stop and remove each container individually. You type docker compose down. Compose gracefully stops the Node app, stops the database, and removes the containers and the default network, keeping your system entirely clean. It leaves your named volumes intact, meaning your database records are waiting for you the next time you bring the stack up. 
Docker Compose shifts your mindset from managing individual isolated containers to managing complete application environments. Your infrastructure setup becomes a single piece of code that you can commit to version control and share instantly with your team. That is all for this one. Thanks for listening, and keep building!
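A minimal compose.yaml matching the Node-plus-MySQL scenario might look like this; the image tags, ports, and password are illustrative:

```yaml
services:
  web:
    image: node:20-slim
    working_dir: /app
    volumes:
      - ./:/app
    command: node server.js
    ports:
      - "3000:3000"
  database:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: secret
    volumes:
      - db-data:/var/lib/mysql

volumes:
  db-data:
```

With this file in place, docker compose up -d brings the whole stack online, and docker compose down removes the containers and the default network while leaving the db-data volume intact.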
11

Docker in the CI/CD Pipeline

3m 27s

Hi, this is Alex from DEV STORIES DOT EU. Docker Masterclass, episode 11 of 18. You push your code, the pipeline runs, and the tests fail. You run them locally, and they pass perfectly. Your CI server has a slightly older dependency version than your laptop. This drift is the leading cause of notoriously flaky tests, but containerizing your build environment makes every run perfectly predictable. Today, we cover Docker in the CI/CD Pipeline. Historically, Continuous Integration meant maintaining static build servers. Over time, engineers connect to these virtual machines to install packages, update runtimes, and tweak system configurations. These servers turn into pet VMs. They accumulate hidden state and leftover cache files. When a pipeline fails, you waste time figuring out if the code is actually broken or if the server just needs a software update. Using Docker as a build environment completely sidesteps this problem. Instead of executing your test scripts directly on the host operating system of a CI worker, the worker spins up a container. The CI runner pulls a specific Docker image, starts the container, mounts your source code, and executes your build steps inside that isolated boundary. Here is the key insight. When the job finishes, the container is destroyed. The next pipeline run gets a completely fresh, identical environment. There are no conflicting background processes from previous runs. The environment is stateless and entirely defined by the image. Think about the process of upgrading a programming runtime. Suppose you need to move your project from Node 18 to Node 20. In a traditional setup, someone has to log into the build server, update the software system-wide, and hope it does not break other projects sharing that same worker. With Docker as your build environment, that entire process is just a string change. You update the base image tag in your configuration from Node 18 to Node 20. The CI runner pulls the new image. Your build runs in the updated environment instantly. If a test fails, you revert the tag and try again later. You manage the infrastructure directly alongside your code. There is another layer to this. If you are using Docker to build your application, your CI pipeline needs the ability to build and push images. If your CI job is already running inside a container, how do you run Docker build commands? This requires a pattern called Docker-in-Docker. Docker-in-Docker means running an isolated Docker daemon inside your CI container. The outer container provides the controlled environment for your pipeline steps, while the inner daemon processes your application builds. This allows your CI job to pull base images, construct your application container, and push the final artifact to a registry, all without polluting the host machine running the CI worker. Moving your CI environment into a container shifts control of the build system to the developer. The exact same image that builds your code on a remote server can be run on your local machine, guaranteeing that if a test fails in CI, you can reproduce that exact failure locally. That is all for this one. Thanks for listening, and keep building!
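As one concrete illustration, in a GitLab-style pipeline the entire build environment collapses into a single image tag, so the Node 18 to Node 20 upgrade really is a one-line change; the job name and scripts here are assumed:

```yaml
# .gitlab-ci.yml (illustrative)
test:
  image: node:20   # was node:18; this one string is the whole upgrade
  script:
    - npm ci
    - npm test
```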
12

Multi-Platform Images

3m 26s

Hi, this is Alex from DEV STORIES DOT EU. Docker Masterclass, episode 12 of 18. "It works on my machine" takes on a whole new meaning when your local machine uses an ARM processor, but your production cloud runs on Intel. You test your container locally, push it to a registry, pull it on the server, and it crashes instantly with an execution format error. The problem is a hardware architecture mismatch. To fix this, you use Multi-Platform Images. A container image is fundamentally a bundle of binaries and file systems. If you build an image on an Apple Silicon Mac, the resulting binaries are compiled for the ARM64 architecture. When you deploy that image to a standard cloud Linux server running an AMD64 processor, the host CPU literally does not understand the instructions inside the container. Historically, you had to maintain separate build pipelines for different hardware targets. Docker Buildx removes that requirement. Docker Buildx is a command line plugin that extends the standard Docker build system. It uses a backend engine called BuildKit to execute builds concurrently and handle complex tasks like targeting multiple platforms in a single pass. When you build a multi-platform image using Buildx, you are not stuffing two separate file systems into a single giant container. Instead, Buildx creates an image manifest list. Think of this manifest as a routing table. It holds a list of pointers to different architecture-specific images stored in your registry. When a machine pulls your image, its Docker daemon reads this manifest, identifies its own host CPU architecture, and automatically downloads only the image layers that match its hardware. To cross-compile and package a backend API for both architectures simultaneously, you use the docker buildx build command. You include a platform flag, passing it a comma-separated list of your targets. For example, you type the flag, followed by linux slash amd64 comma linux slash arm64. You append your standard image tag, and then you add a push flag. Here is the key insight. When building for multiple platforms at the same time, you cannot just load the final multi-platform image back into your local Docker engine cache. The local daemon is not designed to hold a manifest list pointing to multiple architectures. You must instruct Buildx to push the results directly to your container registry. The registry acts as the storage system that correctly organizes the manifest list and the individual architecture images. To physically execute the build for a processor you do not have, Buildx relies on an emulator called QEMU. Docker Desktop configures this automatically. When your ARM machine reaches a step requiring an AMD64 instruction, the emulator translates it on the fly. This requires zero changes to your Dockerfile. If you need faster build times, you can also use cross-compilation tools directly inside a multi-stage build, which skips emulation but requires setting up specific compiler flags in your code. The true power of a multi-platform manifest is that it completely isolates the consumer from the underlying hardware details. A developer on a Mac and a production cluster running Intel pull the exact same image tag, and the registry serves each the right binary automatically without any extra configuration. Thanks for spending a few minutes with me. Until next time, take it easy.
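The Buildx invocation described above looks like this; the registry and tag are illustrative:

```bash
# Build for both architectures in one pass and push the resulting
# manifest list straight to the registry (it cannot be loaded locally)
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t registry.example.com/my-api:1.0 \
  --push .
```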
13

The Docker MCP Toolkit

3m 28s

Hi, this is Alex from DEV STORIES DOT EU. Docker Masterclass, episode 13 of 18. Giving an AI agent direct access to your local database or filesystem is incredibly powerful. But installing untrusted integration scripts directly onto your host machine to make that happen is a security disaster waiting to happen. The Docker MCP Toolkit fixes this by moving those integrations into isolated containers. The Model Context Protocol, or MCP, is an open standard that lets AI clients, like the Claude desktop app or the Cursor editor, connect to external data sources and tools. To give your AI a new capability, you run a small application called an MCP server. Historically, this meant downloading third-party Python or Node scripts and running them directly on your operating system. That introduces heavy operational friction with dependency conflicts, and more importantly, it gives untrusted code unrestricted access to your machine. The Docker MCP Toolkit solves this by wrapping these servers in standard Docker containers. The first piece of this system is the Catalog. A Catalog is a registry of verified, containerized MCP servers. Instead of pulling random repositories from the internet, you pull standardized Docker images. These images are pre-packaged to run the required tools without requiring any local language runtimes on your host machine. Once you have access to these servers, you need a way to organize them. This is done using Profiles. A Profile is a configuration grouping that defines exactly which tools are needed for a specific project. For example, you might create a profile named web-dev. Inside this configuration, you specify that this profile requires the GitHub server for reading code repositories and the Playwright server for browser automation. You set your API keys and environment variables for both tools once inside the profile configuration. Now, you have isolated tools and a defined profile. How does the AI connect to them? This is where it gets interesting. The connection is managed by the MCP Gateway. The Gateway acts as a central router running on your host. You do not configure your AI client to launch individual containers. Instead, you point Claude or Cursor at the MCP Gateway and request the web-dev profile. When the client connects, the Gateway reads the profile, automatically spins up the requested GitHub and Playwright containers in the background, and establishes the connection. The Gateway brokers the communication between the AI client and the containers using the standard protocol. The AI client believes it is talking to local tools, but all execution happens securely inside Docker. You only configure the tools once in the profile, and you can share that exact setup across any number of different AI applications. If one of those tools misbehaves or is compromised, it is trapped in a container, completely blind to the rest of your system. The true value of the MCP Toolkit is that it separates the configuration of your AI tools from the clients that use them, providing heavy isolation guarantees without sacrificing the intelligence of your workflows. Thanks for listening. Take care, everyone.
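The Toolkit is driven by a docker mcp CLI plugin; here is a rough sketch of the flow, though subcommand names and server identifiers may vary between releases:

```bash
# Browse the catalog of verified, containerized MCP servers
docker mcp catalog show

# Enable the servers a web-dev setup needs (identifiers illustrative)
docker mcp server enable github-official
docker mcp server enable playwright

# Start the gateway that brokers between AI clients and the containers
docker mcp gateway run
```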
14

Dynamic MCP Auto-Discovery

3m 54s

Hi, this is Alex from DEV STORIES DOT EU. Docker Masterclass, episode 14 of 18. You are mid-conversation with an AI coding agent, and you ask it to query a database. Normally, if you forgot to configure the database tool beforehand, the agent fails and asks you to intervene. The problem is manual tool configuration breaking the flow of work. But what if the agent realized it was missing a capability, searched a catalog, and installed the necessary server entirely on the fly? That is exactly what Dynamic MCP Auto-Discovery accomplishes. Typically, providing tools to a large language model means statically defining them in a configuration file before you start the session. If your agent might need to read a GitHub repository, post a Slack message, and query a database, you have to load all those Model Context Protocol servers up front. This approach clutters the context window with tools that might never be used and requires you to predict the agent's needs perfectly. Dynamic MCP shifts this paradigm. It allows the agent to discover and attach tools precisely when the task demands them, without any human intervention. When you enable the dynamic feature, the Docker MCP Gateway exposes a set of management tools directly to the AI agent. The gateway essentially gives the agent the ability to manage its own toolchain. The two critical tools provided by the gateway for this process are mcp-find and mcp-add. The agent interacts with these exactly like it interacts with any standard function call. We can look at how this logic flows using a concrete scenario. Suppose you ask your agent to analyze user metrics stored in a SQL database. The agent evaluates the request, checks its current toolkit, and realizes it does not have any database querying tools loaded. Instead of throwing an error, the agent invokes the mcp-find tool, passing in a relevant search string like postgres. The gateway intercepts this call and queries the configured Docker MCP catalog for available servers matching that string. It returns the metadata and descriptions of the matching servers back to the agent. The agent reads the description, confirms the Postgres server will solve the problem, and moves to the next step. The agent then invokes the mcp-add tool, passing the identifier of the Postgres server it just found. This is where it gets interesting. The gateway catches the mcp-add request, pulls the necessary image, spins up the MCP server in a Docker container, and dynamically binds the new tools to the active connection. The agent suddenly has access to the database tools, connects to your database, runs the query you originally asked for, and returns the result. The entire process happens in the background, keeping your conversation completely unbroken. There is a third tool provided in this management suite for experimental code execution, but it handles a completely different problem set, so we are keeping our focus strictly on discovery today. Here is the key insight about this process. When the agent uses mcp-add to load a new server, that addition is strictly scoped to the current session. The gateway does not rewrite your global configuration files, and the newly added tools do not persist across restarts. When you close the session, the temporary tool binding is destroyed. This ensures your baseline environment remains clean and secure, while still giving the agent maximum flexibility to solve complex, multi-step problems dynamically. 
By exposing catalog search and installation as standard function calls, Dynamic MCP removes the burden of upfront configuration and allows the agent to build its own environment on demand. That is all for this one. Thanks for listening, and keep building!
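To picture the exchange, here is a hypothetical rendering of the two management calls from the scenario above; the real argument schema belongs to the gateway and may differ:

```text
# Step 1: the agent searches the catalog for the missing capability
mcp-find  { "query": "postgres" }

# Step 2: it binds the discovered server to the current session only
mcp-add   { "server": "postgres" }
```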
15

Docker Sandboxes for AI

3m 55s

Hi, this is Alex from DEV STORIES DOT EU. Docker Masterclass, episode 15 of 18. An autonomous AI coding agent is exactly the kind of process you do not want running with root access on your laptop. You ask it to fix a bug, and suddenly it is downloading arbitrary packages, modifying system files, or trying to rebuild your local infrastructure. You need a place where the agent can act like an administrator without actually being one. This is what Docker Sandboxes for AI are designed to solve. Traditionally, Docker isolates processes using Linux namespaces and control groups. Those containers share the host operating system kernel. For a predictable web service, that model works perfectly. But an AI agent is inherently unpredictable. It generates unverified code, executes it, and often needs to install new system packages on the fly to test its own solutions. Sharing the host kernel with an unpredictable agent is too much of a security risk. To address this, Docker Sandboxes abandon standard container namespaces in favor of isolated microVMs. When you spin up a sandbox for an agent, it boots a dedicated, lightweight virtual machine. The agent gets its own distinct kernel. It cannot see your host processes. It cannot access your host network stack by default. Most importantly, it completely eliminates the risk of traditional container escape vulnerabilities. The agent is strictly confined to a hardware-virtualized box. This matters immensely when you consider what AI agents actually do. Imagine your agent is tasked with writing a complex web application, creating a Dockerfile for it, and testing the build. To accomplish this, the agent needs to run Docker commands. If you simply mapped your host system Docker socket into a standard container, the agent could theoretically launch privileged containers directly on your host machine. Docker Sandboxes prevent this by running a completely isolated Docker daemon inside the microVM itself. The agent can build images, pull external dependencies, and run nested containers all day long. Because it is talking to the isolated daemon inside the microVM, your host system Docker environment remains completely unaware and unpolluted. When the task finishes and the sandbox is destroyed, the internal daemon and all its downloaded images vanish immediately. This is where it gets interesting. If the microVM is completely isolated, how do you actually get the finished code back out? The architecture solves this using workspace mounting. This is a secure filesystem passthrough mechanism. When initializing the sandbox, you define a specific directory on your host to act as the workspace. This single directory is safely mounted into the microVM. As the agent writes code, runs tests, or generates assets, it saves them to this workspace directory. The passthrough synchronizes those specific files back to your host filesystem in real time. The agent delivers the requested output without ever having access to the rest of your hard drive. It can freely break things inside the microVM, but your local files remain untouched. The core insight is that isolation in this context is no longer just about protecting the host from malicious external software. It is about safely enabling the unpredictable, highly privileged system operations an autonomous agent must perform to actually be useful. If you enjoy these episodes and want to support the show, you can search for DevStoriesEU on Patreon. That is your lot for this one. Catch you next time!
16

Building AI Agent Teams

3m 51s

Stop relying on a single AI model for complex tasks. This episode introduces the Docker Agent framework, showing how to compose specialized teams of agents defined in YAML.

Hi, this is Alex from devstories.eu. Docker Masterclass, episode 16 of 18. You hand a massive application error to a single AI model. It tries to hold the entire architecture, the logs, and the target syntax in its head all at once. Halfway through, it gets confused and hallucinates a fix for a completely unrelated file. One generic model trying to do everything leads to context overload. To solve complex problems reliably, you need to build teams of AI agents.

The Docker Agent framework lets you define specialized teams of AI agents using a simple YAML configuration file. Instead of writing one monolithic system prompt, you break the workflow into discrete roles, structured as a hierarchy: a root agent that orchestrates the workflow, and multiple sub-agents that execute specific tasks. This isolates context. Each sub-agent only receives the information it needs for its specific job.

Consider a debugging workflow. You need a team with two distinct roles: a bug investigator that analyzes stack traces, and a fixer that actually rewrites the broken code. You define this entire team composition in a file called docker-agent.yml.

You start by configuring the root agent at the top of the file. You give it a name, select an underlying language model, and provide system instructions. The root agent acts as the manager. Its primary responsibility is not to solve the problem directly but to delegate work. You instruct the root agent to coordinate between the investigator and the fixer based on the inputs it receives.

Next, you define the sub-agents within the same YAML file. You declare the bug investigator agent and assign it a model that excels at reasoning and reading logs, with strict instructions to only read stack traces, identify the failing function, and output a brief explanation of why it failed. Then you declare the code fixer agent, perhaps assigning it a model specifically optimized for code generation. Its instructions tell it to take a failing function and output a corrected version. No log analysis; just code in, code out.

When you run this team, the user only interacts with the root agent. You hand the root agent a massive dump of application logs. The root agent evaluates the request, reads the descriptions of its available sub-agents, and determines that the bug investigator is the right agent for the first step. It passes the log dump down to the investigator. The investigator processes the noise, finds a null pointer exception in a specific function, and returns just that detail. The root agent takes that isolated piece of information and passes it to the code fixer, which writes the patch and hands it back to the root manager, who then returns the final, clean result to you.

Here is the key insight. The code fixer never sees the massive stack trace. It only sees the exact function it needs to fix. You protect the context window of the coding model by filtering out the noise beforehand. By assigning narrow, specific instructions to individual sub-agents in the YAML file, you prevent the models from drifting off task. The root agent handles the sequence; the sub-agents handle the execution. Structuring agents hierarchically forces you to treat AI like a microservice architecture, assigning strict boundaries to what any single model is allowed to care about. That is all for this one. Thanks for listening, and keep building!
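As a sketch, the team described above might look like this in docker-agent.yml. The key names (agents, instruction, sub_agents) and the model identifiers are illustrative assumptions based on this episode's description, not the framework's confirmed schema:

```yaml
# docker-agent.yml (illustrative sketch; key names and models are placeholders)
agents:
  root:
    model: openai/gpt-5              # manager model (placeholder identifier)
    description: Coordinates the debugging workflow.
    instruction: |
      You are a manager. Do not solve problems yourself.
      Send stack traces to the investigator, pass its findings
      to the fixer, and return the fixer's patch as the answer.
    sub_agents:
      - investigator
      - fixer

  investigator:
    model: anthropic/claude-sonnet   # reasoning-heavy model (placeholder)
    description: Reads stack traces and isolates the failing function.
    instruction: |
      Only read stack traces. Identify the failing function and
      briefly explain why it failed. Never write code.

  fixer:
    model: openai/gpt-5-codex        # code-generation model (placeholder)
    description: Rewrites broken functions.
    instruction: |
      Given a failing function and an explanation of the failure,
      output a corrected version. No log analysis; code in, code out.
```

Note how each instruction block enforces the narrow role the episode describes: the investigator never writes code, and the fixer never sees raw logs.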
17

Agent Toolsets and Workflows

3m 43s

Make your AI agents actually useful by giving them the right constraints. Learn how to configure filesystem toolsets and enforce structured development workflows in Docker Agent.

Hi, this is Alex from devstories.eu. Docker Masterclass, episode 17 of 18. An AI agent with the most advanced language model is practically useless for development if it cannot read your source code or execute your test suite. Without external capabilities, it just guesses at syntax. Agent toolsets and workflows solve this by bridging the gap between a simple text generator and a working software engineer.

By default, a Docker Agent runs in an isolated container. It has no idea what files exist in your project directory. To change this, you configure the toolsets array in your agent YAML file. Toolsets are pre-packaged capabilities that give the agent direct access to its host environment. For a development agent, you typically inject two primary toolsets: filesystem access and shell access. The filesystem toolset allows the agent to read your directory tree, open source files, and write code back to disk. The shell toolset allows it to run terminal commands. Without the toolsets array, your agent is trapped in a box. With it, your agent has hands and eyes.

However, giving an agent hands and eyes is a recipe for chaos if it lacks discipline. An unstructured agent might change a file, assume it worked, and report success without ever checking for syntax errors. You control this behavior using the instructions block in the YAML file. This block is not a place for vague suggestions. It is where you define a strict operational workflow.

The most reliable way to structure these instructions is to break the agent's tasks into four mandatory phases: Analyze, Examine, Modify, and Validate. You write these directly into the instructions block, telling the agent it must complete one phase before moving to the next. First is Analyze: the agent reads the user prompt to understand the requested feature or bug fix. Next is Examine: you instruct the agent to use its filesystem toolset to search your codebase, find the relevant files, and read their contents to understand the current logic. Third is Modify: the agent writes the updated code to disk. The fourth phase, Validate, is the part that matters most. This is where you force the agent to prove its work using the shell toolset.

Consider an expert Go developer agent. In the Validate section of your instructions, you explicitly mandate that the agent must run go test ./..., followed by golangci-lint run. Because the agent has shell access, it executes those exact commands. If the Go compiler throws a syntax error, or a test fails, the toolset feeds that terminal output directly back to the agent. Because your instructions state that the task is not complete until validation passes, the agent is forced to read the error, loop back to the Modify phase, fix the code, and run the tests again. It repeats this cycle until the linter is happy and the tests pass.

Providing access to the filesystem and shell makes your agent capable of writing software. But structuring its instructions to demand explicit test execution makes your agent reliable. You bind its tools to a strict validation loop so you never have to review broken code. That is all for this one. Thanks for listening, and keep building!
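Here is how those two toolsets and the four-phase workflow might look in the YAML, using the Go agent as the example. The toolset type names and other keys are assumptions; the two validation commands are the real ones named in this episode:

```yaml
# Illustrative agent definition; toolset key names are assumptions.
agents:
  root:
    model: openai/gpt-5      # placeholder model identifier
    description: Expert Go developer agent.
    toolsets:
      - type: filesystem     # read the tree, open files, write code to disk
      - type: shell          # execute terminal commands
    instruction: |
      Complete every task in four mandatory phases, in order:
      1. Analyze: restate the requested feature or bug fix.
      2. Examine: use the filesystem tools to find and read the
         relevant source files before changing anything.
      3. Modify: write the updated code to disk.
      4. Validate: run `go test ./...`, then `golangci-lint run`.
         If either command fails, read the output, return to
         Modify, and repeat. The task is complete only when both pass.
```

The instruction block is what turns raw capabilities into a validation loop: the agent cannot declare success until phase four passes.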
18

AI Models in Compose

3m 02s

Treat your local LLMs just like any other application dependency. Learn how to declare, configure, and bind AI models directly inside your Docker Compose YAML file.

Hi, this is Alex from devstories.eu. Docker Masterclass, episode 18 of 18. Your application depends on a local Large Language Model. Today you probably boot up an external inference engine, configure the network manually, and inject the endpoint URLs by hand. It works, but it breaks the reproducibility of your otherwise isolated environment. Your AI model is just another dependency, and it belongs in your configuration file right next to your database. That is exactly what the top-level models element in Docker Compose achieves.

Starting with Compose version 2.38, models are a native concept. Previously, running a local model meant writing complex service definitions for an inference engine, manually exposing ports, and configuring network bridges so your application container could talk to it. The new models block eliminates that friction by treating the AI model as a distinct piece of infrastructure.

You add a models block at the very top level of your file, at the same indentation as services and volumes. Inside, you name your model. Let us use ai/smollm2 for a simple chat application. Under this name, you declare the actual model identifier to pull. This is also where you define hardware constraints and engine parameters: you can set the context size to restrict memory usage, and if the underlying engine requires specific launch parameters, you define them using runtime flags. The model configuration is isolated and clear.

Next, you bind your application to the model. Inside your services block, you locate your chat app service and add a models array. Using the short syntax, you simply list the model by name. You do not need to configure dependencies manually or set up custom networking aliases.

Here is the key insight. When you use this short binding syntax, Compose takes over the orchestration. It provisions the correct inference engine behind the scenes to serve your specified model. Most importantly, it auto-generates standard environment variables and injects them directly into your chat application container. Your code wakes up with variables like OPENAI_BASE_URL already populated, pointing to the internal endpoint of the model.

You execute a single docker compose up command. Compose pulls the smollm2 model, configures the engine, starts your chat service, and wires up the connection. No manual API keys, no guessing internal IP addresses. Everything routes correctly out of the box.

I encourage you to explore the official documentation and try writing one of these files yourself. Since this wraps up our masterclass series, feel free to visit devstories.eu to suggest topics for whatever we cover next. By elevating AI models to native elements in your configuration, your infrastructure becomes fully declarative, ensuring the exact model version your code expects is always the one that boots up. That is it for today. Thanks for listening, and go build something cool.
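A minimal compose.yaml along the lines this episode walks through might look like this. The context size value and the exact environment variable names Compose generates can vary, so treat the specifics as illustrative:

```yaml
# compose.yaml (Compose v2.38+); values are illustrative
services:
  chat-app:
    build: .
    models:
      - smollm2            # short syntax: Compose provisions the engine
                           # and injects connection env vars automatically

models:
  smollm2:
    model: ai/smollm2      # the model artifact Compose pulls
    context_size: 4096     # cap the context window to limit memory use
```

A single docker compose up then pulls the model, starts the inference engine, and boots the chat service with the connection details already injected.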