Season 25 · 14 Episodes · 50 min · 2026

GitLab CI/CD

2026 Edition. A comprehensive guide to understanding and using GitLab CI/CD for your software deployments, covering everything from the basics of .gitlab-ci.yml to advanced concepts like directed acyclic graphs and multi-project pipelines.

CI/CD DevOps

Episodes

1

The .gitlab-ci.yml Paradigm

3m 47s

Discover the foundational concepts of GitLab CI/CD. This episode covers the .gitlab-ci.yml file, the stage and job architecture, and how sequential execution works by default.

Hi, this is Alex from DEV STORIES DOT EU. GitLab CI/CD, episode 1 of 14. Many engineering teams rely on messy, undocumented release processes that only one person truly understands. The actual path to production remains invisible until something breaks. The dot gitlab dash c i dot yml paradigm changes this by translating your entire build and release lifecycle into a transparent, sequential graph. GitLab CI/CD is the built-in continuous integration and continuous deployment system for GitLab. It is entirely controlled by a single configuration file named dot gitlab dash c i dot yml. You place this file in the root directory of your project repository. Because it is committed alongside your application code, your deployment process is version-controlled, auditable, and accessible to any developer looking at the repository. When you push a commit containing this file, GitLab immediately detects it and triggers a pipeline. A pipeline is the top-level architecture of your CI/CD process. It is composed of two primary components: jobs and stages. Jobs dictate what actually happens. A job contains the specific shell commands or scripts needed to execute a task, such as compiling source code, formatting text, or moving files to a server. Stages dictate when those jobs run. You organize jobs into stages to control the chronological flow of execution. Consider a standard pipeline with three stages. At the top of your YAML configuration file, you declare your stages in the exact order you want them to run: build, test, and deploy. Below that list, you define your individual jobs and map them to those defined stages. You start by writing a job called build dash job and assign it to the build stage. Its script tells the system to compile your application. Next, you write a job called test dash job, assign it to the test stage, and provide the command to run your test suite. Finally, you write a deploy dash prod job, link it to the deploy stage, and give it the instructions to push the compiled application to your production environment. Here is the key insight. GitLab processes these stages strictly in sequence. The pipeline begins with the build stage. The system executes your build job. If that job finishes successfully, the pipeline automatically advances to the test stage and executes the test job. If the tests pass, it moves forward to the deploy stage. This strict ordering acts as a definitive quality gate. If a job fails at any point—for instance, if a unit test fails during the test stage—the entire pipeline halts. The deploy stage will never execute, meaning broken code cannot reach production. Because this logic is declared clearly in the YAML file, the GitLab user interface translates it into a visual pipeline graph. Anyone on your team can view a commit, look at the graph, and instantly understand exactly where the code is in the pipeline, which stage succeeded, and precisely where a failure occurred. The core strength of this paradigm is centralization. By defining stages and jobs in one root file, your deployment sequence stops being a mystery and becomes a readable, repeatable process that lives in the exact same place as your code. If you would like to help support the show, you can search for DevStoriesEU on Patreon. That is all for this one. Thanks for listening, and keep building!
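For reference, here is a minimal .gitlab-ci.yml sketch of the three-stage pipeline described in this episode. The job names follow the transcript; the script commands are illustrative placeholders.

```yaml
# Stages run strictly in this order; a failure halts the pipeline.
stages:
  - build
  - test
  - deploy

build-job:
  stage: build
  script:
    - echo "Compiling the application..."   # placeholder build command

test-job:
  stage: test
  script:
    - echo "Running the test suite..."      # placeholder test command

deploy-prod:
  stage: deploy
  script:
    - echo "Pushing to production..."       # placeholder deploy command
```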
2

Runners and Executors

3m 45s

Learn about GitLab Runners, the execution engines behind your CI/CD pipelines. We explore the difference between GitLab-hosted and self-managed runners, and how executors define the job environment.

Hi, this is Alex from DEV STORIES DOT EU. GitLab CI/CD, episode 2 of 14. You spend hours writing a perfect pipeline configuration file, commit it, and wait. But nothing happens. Your pipeline is completely useless without a computational engine waiting to pick up those instructions. This is where GitLab Runners and Executors come in. There is a common misconception that the GitLab application itself runs your build scripts. It does not. GitLab manages the repository and tracks the status of your pipelines, but it strictly delegates the actual execution. A GitLab Runner is a separate application that operates as an agent. It continually polls the GitLab instance, asking if there are any pending jobs it is authorized to handle. When it finds one, it pulls the job payload, executes the commands, and sends the logs and results back to GitLab. You have two main options for sourcing these agents. The easiest path is using GitLab-hosted runners. These are managed for you on GitLab SaaS and cover common environments like Linux, macOS, and Windows. Often, though, you need a custom setup. Maybe your build requires specialized hardware, like a specific GPU, or needs access to a private internal network. In that case, you use self-managed runners. You install the runner application on your own infrastructure. You can scope these self-managed runners broadly across an entire GitLab instance, share them across a group of related projects, or lock them down to a single specific project. Once a runner picks up a job, it needs to know exactly how and where to run the commands. This is defined by the executor. The executor dictates the specific execution environment for the job. Two of the most common types are the shell executor and the Docker executor. The shell executor is straightforward. It runs the job directly on the host machine's operating system using a standard terminal shell, like Bash or PowerShell. Because of this, all dependencies must be pre-installed on that host machine. The Docker executor operates differently. It spins up a fresh, isolated container for every single job, runs the scripts inside it, and tears it down afterward. This guarantees a completely clean environment every time. Let us walk through a concrete scenario by registering a self-managed project runner locally using a shell executor. First, you go into your specific project settings in GitLab and create a new runner. GitLab will generate a unique authentication token. Next, you install the GitLab Runner application on your local machine. From your local terminal, you run the register command. The prompt will ask for two main pieces of information. It needs the URL of your GitLab instance and the authentication token you just generated. This step securely links your local machine to that specific project. Finally, the setup will ask you to choose an executor. You type shell. From that moment on, whenever a job triggers in that GitLab project, your local machine will pull it and execute the commands directly in its local terminal environment. Here is the key insight. GitLab is the orchestrator, but the Runner and its Executor form the actual factory floor. By decoupling the management of jobs from the execution of jobs, you gain the flexibility to run pipelines on anything from a shared cloud container to a bare-metal server in a locked basement. Thanks for listening, happy coding everyone!
3

Anatomy of a CI/CD Job

3m 36s

Dive into the fundamental building block of pipelines: the job. This episode explains job scripts, default keywords, and how to organize the jobs of complex pipelines.

Hi, this is Alex from DEV STORIES DOT EU. GitLab CI/CD, episode 3 of 14. You have a pipeline with twenty jobs, and every single one starts by running the exact same setup command. When that setup process changes, you now have twenty different places to update, and missing even one will break your build. Resolving that repetition starts with understanding the anatomy of a CI/CD job. A job is the fundamental unit of execution in GitLab CI/CD. It is defined by a name and must contain at least one command to run. This main execution block is defined using the script keyword. The script represents an array of commands executed in sequence by the runner. If one command fails, the job stops immediately and is marked as failed. Jobs rarely run in a vacuum. They often need the environment prepared before they run, and sometimes they require cleanup afterward. This is where the before script and after script keywords come in. Consider a Ruby project. Before running your tests or executing a linter, you need to install your dependencies. You place your bundle install command inside the before script block. The runner executes this before script first. If it succeeds, the main script runs. Once the main script finishes, the after script executes. Here is the key insight. The after script runs even if the main script fails. This makes it the correct place for closing network connections, wiping temporary credentials, or cleaning up test databases. Writing that same bundle install command inside every individual test job gets repetitive fast. To fix this, you use the default keyword at the top level of your configuration file. Any configuration defined under default is automatically inherited by all jobs in the pipeline. You declare your bundle install before script once inside the default block. Now, every job runs it automatically. If one specific job does not need it, you define an empty before script inside that specific job. That local definition overrides the global default. Sometimes you need a job definition in your file, but you do not want it to actually run. You might be writing a base template that other jobs will inherit from, or you might want to temporarily disable a flaky test without deleting the code entirely. You do this by hiding the job. Simply add a period at the very start of the job name. When GitLab parses the pipeline configuration, it completely ignores any job name starting with a dot. The job will not show up in the user interface, and it will not execute. As your pipeline grows, the web interface can become cluttered with dozens of individual jobs. If you have several closely related jobs, you can group them together visually in the pipeline graph. You accomplish this by naming the jobs with a shared prefix, followed by a slash or a colon. For example, if you name three separate jobs build slash ruby one, build slash ruby two, and build slash ruby three, the interface will collapse them into a single dropdown group simply labeled build. Clicking the group expands it to show the individual jobs inside. This changes nothing about how the jobs execute on the runners, but it makes a massive pipeline much easier to read at a glance. A well-structured pipeline separates the setup from the execution, uses defaults to eliminate duplicated code, and relies on naming conventions to keep the visual interface strictly focused on what matters. That is all for this one. Thanks for listening, and keep building!
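A sketch of the job anatomy covered here, assuming a Ruby project; the script contents are illustrative placeholders.

```yaml
default:
  before_script:
    - bundle install            # inherited by every job in the pipeline

rspec-job:
  script:
    - bundle exec rspec
  after_script:
    - echo "cleanup"            # runs even if the main script fails

lint-job:
  before_script: []             # local override: skips the inherited setup
  script:
    - bundle exec rubocop

.base-template:                 # leading dot hides the job; it never runs
  script:
    - echo "placeholder"

"build 1/2":                    # names differing only by a trailing counter
  script: echo "variant one"    # are collapsed into one group in the UI

"build 2/2":
  script: echo "variant two"
```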
4

CI/CD Variables and Secrets

3m 59s

Explore how to manage configuration and sensitive data in GitLab CI/CD using variables. Learn the differences between predefined variables, custom UI variables, and file-type variables.

Hi, this is Alex from DEV STORIES DOT EU. GitLab CI/CD, episode 4 of 14. Hardcoding an API key directly into your pipeline configuration is the absolute fastest way to compromise your entire production environment. You need a way to pass configuration and credentials into your jobs dynamically, without exposing them to anyone who can read your repository. CI/CD variables and secrets are the mechanisms that handle this. Whenever a GitLab runner picks up a job, it does not start with an empty state. GitLab automatically injects predefined variables into the environment. These give your script immediate context. You have access to variables like C I commit branch, the pipeline I D, and the project name. You do not define these. They are simply there, ready to use in your scripts to route logic or tag build artifacts. Beyond the predefined context, you will need to supply your own custom variables. You can define these in two places: inside your yaml configuration file, or in the GitLab user interface. The rule for choosing where to put them is straightforward. If the value is safe to read, like a compiler flag or a development server URL, put it in the yaml file. The configuration stays with the code. If the value is sensitive, like a database password, define it in the GitLab project UI. When you place a secret in the UI, you have to configure its security boundaries. People frequently mix up masking a variable and protecting a variable. They are completely different concepts. Masking a variable prevents its value from appearing in the job logs. If you mask a database password, and a poorly written script tries to print it to the console, GitLab intercepts the output stream. It replaces the actual text of the password with a string of asterisks before the log is ever saved. Masking controls log visibility. Here is the key insight. Masking does nothing to stop a developer from writing a script that sends the password to an external server. That is where protection comes in. Protecting a variable limits its availability. A protected variable is only injected into pipelines that run on protected branches or protected tags. If someone opens a merge request from a standard feature branch, the pipeline they trigger simply will not contain that variable. This prevents untrusted code from accessing production secrets. Additionally, you can use hidden settings in the UI for extreme sensitivity. Once a variable is saved, its value is obscured in the interface. Even project maintainers cannot easily retrieve the raw text later, meaning someone with access cannot just casually scrape the settings page to steal all your tokens. Now, consider how the variable gets to your application. Most variables are injected as standard environment variables. But some tools refuse to read environment variables and insist on reading from a physical file. The AWS command line interface, for example, often expects a formatted credentials file residing on disk. Instead of writing a pipeline script that creates a file, dumps the variable text into it, and then tries to securely delete it later, you can use a File type variable. When you configure a variable as a File type in the UI, the runner handles the logistics automatically. When the job starts, the runner takes the text value, writes it securely to a temporary file on the runner disk, and sets the environment variable to contain the file path, not the file contents. You simply point your AWS tool at the path provided by the variable. 
When the job finishes, the runner destroys the file automatically. Securing your pipeline is about minimizing exposure at every step. Do not rely on log masking to protect against malicious code, and do not write temporary credential files yourself when the runner can handle the lifecycle safely. That is all for this one. Thanks for listening, and keep building!
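A hedged sketch of the variable types discussed. API_BASE_URL is a safe value kept in YAML; AWS_CREDENTIALS is assumed to be a File-type variable defined in the project UI (alongside any masked, protected secrets), so it does not appear in this file. AWS_SHARED_CREDENTIALS_FILE is the standard AWS CLI setting that points the tool at a credentials file path.

```yaml
variables:
  API_BASE_URL: "https://dev.example.com"   # non-sensitive: safe in YAML

deploy-job:
  variables:
    # AWS_CREDENTIALS is assumed to be a File-type UI variable: at runtime
    # it contains a temporary file path, not the credential text itself.
    AWS_SHARED_CREDENTIALS_FILE: $AWS_CREDENTIALS
  script:
    - echo "Deploying $CI_PROJECT_NAME from $CI_COMMIT_BRANCH"
    - aws s3 ls                 # the AWS CLI reads credentials from the file
```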
5

Artifacts vs Caches

3m 39s

Understand the critical difference between artifacts and caches in GitLab CI/CD. Learn when to use each to pass data between stages or speed up your pipeline execution.

Hi, this is Alex from DEV STORIES DOT EU. GitLab CI/CD, episode 5 of 14. Your build job completes perfectly, but your deployment job mysteriously fails with a file not found error. You check the runner, and the files were definitely generated just a few minutes ago. The problem usually boils down to a fundamental misunderstanding between two keywords: artifacts and cache. These two concepts are frequently confused because they both involve saving files on a runner, but they exist for entirely different reasons. The cache keyword defines a list of files and directories intended to speed up your pipeline. It stores downloaded dependencies between different pipeline runs. Take a standard Node project as an example. Your build job requires thousands of packages in the node modules folder. Instead of downloading them from the public registry every single time a developer pushes code, you configure a cache. You give it a key based on your package lock file and point it to the node modules directory. On subsequent runs, GitLab restores those files locally. Here is the key insight. Caching is strictly an optimization. If the cache is purged, expires, or fails to extract, your job must still be able to run and succeed by downloading the packages from scratch. A missing cache just means a slower pipeline, not a broken one. Your job logic should never depend on a cache being present. Artifacts serve a completely different architectural purpose. The artifacts keyword specifies files and directories generated by a job that must be passed to subsequent jobs within the same pipeline run. They are not an optimization. They are a structural requirement for your CI flow. Returning to the Node project, your build job compiles your source code into a final output directory called dist. Your deployment job runs in a later stage and needs that exact dist folder to push to your production server. Because jobs run in isolated environments, the deploy job cannot simply reach into the build job workspace. You bridge this gap using artifacts. You define the dist folder under the artifacts keyword in your build job. Upon job success, GitLab takes those files, packages them, and attaches them to the pipeline. When the downstream deployment job starts, GitLab automatically downloads that artifact package and extracts it into the working directory. If an artifact is missing, the downstream job will fail because the required files literally do not exist in the workspace. Unlike a cache, you cannot simply download an artifact from the internet if it is missing. It contains the unique, intermediate state of your current pipeline run. In a properly configured YAML file, you often use both in the same build job. You set the cache to target your dependencies folder to save time. You set the artifacts to target your compiled output folder to pass data forward. The deployment job later in the pipeline does not need the cache at all. It only requires the built code, which it receives seamlessly because artifacts are downloaded by default in all subsequent stages. When you configure your next pipeline, remember this rule. Use the cache for external dependencies you pull down to save time, and use artifacts for the internal build results you must pass forward to complete the deployment. That is it for today. Thanks for listening — go build something cool.
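A sketch contrasting the two keywords for the Node project described above; the deploy script name is hypothetical.

```yaml
build-job:
  stage: build
  cache:
    key:
      files:
        - package-lock.json     # cache key derived from the lockfile
    paths:
      - node_modules/           # optimization only: safe to lose
  script:
    - npm ci
    - npm run build             # emits the compiled app into dist/
  artifacts:
    paths:
      - dist/                   # structural hand-off to later stages

deploy-job:
  stage: deploy
  script:
    - ./deploy.sh dist/         # hypothetical script; dist/ arrives via the artifact
```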
6

Controlling Execution with Rules

3m 18s

Discover how to dynamically control when jobs are added to your pipeline using the rules keyword. Learn to use conditions, variables, and file changes to optimize execution.

Hi, this is Alex from DEV STORIES DOT EU. GitLab CI/CD, episode 6 of 14. You just committed a typo fix to a markdown file, and suddenly your pipeline kicks off a twenty-minute end-to-end test suite. That is a massive waste of both compute minutes and your patience. To stop running heavy jobs when they are not needed, you control execution using the rules keyword. The rules keyword determines exactly when a job is added to a pipeline. If you have been around GitLab for a while, you might remember using only and except to filter jobs. Those are legacy keywords. Rules is the modern, more powerful replacement. Under the hood, rules takes a list of conditions. GitLab evaluates this list from top to bottom. As soon as it finds a match, it stops looking and either adds the job or skips it, depending on how you configured that specific rule. There are three main conditions you can evaluate. The first is if. The if condition evaluates pipeline variables using a simple logical expression. You can tell a deployment job to run only if the commit branch matches the default branch. If the variable evaluation returns true, the rule matches. Now, the second condition checks file modifications using the changes keyword. This evaluates whether the current push modified any files matching a specific path or wildcard. This is where you save real money. Consider a heavy JavaScript linter job. You do not want this linter chewing up CPU cycles if a backend developer only touched database configuration files. You add a rule using changes, and specify the wildcard for dot js files. If the commit includes changes to a javascript file, the rule matches and the linter runs. If no javascript files were touched, the job is entirely excluded from the pipeline. The third condition is exists. Instead of checking variables or recent modifications, exists simply checks if a specific file is present in the repository at that moment. You might have a generic pipeline template used by multiple projects. You can define a container build job with an exists rule pointing to a Dockerfile. If the project has a Dockerfile in its root directory, the job runs. If it does not, the job is skipped entirely. Here is the key insight. Finding a match does not automatically mean the job runs. When a rule evaluates to true, it applies a secondary instruction using the when attribute. By default, a matching rule assumes the job should be added to the pipeline. But you can explicitly define a rule with when set to never. This is highly effective for blocklisting. You can put a rule at the top of your list that says if the commit message contains the word draft, set when to never. Since rules are evaluated top-to-bottom, that job is instantly killed before any other conditions are checked. If you remember one thing about controlling execution, it is that order dictates outcome. Place your exclusion rules at the very top of your list, because the moment GitLab finds a true condition, it stops reading and locks in the decision. I would like to take a moment to thank you for listening — it helps us a lot. Have a great one!
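A sketch of the three rule conditions in one file. Evaluation is top to bottom, so the exclusion rule sits first; the script names are illustrative.

```yaml
lint-js:
  script: npx eslint .
  rules:
    - if: $CI_COMMIT_MESSAGE =~ /draft/
      when: never               # matched first: job is excluded immediately
    - changes:
        - "**/*.js"             # run only when JavaScript files changed

build-image:
  script: docker build -t app .
  rules:
    - exists:
        - Dockerfile            # skip projects without a Dockerfile

deploy-prod:
  script: ./deploy.sh           # hypothetical deploy script
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
```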
7

Directed Acyclic Graphs with Needs

3m 35s

Break free from strict sequential stages. This episode explains how to use the needs keyword to create Directed Acyclic Graphs (DAGs) and dramatically speed up pipeline execution.

Hi, this is Alex from DEV STORIES DOT EU. GitLab CI/CD, episode 7 of 14. Your pipeline is not actually slow. It is just artificially bottlenecked. You likely have fast jobs waiting around for slow jobs that have absolutely nothing to do with them, simply because they sit in the same column on your screen. To fix this, you drop the rigid columns and build Directed Acyclic Graphs with Needs. By default, standard pipelines execute sequentially by stage. You define stages like build, test, and deploy. Every job in the test stage must completely finish before any job in the deploy stage can start. If your test stage has five jobs, and four of them finish in two minutes while one takes ten minutes, the entire deploy stage is blocked until minute ten. The execution barrier between stages is absolute. The needs keyword breaks this barrier. It allows you to define explicit relationships between jobs, shifting your pipeline from a strict sequence to a Directed Acyclic Graph. When you use the needs keyword in a job definition, you tell the system exactly which previous jobs must finish before this one starts. The moment those specific dependencies succeed, your job triggers. It stops waiting for the rest of the stage to clear. Consider a monorepo containing both a frontend and a backend. Your backend build and test jobs are fast, taking about two minutes. Your frontend build is heavy, taking ten minutes. In a traditional pipeline, the deploy stage cannot trigger until both the frontend and backend tests finish. Your backend deployment is effectively held hostage by the frontend build. Here is the key insight. You can add the needs keyword to your backend deploy job and list only the backend test job as its dependency. Now, the execution logic changes. The backend test job finishes at minute two. The backend deploy job sees that its explicit dependency is met and starts immediately. It completely ignores the fact that the frontend build is still running for another eight minutes. The stages still exist for visual organization in the user interface, but the actual execution order is now dictated by the graph you built. To set this up, you add the needs keyword to a job and provide an array of exact job names. There is a secondary benefit here regarding data transfer. Normally, a job downloads artifacts from all successful jobs in previous stages. When you use needs, artifact downloading becomes targeted. Your job will only download artifacts from the specific jobs listed in the needs array. This prevents your backend deploy job from downloading massive, irrelevant frontend assets, saving even more time during job initialization. If you require a job to start immediately when the pipeline is created, bypassing all stage delays, you can pass an empty array to the needs keyword. This tells the system the job has zero dependencies, pushing it to execution at second zero. The true value of a Directed Acyclic Graph is decoupling independent workflows within a single pipeline. You stop organizing jobs by when they should run, and start organizing them purely by what inputs they require to execute. Thanks for listening. Take care, everyone.
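A sketch of the monorepo scenario from this episode; the timings in the comments mirror the transcript, and the script names are placeholders.

```yaml
stages: [build, test, deploy]

backend-build:
  stage: build
  script: ./build-backend.sh    # fast

frontend-build:
  stage: build
  script: ./build-frontend.sh   # slow, ~10 minutes

backend-test:
  stage: test
  needs: [backend-build]        # starts the moment backend-build succeeds
  script: ./test-backend.sh

deploy-backend:
  stage: deploy
  needs: [backend-test]         # ignores the still-running frontend build;
  script: ./deploy-backend.sh   # only artifacts from listed jobs are fetched

notify-start:
  stage: deploy
  needs: []                     # zero dependencies: starts at second zero
  script: ./notify.sh
```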
8

Merge Request Pipelines

3m 26s

Learn how to configure pipelines that only run in the context of a merge request. We cover pipeline sources and security considerations for handling community forks.

Hi, this is Alex from DEV STORIES DOT EU. GitLab CI/CD, episode 8 of 14. You maintain an open-source project. A stranger forks it, hides a script that exports environment variables inside a unit test, and opens a contribution. If your system automatically runs that code on your private servers, your secrets are gone. To prevent this, you need a strict boundary between simply pushing code and executing it in a privileged environment. That boundary is handled by Merge Request Pipelines. Normally, GitLab triggers a pipeline every time you push a commit to any branch. A merge request pipeline behaves differently. It is a specific type of pipeline configured to run on the contents of the source branch, but only within the context of an open merge request. This context gives you access to specific environment variables related to the merge itself, like the target branch name or the merge request identifier. You tell a job to run as a merge request pipeline using the rules section in your configuration file. You write a rule evaluating the pipeline source variable. You check if the CI pipeline source equals the exact text merge request event, written with underscores between the words. When you apply this rule, the job will only run when a merge request is created or updated. It is very common to pair this with rules that prevent the job from running on standard branch pushes. If you do not separate them, pushing a commit to an open merge request will trigger two pipelines at the exact same time doing the exact same work. This brings us to the security implications of external contributions. When someone forks your repository, they create a completely isolated copy. If they open a merge request from their fork back to your parent project, any pipeline that runs automatically executes inside their fork. It uses their runners and their variables. This is by design. Your parent project secrets are safe because the contributor code has no access to your infrastructure. But eventually, you need to verify that their code passes your official test suite using your own database credentials and deployment targets. GitLab allows you to run pipelines for these fork merge requests within the parent project, but it requires a deliberate human action. A developer or maintainer in the parent project must manually trigger the execution. In a proper workflow, the maintainer reads the submitted code from the fork first. They must look for anything malicious, destructive, or poorly constructed. Only when the maintainer is absolutely certain the code is safe do they click the button to trigger the pipeline. Once triggered, that external code executes on the parent project runners and has access to the parent project secrets. The manual trigger acts as a physical security gate. This is the part that matters. Merge request pipelines are not just a way to group jobs in the user interface, they are a fundamental control mechanism that lets you evaluate external code without blindly exposing your internal infrastructure. If you want to help keep the lights on here, you can support the show by searching for DevStoriesEU on Patreon. That is all for this one. Thanks for listening, and keep building!
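A sketch of the rules pairing described above, which keeps merge request pipelines and branch pipelines from duplicating each other; script names are illustrative.

```yaml
mr-tests:
  script: ./run-tests.sh        # hypothetical test entry point
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"

branch-build:
  script: ./build.sh
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
      when: never               # never run this job in an MR context
    - if: $CI_COMMIT_BRANCH     # run on plain branch pushes
```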
9

Downstream Pipelines

3m 36s

Master pipeline triggers to orchestrate complex architectures. This episode breaks down the differences between Parent-Child pipelines for monorepos and Multi-project pipelines for microservices.

Hi, this is Alex from DEV STORIES DOT EU. GitLab CI/CD, episode 9 of 14. You have a single YAML file with five hundred jobs. Changing one line feels like defusing a bomb, and execution takes an hour because everything runs in one massive block. Downstream pipelines solve this by splitting that monolith into modular, independent workflows. A downstream pipeline is simply any GitLab CI/CD pipeline triggered by another pipeline. The pipeline doing the triggering is called the upstream pipeline. Instead of running every job in a single sequence, the upstream pipeline acts as a coordinator, delegating work to smaller, focused pipelines. GitLab divides downstream pipelines into two specific types based on where they execute. The first type is a parent-child pipeline. This happens entirely within the same project. If you have a monorepo, a parent-child setup is exactly what you need. The parent pipeline detects what changed and triggers only the relevant child configuration. The second type is a multi-project pipeline. This occurs when a pipeline in one repository triggers a pipeline in a completely different GitLab project. You use this for architectures spread across multiple repositories, like triggering an integration testing project only after a standalone API project pipeline succeeds. You configure both types using a specific job keyword called trigger. A trigger job is fundamentally different from a standard job. It never contains a script section. It does not execute commands on a runner. Its only purpose is to start the downstream pipeline. For a multi-project pipeline, you pass the trigger keyword the path to the target project. For a parent-child pipeline, you use the trigger keyword combined with the include keyword, pointing to a different YAML file located in the same repository. Think of a routing parent pipeline managing three separate microservices stored in a monorepo. The parent pipeline evaluates the commit and triggers three parallel child pipelines. One handles the user service, one handles the payment service, and one handles the inventory service. Each child pipeline has its own stages, jobs, and rules. They execute independently. By default, a trigger job fires and forgets. The upstream job starts the downstream pipeline and immediately succeeds. If you need the upstream pipeline to wait and mirror the status of the downstream pipeline, you add the strategy depend parameter to the trigger job. This forces the parent pipeline to wait, meaning a failure in the child pipeline will bubble up and mark the parent job as failed. Here is the key insight. Engineers often configure parent-child pipelines and immediately assume they are broken because the child pipelines do not appear on the main pipelines index page. This is an intentional design choice. The main index only shows the parent pipelines to prevent clutter. To view the child pipelines, you must click into the parent pipeline detail view. They are nested under the specific job that triggered them. Multi-project pipelines, on the other hand, do appear in the pipeline index of their respective target projects, because they are top-level pipelines in those repositories. Moving to downstream pipelines forces you to treat your CI/CD configuration like actual software architecture, replacing one brittle script with distinct components that have isolated failure domains. That is your lot for this one. Catch you next time!
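A sketch of both trigger styles; the child file path and the target project path are hypothetical.

```yaml
# Parent-child: the child configuration lives in this same repository.
user-service:
  trigger:
    include: services/user/pipeline.yml
    strategy: depend            # parent job mirrors the child's final status

# Multi-project: starts a top-level pipeline in another repository.
integration-tests:
  trigger:
    project: my-group/integration-testing
    branch: main
```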
10

Environments and Deployments

3m 08s

Bring visibility to your deployments with GitLab Environments. Learn how to map CI/CD jobs to specific targets like staging and production, and track what code lives where.

Hi, this is Alex from DEV STORIES DOT EU. GitLab CI/CD, episode 10 of 14. If your team relies on asking in Slack to know who just pushed to the staging server, you have a visibility gap in your workflow. You need a definitive, automated way to track exactly what code is running where at any given second, and that is exactly the problem solved by GitLab Environments and Deployments. In GitLab, an environment is a logical tracking entity. GitLab does not automatically provision your servers or spin up your cloud infrastructure just because you created an environment. The environment is simply a label. It tells GitLab that a specific job in your pipeline is responsible for pushing code into a specific destination, like staging or production. You set this up by adding the environment keyword directly inside your deployment job. You provide a static name for the destination. For example, you might create a job called deploy to staging, and inside that job, you set the environment name to staging. The script section of your job still handles the actual work. It runs the commands that copy files, apply configurations, or restart remote services. Consider the scenario where you are deploying a new release candidate. You merge your code and the pipeline starts. It builds the application, runs your test suite, and reaches the staging deployment job. Because you attached the environment keyword to this job, GitLab changes how it treats the execution. It watches the job output closely. When your deployment script finishes and the job exits successfully, GitLab registers a formal deployment event for the staging environment. Here is the key insight. Using that single keyword activates an entire suite of deployment tracking features within the GitLab interface. If you navigate to the Operate menu on the sidebar and select Environments, you are presented with a dashboard showing the real-time status of your deployment targets. For your staging environment, you no longer have to guess what is running. The dashboard displays the exact commit hash currently active, the branch that commit came from, the author of the code, and how long ago the deployment finished. It creates an immutable, shared record of the server state. Everyone on the project has the exact same visibility without needing to check server logs or ask colleagues. Clicking into the staging environment reveals the full deployment history. This historical record is what enables manual rollbacks directly from the user interface. If a new release candidate completely breaks the staging server, fixing it does not require writing a revert commit or manually constructing an older pipeline. You open the history list, identify the last known successful deployment, and click the rollback button next to it. GitLab immediately re-runs the deployment job from that older, stable commit. It replaces the broken code and restores your staging environment to a working state fast. By using the environment keyword, you bridge the gap between running scripts and tracking actual deployments. It turns an isolated pipeline job into a clear, visible record of what software version currently lives on your infrastructure. I would like to take a moment to thank you for listening — it helps us a lot. Have a great one!
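A minimal sketch of a tracked deployment job; the script is a placeholder for whatever actually pushes the code, and the URL is illustrative.

```yaml
deploy-to-staging:
  stage: deploy
  script:
    - ./deploy.sh staging       # hypothetical deployment script
  environment:
    name: staging               # the label GitLab uses to record deployments
    url: https://staging.example.com
```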
11

Dynamic Environments and Review Apps

3m 34s

Spin up temporary infrastructure for every merge request. This episode dives into dynamic environments, capturing generated URLs, and cleaning up resources with on_stop jobs.

Hi, this is Alex from DEV STORIES DOT EU. GitLab CI/CD, episode 11 of 14. Handing your product manager a live, clickable URL for every single feature branch completely eliminates the argument that it works on your machine. This workflow relies entirely on Dynamic Environments and Review Apps. Standard environments like staging or production are static. You define them once and they persist. Dynamic environments are created on the fly. You spin them up to test a specific branch, use them, and then throw them away. When you use a dynamic environment specifically to preview code changes inside a merge request, GitLab calls this a Review App. To create a dynamic environment, you cannot hardcode its name in your configuration file. Instead, you use pipeline variables. The most critical variable for this is the CI commit ref slug. This variable takes your branch name, converts it to lowercase, removes special characters, and shortens it. It guarantees you have a DNS-safe string to use for naming both your environment in GitLab and your actual infrastructure resources. By defining your environment name as the word review followed by a slash and this slug variable, GitLab automatically generates a separate, tracked environment for every single branch you push. Here is the key insight. Creating the environment record in GitLab is only half the battle. You also need to route reviewers to the actual deployed code. Let us say you are spinning up a temporary AWS Lambda instance for your feature branch. When your deployment script runs, AWS generates a random URL for that new Lambda function. You do not know this URL beforehand. You need a way to pass that dynamically generated address back to the GitLab user interface so reviewers can click it. You solve this using a specific artifact type called a dotenv report. Inside your deployment job, after AWS provisions the Lambda function and returns the endpoint, your script writes that URL into a simple text file formatted as a key-value pair. You configure your job to output this file as a dotenv report artifact. GitLab reads this file at the end of the job and exposes the variable. You then configure the environment URL parameter in your pipeline definition to read that exact variable. Because of this connection, your merge request will now display a View App button that routes users directly to that specific AWS Lambda endpoint. Temporary infrastructure costs money. Leaving hundreds of stale Lambda functions running will drain your budget quickly. You need an automated way to clean up. You handle this using the on stop keyword. In your deployment job, you add the on stop property and give it the exact name of a different job in your pipeline. This second job contains your infrastructure teardown script. By linking them this way, GitLab takes over the lifecycle management. When a developer merges the feature branch, or deletes the branch, GitLab automatically executes that teardown job. The infrastructure is destroyed immediately. The true value of Review Apps is not just previewing code, but automating the entire infrastructure lifecycle. You dynamically provision, seamlessly link, and reliably destroy temporary environments without a single developer ever touching a hosting console. That is all for this one. Thanks for listening, and keep building!
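A sketch of the full Review App lifecycle from this episode. The provisioning and teardown scripts are hypothetical; the dotenv report is what carries the generated URL back to GitLab.

```yaml
deploy-review:
  stage: deploy
  script:
    - ./provision.sh                          # hypothetical: creates the Lambda
    - echo "DYNAMIC_URL=$(./get-url.sh)" >> deploy.env
  artifacts:
    reports:
      dotenv: deploy.env                      # exposes DYNAMIC_URL to GitLab
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    url: $DYNAMIC_URL                         # powers the View App button
    on_stop: stop-review

stop-review:
  stage: deploy
  when: manual                                # GitLab also runs it on merge or branch delete
  script:
    - ./teardown.sh                           # hypothetical teardown script
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    action: stop
```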
12

DRY Configurations with Includes

3m 53s

Keep your CI/CD configuration DRY (Don't Repeat Yourself). Discover how to use the include keyword to modularize your pipeline configuration across multiple files and projects.

Hi, this is Alex from DEV STORIES DOT EU. GitLab CI/CD, episode 12 of 14. Copying and pasting the exact same deployment script into fifty different repositories guarantees one thing: when that script inevitably needs to change, you will forget to update at least one repository. The solution is writing DRY configurations using the include keyword. In GitLab CI, the include keyword lets you split your pipeline configuration into multiple smaller, reusable files. Instead of maintaining one massive YAML file, you build a modular system. When a pipeline triggers, GitLab pauses, fetches all the included files, merges them together, and then evaluates the combined configuration as a whole. There are three primary subkeys you use to pull in these external files. The simplest is local. You use include local to pull in a file that lives in the exact same repository as your main pipeline file. This is strictly for organizing a single large project. You can carve out all your testing logic into a file called test pipeline dot yml, put your deployment logic in deploy pipeline dot yml, and keep your root configuration file clean by simply referencing those local paths. The second subkey is project, and this is where pipeline modularity scales across an organization. Include project pulls a YAML file from a completely different repository hosted on the same GitLab instance. You specify the path of the external project, along with the file path and optionally the specific branch or commit reference you want to pull. Consider a platform team that maintains a central security scanning pipeline. Instead of fifty different microservice teams writing and maintaining their own security jobs, the platform team maintains a single authoritative YAML template in a dedicated platform project. The fifty microservice projects simply add an include project block pointing to that central repository. When the platform team updates the scanner version or tweaks the security rules in their template, all fifty microservices automatically run the updated checks on their next pipeline execution. No repetitive copy and pasting is required. The third subkey is remote. Include remote takes a full HTTPS URL and fetches a configuration file from any public web server. You might use this to pull in vendor supplied pipeline definitions or open source community standards. The only strict requirement is that the URL must be publicly accessible via a standard web request without authentication. Now, pay attention to this bit. You will often encounter a situation where an included file defines a job, but the local project needs to modify it slightly. GitLab handles this through its merge behavior. When files are included, GitLab performs a deep merge of the configurations. If an included file defines a job called run security scan, and your main file also defines a job called run security scan, the configuration in your main file takes precedence. This means you do not have to discard a centralized template just because your specific project needs a minor tweak. You can include the platform team template, and then locally define just the run security scan job with an updated variable or a custom script addition. Your local overrides apply, while the rest of the job definition remains exactly as the platform team wrote it. The true power of pipeline modularity is not just centralizing code, but designing templates as sensible defaults that downstream projects can locally override without breaking the inheritance chain. 
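A sketch of the three include subkeys plus a local override; the project path, file names, and URL are hypothetical.

```yaml
include:
  - local: ci/test-pipeline.yml               # same repository
  - project: platform/ci-templates            # central repo on the same instance
    ref: main
    file: /templates/security-scan.yml
  - remote: https://example.com/templates/vendor-pipeline.yml

# Deep merge: this local definition takes precedence, field by field, over
# the job of the same name defined in the included template.
run security scan:
  variables:
    SCAN_LEVEL: strict
```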
That is all for this one. Thanks for listening, and keep building!
13

CI/CD Components and the Catalog

3m 43s

Explore the modern evolution of pipeline reusability: CI/CD Components. Learn how to create component projects, use semantic versioning, and leverage the GitLab CI/CD Catalog.

Hi, this is Alex from DEV STORIES DOT EU. GitLab CI/CD, episode 13 of 14. Your pipeline just failed because someone updated a shared YAML snippet three repositories over. You had no warning, and now you are spending an hour fixing a job you did not even write. The days of relying on fragile, unversioned YAML snippets are over. CI/CD Components and the CI/CD Catalog fix this by bringing package manager reliability directly to your pipelines. Components are reusable, single-purpose pipeline configurations. They act as the modern evolution of the old include template method. With legacy templates, you essentially imported raw YAML over the network. If the upstream file changed, your pipeline changed instantly, often with disastrous results. Components solve this by enforcing strict versioning. The CI/CD Catalog serves as the centralized registry where your organization can publish, discover, and share these versioned components. To build a component, you need a repository with a very specific file structure. The core logic must live inside a directory explicitly named templates. You can place multiple YAML files inside this directory, with each file representing a distinct component. At the root of the repository, you must also provide a readme dot md file. This markdown file is not just a polite suggestion. It acts as the official documentation displayed in the Catalog, detailing what the component does and what parameters it requires. Here is the key insight. Components are too rigid to be useful without inputs. Inputs act exactly like function arguments for your pipeline configuration. When you write the component YAML, you declare a block at the top that defines the accepted inputs. You specify their names, their default values, and whether they are required. Consider a platform engineering team rolling out mandatory security scans. They create a component project called corporate-security. Inside the templates directory, they write a file specifically for secret detection. To keep it flexible, they define a single required input called stage. Application developers across the company no longer need to write or maintain secret detection jobs themselves. To consume that exact secret detection component, a developer uses the include component syntax in their pipeline configuration. They specify the path to the component on the server. Then, they append an at-symbol followed by a semantic version, such as one dot zero dot zero. This is the crucial step. Pinning the semantic version guarantees the pipeline will never break unexpectedly, even if the platform team releases a heavily modified version two point zero point zero later that week. If a developer intentionally wants the bleeding edge, they can append the special tag tilde latest instead of a version number, but semantic versioning is the safer default. Just below the include declaration, the developer passes their variables, mapping the stage input to whichever pipeline stage fits their specific project, like test or pre-build. Treating pipeline logic as versioned software changes how teams scale infrastructure. The real power of components is not just code reuse, it is the absolute guarantee that a pipeline running successfully today will run exactly the same way six months from now. Thanks for listening. Take care, everyone.
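A sketch of consuming the secret detection component described above. The server address comes from the predefined CI_SERVER_FQDN variable; the project path and version are hypothetical, and the component itself would live at templates/secret-detection.yml inside the corporate-security project.

```yaml
include:
  - component: $CI_SERVER_FQDN/corporate-security/secret-detection@1.0.0
    inputs:
      stage: test               # maps the required input to a local stage
  # Pinning @1.0.0 shields this pipeline from a future 2.0.0 release;
  # @~latest would opt in to the bleeding edge instead.
```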
14

Compile-Time CI Expressions

3m 52s

Unlock ultimate pipeline dynamism with CI/CD configuration expressions. Learn how the compile-time syntax evaluates inputs and matrices before jobs ever execute.

Hi, this is Alex from DEV STORIES DOT EU. GitLab CI/CD, episode 14 of 14. You try to define a job's stage name using a standard CI variable, push your code, and immediately get a syntax error. The pipeline refuses to run. The problem is timing. The runner cannot define the pipeline structure using variables it only receives after the pipeline has already started. This is exactly where Compile-Time CI Expressions come in. Here is the key insight. Standard runtime variables, written with a single dollar sign, are evaluated by the shell when the job actually executes. By the time the runner sees them, the entire pipeline architecture is already locked. You cannot dynamically change a stage name, a service, or an image version during execution because GitLab needs that information upfront to build the pipeline graph. Compile-time expressions solve this by evaluating logic at the exact moment GitLab parses your YAML configuration, long before any runner is ever assigned. The syntax uses a dollar sign followed by double square brackets. Inside those brackets, you write an expression that evaluates to a value before the pipeline is created. These expressions draw their data from specific contexts. A context is essentially a restricted set of data available during YAML parsing. The most prominent context is the inputs context, which is heavily used when building CI/CD components. Take the scenario of a dynamic deployment component. You want the project consuming this component to pass an environment name as an input. You then want to use that input to set the job's stage name and dictate the specific Docker image version it pulls. Inside your component configuration, you write the stage field and assign it to a compile-time expression containing inputs dot environment. When a project includes your component, GitLab reads the provided environment input. It evaluates the expression immediately. The resulting pipeline graph sees a static, hardcoded stage name and a hardcoded image tag. The runner never encounters the double brackets. It simply receives standard configuration. Beyond inputs, compile-time expressions also support a matrix context, which is currently in beta. When you generate parallel jobs using a keyword like parallel matrix, you can use compile-time expressions to dynamically adjust job properties based on the specific variables assigned to each parallel instance. This prevents you from having to duplicate job definitions just to change one or two fields per matrix run. These expressions are more powerful than basic text replacement. You can write logic directly inside the brackets using equality operators, as well as logical AND and OR operators. You can evaluate whether an input matches a specific string, and conditionally change a value based on the result. You also have access to built-in functions. The expand vars function, for example, allows you to safely inject compile-time values into a string while explicitly preserving standard runtime variable syntax. This guarantees the runner still gets the runtime variables it expects without causing parsing conflicts early on. The critical takeaway is that compile-time expressions give you a native mechanism to template your pipeline graph, cleanly separating structural generation logic from the actual script execution. Take some time to read through the official GitLab documentation, review the supported functions, and try deploying a component hands-on. 
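A sketch of a component template using compile-time expressions, assuming the including pipeline defines the matching stage; the job name, image name, and inputs are illustrative.

```yaml
spec:
  inputs:
    environment:
      default: staging
    version:
      default: latest
---
deploy:
  stage: $[[ inputs.environment ]]            # resolved while the YAML is parsed
  image: "deployer:$[[ inputs.version ]]"     # hypothetical image name
  script:
    # By the time a runner sees this job, every $[[ ]] expression has
    # already been replaced with a plain, hardcoded value.
    - echo "Deploying to $[[ inputs.environment ]]"
```

Functions are applied inside the brackets with a pipe, for example $[[ inputs.message | expand_vars ]], which expands supported variables in the input value at parse time while leaving ordinary runtime variable syntax intact.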
If you have topics you would like to see covered in a future series, visit devstories dot eu and let us know. Thanks for spending a few minutes with me. Until next time, take it easy.