Season 26 · 10 Episodes · 35 min · 2026

Azure Pipelines

2026 Edition. Familiarize yourself with Azure DevOps (ADO) and how to build well-structured pipelines. Learn best practices, manage variables and secrets, and get practical advice on using Azure DevOps for enterprise software development.

1
YAML vs Classic Pipelines
We introduce Azure DevOps Pipelines and explore the critical shift from Classic UI-based pipelines to Pipeline-as-Code using YAML. You will learn why storing pipeline configurations alongside your application code is the industry standard for enterprise software.
3m 16s
2
Anatomy of a Pipeline: Stages, Jobs, and Steps
Dive into the structural hierarchy of Azure Pipelines. You will learn how to organize your CI/CD process logically using Stages, distribute workloads with Jobs, and execute exact commands with Steps.
3m 53s
3
Execution Context: Agents and Demands
Discover how Azure Pipelines executes your code using Agents. We cover the differences between Microsoft-hosted and Self-hosted agents, and how to use Demands to route jobs to the correct infrastructure.
3m 59s
4
Automating the Workflow with Triggers
Learn how to make your pipelines react automatically to events. We explore Continuous Integration (CI) triggers, Pull Request (PR) triggers, and Scheduled triggers to orchestrate complex release cadences.
3m 16s
5
State Management: Variables and Variable Groups
Master the art of passing state and configuration through your pipelines. This episode breaks down predefined system variables, custom pipeline variables, and how to share configurations across projects using Variable Groups.
3m 29s
6
Securing Secrets with Azure Key Vault
Stop storing sensitive credentials in your CI/CD tool. We explain how to integrate Azure Key Vault into Azure Pipelines to dynamically fetch passwords, API keys, and connection strings at runtime.
3m 45s
7
Dynamic Control: Conditions and Expressions
Learn how to make your pipelines smart and reactive. We dive into custom Conditions and Expressions to dynamically control which jobs and steps execute based on variable values and previous job outcomes.
3m 11s
8
Enterprise Reusability: YAML Templates
Scale your pipeline architecture across dozens of repositories using YAML Templates. Learn the difference between 'Includes' and 'Extends', and how to enforce security mandates organization-wide.
3m 45s
9
Targeting Deployments with Environments
Elevate your pipeline from just 'running code' to managing actual deployments. We cover the Deployment Job type, Environments, and deployment strategies like runOnce and Canary.
3m 45s
10
Enterprise Gates: Approvals and Checks
Put guardrails on your automated deployments. In this final episode, we explore how to configure Approvals, Branch Control, and Exclusive Locks on your Environments to protect production.
3m 32s

Episodes

1

YAML vs Classic Pipelines

3m 16s

Hi, this is Alex from DEV STORIES DOT EU. Azure Pipelines, episode 1 of 10. You click around a web interface, drag a few boxes, and your build works perfectly. Then someone changes a setting, the build breaks, and you have absolutely no history of who did what or why. That is the trap of visual editors, and it is exactly why the shift from Classic UI to YAML pipelines is the foundation of modern CI/CD. Azure Pipelines gives you two distinct ways to automate your software delivery. The older method is the Classic interface. It relies on a graphical web portal where you configure tasks using forms and drop-down menus. It feels accessible when you are starting out. But as your system grows, the Classic approach becomes a major liability. The configuration lives in the Azure DevOps database, entirely disconnected from your actual source code. YAML pipelines replace this with the concept of Pipeline-as-Code. Instead of configuring settings in a web portal, you define your entire build and release process in a plain text file. You commit this YAML file directly into your repository, keeping it right next to the application code it is meant to build. Some developers hesitate to switch, worrying that a text-based configuration might restrict features compared to the visual editor. That is not the case. YAML pipelines offer full feature parity for continuous integration and continuous delivery. Microsoft considers the Classic UI legacy and focuses new development on YAML. It is the enterprise standard for repeatability and auditing. Think about a team migrating a complex, drag-and-drop build definition into a repository file. In the old system, testing a new build step meant editing the shared pipeline configuration, potentially breaking the build for the rest of the team. With YAML, the pipeline is version-controlled just like your application. If you create a feature branch to upgrade a core dependency, you can modify the YAML file in that exact same branch. 
The updated build logic applies only to your isolated branch. The rest of the team continues using the main branch pipeline without interruption. Here is the key insight. Because your pipeline is now just a text file in Git, it falls under your standard code review process. When you move away from the UI, no one can quietly alter a deployment target or skip a testing step. Every change to the pipeline requires a commit. It requires a pull request. It requires an approval from a colleague. You gain a permanent, undeniable audit trail of every modification. Furthermore, your pipeline state is forever locked to your application state. If you need to roll back to a release from six months ago, the YAML file from that exact point in time is preserved in your Git history. You are guaranteed to have the correct build instructions for that specific older version of the code. The transition to YAML is about treating your delivery mechanism with the exact same engineering rigor as the software it delivers. If you find these episodes helpful and want to support the show, you can search for DevStoriesEU on Patreon. That is all for this one. Thanks for listening, and keep building!
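To make the Pipeline-as-Code idea concrete, here is a minimal sketch of what an azure-pipelines.yml file committed to a repository root might look like. The build script name is a placeholder, not something from the episode:

```yaml
# azure-pipelines.yml — lives at the repository root, versioned with the code
trigger:
  - main                      # run on every push to main

pool:
  vmImage: 'ubuntu-latest'    # a Microsoft-hosted agent

steps:
  - script: ./build.sh        # placeholder for your actual build command
    displayName: 'Build the application'
```

Because this file is part of the repository, editing it in a feature branch changes the pipeline only for that branch, and every change arrives through a reviewed pull request.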
2

Anatomy of a Pipeline: Stages, Jobs, and Steps

3m 53s

Hi, this is Alex from DEV STORIES DOT EU. Azure Pipelines, episode 2 of 10. The difference between a stage and a job dictates exactly where your pipeline will break if you misunderstand agent allocation. Get it wrong, and you might find your build artifacts suddenly missing right when it is time to run your tests. The anatomy of a pipeline—specifically stages, jobs, and steps—is the structure that resolves this. A pipeline is a hierarchical system. This structure strictly enforces how work is divided, where it runs, and in what order. The hierarchy is three levels deep. Stages contain jobs, and jobs contain steps. A stage acts as a logical boundary. It groups related work together. In an enterprise application, you use stages to separate major phases of your software delivery cycle. You might define a build stage, followed by a separate testing stage. By default, stages run sequentially. One stage must finish completely before the next begins. They act as organizational fences for your overall process. Here is the key insight. Engineers frequently confuse stages and jobs, treating them interchangeably. They do not serve the same purpose. A stage is purely a logical container. A job is a physical execution boundary. A job defines the actual environment where your code runs. Every job is assigned to an agent, which is the machine or container executing your tasks. All operations inside a single job execute on that one specific agent. Because jobs are the unit of execution, they behave differently than stages. If you place three jobs inside a single stage, Azure Pipelines will attempt to run those three jobs in parallel across three different agents, assuming you have the available capacity. This means jobs do not share local file systems or memory. If job A compiles your application code, job B cannot simply access the compiled binaries from the local drive. Job B is running on an entirely different machine. 
If you need two processes to share the same local disk space sequentially, they must be placed inside the same job. Inside a job, you define steps. A step is the smallest building block of a pipeline. It is a concrete instruction, usually a task or a script. Because steps live inside a specific job, they all run sequentially on the exact same agent. Step one might be a task to download your source code. Step two might be a bash script that runs your compiler. Since they share the same execution environment, step two has immediate, direct access to the files step one just downloaded. Apply this to structuring an enterprise pipeline that builds an application, runs unit tests, and packages the result. You create a single stage named Continuous Integration. Inside this stage, you define two separate jobs to speed up execution. Job one handles the primary build. Its steps check out the code, run the compiler, and package the binary. Job two handles static code analysis and standalone unit testing. Because these are separate jobs within the same stage, they run concurrently on separate agents. They do not block each other, but they also do not share a file system. If job two needs the binary created by job one, you must explicitly instruct job one to publish the artifact to pipeline storage, and instruct job two to download it. Structure dictates capability in your continuous integration architecture. Use stages to organize your workflow logically, but rely on jobs to control exactly where and how your execution scales across machines. Thanks for hanging out. Hope you picked up something new.
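The stage/job/step hierarchy and the artifact hand-off described here can be sketched roughly as follows. This example expands the scenario slightly to three jobs so both the parallel case and the artifact-sharing case are visible; all script names are hypothetical:

```yaml
stages:
  - stage: ContinuousIntegration
    jobs:
      - job: Build                     # gets its own agent
        steps:
          - script: ./compile.sh       # hypothetical build script
          - publish: '$(System.DefaultWorkingDirectory)/bin'
            artifact: drop             # push the binary to pipeline storage

      - job: Analyze                   # runs in parallel with Build
        steps:
          - script: ./run-lint.sh      # hypothetical static-analysis script

      - job: Test
        dependsOn: Build               # must wait: it needs Build's artifact
        steps:
          - download: current          # pull the binary back from pipeline storage
            artifact: drop
          - script: ./run-tests.sh '$(Pipeline.Workspace)/drop'
```

Note that Analyze and Build can run concurrently on separate agents, while Test must declare an explicit dependency before it can download what Build published.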
3

Execution Context: Agents and Demands

3m 59s

Hi, this is Alex from DEV STORIES DOT EU. Azure Pipelines, episode 3 of 10. Every pipeline needs computing power, but picking the wrong execution context can secretly double your build times. You might write perfect pipeline code, but if your build has to download gigabytes of dependencies from scratch every single run, your team will waste hours waiting. That bottleneck is exactly why you need to understand Execution Context, specifically Agents and Demands. To build code or deploy software in Azure DevOps, you need an agent. An agent is simply installable software that connects to your Azure DevOps organization, listens for work, and executes jobs one at a time. Every agent runs on a host machine, and you generally choose between two types of hosts. You can use Microsoft-hosted agents, or you can use self-hosted agents. Microsoft-hosted agents are the default choice for convenience. You ask for a machine, and Microsoft provides one from their cloud pool. You never have to patch the operating system or upgrade the agent software. But this convenience comes with a catch that trips up many teams. Here is the key insight. A Microsoft-hosted agent gives you a brand new, completely fresh virtual machine for every single pipeline run. It does not remember your last build. When your job finishes, that machine is permanently destroyed. If your build needs a specific gigabyte-sized package cache, it will download it over the network from zero on every run, unless you add explicit pipeline caching steps to save and restore that data. If you want to avoid downloading the world every time, or if your build needs to reach behind a strict corporate firewall, you use self-hosted agents. You install the agent software on your own infrastructure. That might be a server in your local data center or a persistent virtual machine in your own cloud tenant. 
Because the machine survives between runs, your package caches, downloaded software development kits, and incremental build files stay right where you left them. This drastically speeds up execution times. If you want a modern middle ground, you can look into Managed DevOps Pools. These act as a scale-set alternative where you define custom base images and sizes, and Azure handles the automatic provisioning and scaling. That covers what agents are. Now, how does Azure DevOps know which agent to use for a specific job? This relies on a system of capabilities and demands. Every self-hosted agent reports a list of capabilities back to the server. Many of these are discovered automatically by the agent software, like the operating system type or the path to installed tools like Node or Python. You can also define custom user capabilities, perhaps labeling a specific machine as having a specific graphics card. In your pipeline definition, you write demands to match those capabilities. A demand guarantees that your job will only be routed to an agent that possesses the exact capability you request. Consider a scenario where you are compiling an iOS application. Compiling iOS requires Xcode, which strictly requires Apple hardware. Suppose you have a single pool of twenty self-hosted agents, but only two of them are Mac Minis. You simply add a demand for the macOS capability to your pipeline job. When Azure DevOps evaluates the run, it filters out all the Windows and Linux machines in the pool, and routes your iOS build directly to one of the available Mac Minis. Your choice of agent dictates your entire pipeline architecture. Microsoft-hosted gets you out of the server maintenance business at the cost of managing state yourself, while self-hosted trades infrastructure upkeep for raw speed and persistent caches. That is all for this one. Thanks for listening, and keep building!
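The Xcode scenario from this episode might look like the following sketch in YAML. The pool name and scheme are hypothetical:

```yaml
jobs:
  - job: BuildiOSApp
    pool:
      name: 'CompanyAgents'            # hypothetical self-hosted pool of twenty machines
      demands:
        - Agent.OS -equals Darwin      # only the Mac Minis satisfy this demand
    steps:
      - script: xcodebuild -scheme MyApp build   # placeholder scheme name
```

When the run is evaluated, any agent in the pool whose reported capabilities do not match the demand is filtered out before the job is assigned.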
4

Automating the Workflow with Triggers

3m 16s

Hi, this is Alex from DEV STORIES DOT EU. Azure Pipelines, episode 4 of 10. You might think your pipeline only executes when someone pushes a commit. But in a mature system, pipelines are far more proactive, running different types of validation at entirely different times. We are talking about Automating the Workflow with Triggers. Triggers define exactly what events cause your pipeline to run. The most common type is the Continuous Integration, or CI, trigger. A CI trigger fires automatically whenever code is pushed to a specified branch. In your pipeline file, you define this using a simple trigger block. You can tell it to listen to the main branch, but ignore specific file paths, like documentation folders. This keeps your pipeline from wasting time building the application when someone only fixed a typo in a text file. That handles code that is already moving into a branch. But you usually want to catch errors before the merge happens. This is the job of Pull Request triggers. A PR trigger fires when a pull request is opened or when new commits are pushed to that existing pull request. Its primary purpose is to protect the target branch by validating the incoming code. Here is the key insight. There is a common trap developers fall into with PR triggers. When a pull request triggers a pipeline, Azure Pipelines evaluates the configuration using the YAML file located in the source branch, not the target branch. The logic governing the validation comes from the feature branch itself. If you make changes to the pipeline configuration in your feature branch, those changes apply to the PR run. PR triggers need to be fast. If you configure a PR trigger, you should use it to run quick operations, like unit tests and code linting. You want developers to get feedback in minutes. But some operations, like deep static analysis or heavy security scans, take much longer. A two-hour security scan will paralyze your team if you attach it to a PR trigger. 
That brings us to the third type: Scheduled triggers. Instead of reacting to code movement, scheduled triggers execute pipelines based on a clock. They use standard cron syntax to define specific days and times for the pipeline to run. You define a schedules block in your YAML file, specify the cron expression, and list the branches you want to build. You can combine these triggers to build an efficient workflow. During the day, your PR triggers run fast unit tests on feature branches to keep development moving. Meanwhile, you configure a scheduled trigger to kick off that two-hour security scan every midnight against the main branch. You can even set the scheduled trigger to execute only if new commits were merged since the previous night. This skips the run entirely over a quiet weekend, saving compute costs. Using these triggers together allows you to decouple fast developer feedback from deep system validation. Triggers are not just start buttons; they are the architectural controls that decide how and when your computing resources are spent. Appreciate you listening — catch you next time.
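Combined, the three trigger types from this episode might be declared like this sketch:

```yaml
trigger:                         # CI trigger: fires on pushes to main...
  branches:
    include:
      - main
  paths:
    exclude:
      - docs/*                   # ...but not for documentation-only changes

pr:                              # PR trigger: validates pull requests targeting main
  branches:
    include:
      - main

schedules:
  - cron: '0 0 * * *'            # every midnight (UTC), standard cron syntax
    displayName: 'Nightly deep security scan'
    branches:
      include:
        - main
    always: false                # skip the run if nothing was merged since last time
```

Setting `always: false` is what gives you the quiet-weekend behavior: the scheduled run is skipped entirely when no new commits have landed.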
5

State Management: Variables and Variable Groups

3m 29s

Hi, this is Alex from DEV STORIES DOT EU. Azure Pipelines, episode 5 of 10. Hardcoding environment names directly into your YAML files is technical debt waiting to explode during your next infrastructure migration. When those names change, you do not want to hunt through dozens of repositories to update them. State Management using Variables and Variable Groups solves this by centralizing your configuration. The pipeline already knows a lot about its execution context out of the box. Azure provides predefined system variables to give your pipeline environmental awareness without any manual setup. For example, if you need to know which branch triggered the run, you use Build dot SourceBranch. If you need the unique ID of the current run, you use Build dot BuildId. These are automatically populated and ready to read. When you need custom configuration, you define inline variables. You place these directly inside your YAML file under the variables block. This is perfect for values specific to a single pipeline, like a build configuration flag or a local file path. It keeps the logic self-contained. However, inline variables fail when you scale. Take a scenario where you have three separate microservice pipelines. They all need to know the name of the target deployment environment and a set of shared API endpoint URLs. If you define these inline, you are repeating yourself across three repositories. If an endpoint changes, you have three pull requests to make. The solution is a Library Variable Group. This is a centralized key-value store kept inside the Azure DevOps Library. You create the group once in the user interface, populate it with your environment names and API endpoints, and then reference that group by name in the variables block of all three microservice pipelines. When migration day comes, you update the Variable Group in the DevOps portal, and every pipeline instantly uses the new values on their next run. 
You enforce the Don't Repeat Yourself principle, keeping your shared configuration in exactly one place. Here is the key insight. How you call a variable changes when Azure evaluates it. There are two primary syntaxes, and mixing them up will break your pipeline. The first is macro syntax, written as a dollar sign followed by parentheses enclosing the variable name. Macro syntax is evaluated at runtime. Azure replaces the variable right before the specific task that uses it executes. The second is template expression syntax, written as a dollar sign followed by double curly braces containing the word variables dot and your variable name. Template expressions are evaluated at compile time, before the pipeline even starts running. If you are trying to use a variable to decide whether a stage should run at all, or to define a loop over a list of jobs, you must use template expression syntax. The pipeline needs that value up front to build the execution graph. If a value is generated dynamically during the pipeline run by a script, you must use macro syntax, because the value simply does not exist at compile time. Master the difference between compile time template expressions and runtime macros, and you will eliminate the most frustrating variable parsing errors in your pipeline development. That is your lot for this one. Catch you next time!
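Here is a sketch showing both syntaxes side by side; the variable group name is hypothetical:

```yaml
variables:
  - group: shared-endpoints            # hypothetical Library variable group
  - name: buildConfiguration
    value: 'Release'                   # inline, pipeline-specific value

steps:
  # Macro syntax: $(...) is substituted at runtime, just before the task executes.
  - script: echo "Building $(buildConfiguration) for run $(Build.BuildId)"

  # Template expression syntax: ${{ ... }} is resolved at compile time,
  # before the execution graph exists — so it can gate whole steps.
  - ${{ if eq(variables['Build.SourceBranchName'], 'main') }}:
      - script: echo "This step only exists in main-branch runs"
```

The compile-time step above is not skipped at runtime; it simply never becomes part of the pipeline unless the branch condition holds when the YAML is expanded.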
6

Securing Secrets with Azure Key Vault

3m 45s

Hi, this is Alex from DEV STORIES DOT EU. Azure Pipelines, episode 6 of 10. Masking a secret in your pipeline logs does not mean it is secure if you typed it directly into your CI/CD configuration. When a database administrator rotates that password, your deployments instantly break until someone manually updates the pipeline. To fix this, you fetch credentials dynamically using Azure Key Vault. People often define secret variables straight in the Azure DevOps user interface. You click the padlock icon, the text disappears, and you assume you are secure. Storing secrets in the pipeline user interface creates a fragmented security posture. If an enterprise policy mandates rotating credentials every thirty days, finding and updating every pipeline variable across dozens of projects is an operational nightmare. Furthermore, you lose central auditing over who or what accessed that data. The secure-by-design approach is fetching secrets dynamically at runtime from an enterprise Key Vault. The pipeline never owns the secret. It just borrows it exactly when needed. To pull this off, you use the Azure Key Vault pipeline task. First, the pipeline needs permission to look inside your vault. You configure an Azure Resource Manager service connection in your DevOps project. This connection relies on either a Service Principal or a Managed Identity. You then grant that identity explicit read access to the secrets in your Key Vault using Azure role-based access control or vault access policies. Here is the key insight. The pipeline runs as this identity, proving who it is to Azure before pulling any sensitive data. This fits a zero-trust model. No permanent credentials exist in your code repository or pipeline definitions. Consider a pipeline fetching a highly sensitive database connection string just before deploying a backend service. In your pipeline definition, right before the deployment step, you add the Azure Key Vault task version two. 
You provide three main inputs to this task. First, you specify the name of your Azure subscription service connection. Second, you provide the name of your actual Key Vault. Third, you define a list of the specific secret names you want to download. You can pass a wildcard asterisk to download everything, but least-privilege principles dictate you only ask for exactly what you need. You specifically request a secret named production-database-connection. When the pipeline reaches this step, it calls out to the vault. If the identity has permission, the vault hands back the secret. The task takes that secure value and automatically transforms it into a standard pipeline variable. The name of the new pipeline variable exactly matches the name of the secret in the vault. Your next deployment step can now reference that variable to configure the database, just like any standard text input. Azure Pipelines also registers this downloaded value as a secret variable in the background. If a script accidentally prints the connection string to the console, the system intercepts it and replaces the actual text with asterisks in the logs. Dynamic fetching ensures that when the database team updates the primary password in the vault, your pipeline automatically uses the new string on its very next run with zero manual intervention. Your continuous delivery system should act like a secure courier, picking up the locked briefcase right before delivery, rather than storing a permanent copy at the depot. That is all for this one. Thanks for listening, and keep building!
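The fetch described here uses the Azure Key Vault task, version two. A sketch, with hypothetical service connection and vault names:

```yaml
steps:
  - task: AzureKeyVault@2
    inputs:
      azureSubscription: 'prod-arm-connection'        # hypothetical ARM service connection
      KeyVaultName: 'contoso-prod-vault'              # hypothetical vault name
      SecretsFilter: 'production-database-connection' # least privilege: name only what you need

  # The secret now exists as a masked pipeline variable of the same name.
  - script: ./deploy.sh                               # hypothetical deployment script
    env:
      DB_CONNECTION: $(production-database-connection)
```

Passing the secret through `env` rather than as a command-line argument keeps it out of process listings on the agent, and the masking still scrubs it from any log output.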
7

Dynamic Control: Conditions and Expressions

3m 11s

Hi, this is Alex from DEV STORIES DOT EU. Azure Pipelines, episode 7 of 10. The most reliable enterprise pipelines do not just run blindly from top to bottom. They react dynamically when things go wrong, tearing down infrastructure after a crash or skipping heavy test suites during a minor hotfix. To build that kind of resilience, you need Dynamic Control: Conditions and Expressions. Every job and step in Azure Pipelines has a condition attached to it, whether you write it explicitly or not. By default, the system applies a built-in condition called succeeded. This means a step only executes if all its previous dependencies finished without throwing an error. If step one fails, step two gets skipped automatically. People often misunderstand when these checks happen. A condition evaluates whether a step should run, and it does this completely before the step or job begins. A step can never use a condition to evaluate its own internal output. The condition is the gatekeeper standing at the entrance. It looks at the state of the pipeline up to that exact moment and decides if the step is allowed to start. Sometimes you explicitly need a job to run when a previous step breaks. Take a deployment job that spins up temporary test infrastructure. If the deployment crashes halfway through, the default behavior skips the rest of the pipeline. Your temporary servers stay online, silently burning through your cloud budget. You fix this by adding a dedicated cleanup job at the end of the pipeline, but you change its condition from succeeded to failed. Now, this cleanup job ignores successful runs completely. It only wakes up to destroy the temporary infrastructure if the primary deployment jobs crash. You are not limited to basic status checks like succeeded, failed, or always. You can write custom expressions to evaluate variable states and make granular routing decisions. Azure Pipelines uses a functional syntax for these expressions. 
Instead of writing mathematical symbols, you use named functions. If you want to check if a variable equals a specific value, you use a function called eq. You open parentheses, pass in the pipeline variable you are checking, add a comma, and provide the value you expect. You can combine multiple checks by nesting these functions. Suppose you have a release job that should only run if the pipeline succeeded and the current branch is main. You start with an and function. Inside its parentheses, you pass two arguments. The first argument is the succeeded function. The second argument is your eq function, which compares the source branch variable against the text string for the main branch. The release job will only trigger if both statements evaluate to true. Using expressions lets you build pipelines that adapt to the context of the run. Before we wrap up, if you want to support the show, you can search for DevStoriesEU on Patreon, which is always appreciated. Here is the part that matters. True pipeline resilience does not come from preventing every possible error, but from using conditions to ensure your system knows exactly how to handle failure when it inevitably happens. Thanks for tuning in. Until next time!
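The cleanup-on-failure pattern and the nested expression from this episode might be sketched like this; the script names are hypothetical:

```yaml
jobs:
  - job: Deploy
    steps:
      - script: ./provision-test-env.sh        # hypothetical provisioning script

  - job: Release
    dependsOn: Deploy
    # run only if everything succeeded AND we are on main
    condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
    steps:
      - script: ./release.sh

  - job: Cleanup
    dependsOn: Deploy
    condition: failed()                        # wakes up only when Deploy crashes
    steps:
      - script: ./teardown-test-env.sh         # destroy the temporary infrastructure
```

On a successful run, Cleanup is skipped entirely; on a failed Deploy, Release is skipped and Cleanup runs instead.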
8

Enterprise Reusability: YAML Templates

3m 45s

Hi, this is Alex from DEV STORIES DOT EU. Azure Pipelines, episode 8 of 10. You have fifty microservices, and you just updated the build definition for one of them. Now you have to manually copy and paste that YAML block forty-nine more times. If you are copying and pasting code between pipelines, you are accumulating massive technical debt and making global security updates impossible. The solution is Enterprise Reusability: YAML Templates. Templates let you define pipeline logic once and reuse it anywhere. In Azure Pipelines, templates work in two entirely different ways: Includes and Extends. An Include template acts exactly like a copy-and-paste operation performed by the pipeline compiler. You take a common sequence, like installing a set of dependencies or publishing an artifact, save it as a standalone YAML file, and then reference it from your main pipeline. When the pipeline runs, it pulls the contents of that external template directly into your active job or stage. This is useful to avoid repeating yourself, but it still leaves the developer in complete control of the pipeline structure. They decide if, when, and where to include your template. Here is the key insight. When a central platform team needs to enforce rules, they do not use Includes. They use Extends. An Extends template flips the control structure upside down. Instead of the developer pipeline pulling in pieces of logic, the developer pipeline declares that it extends a central template. That central template dictates the exact stages, jobs, and overall skeleton of the entire pipeline. The developer is only allowed to pass their specific instructions into the specific slots the template explicitly leaves open. Take a security team mandate as an example. They require every microservice to run a static application security testing, or SAST, code scanner before any code compiles. To enforce this, they write an Extends template that defines a job with two steps. 
The first step is the mandatory SAST scanner. The second step is a placeholder for developer actions. The development team pipeline file does nothing but point to this central template and supply their specific build commands to that placeholder. The platform team guarantees the scanner runs first, every single time, without ever needing to audit fifty individual YAML files. To pass these commands or other information into templates, you use Template Parameters. People often confuse parameters with variables, but their behavior is fundamentally different. Variables are evaluated at runtime and are generally just loosely structured text. Parameters are evaluated at compile time. Before the pipeline even starts executing, Azure DevOps parses the templates and resolves all parameters. Because this happens at compile time, parameters offer strict safety checks. You can define precise parameter types, like boolean, number, or a list of steps. You can enforce default values or restrict inputs to a predefined list of allowed strings. If a developer tries to pass a text string into a boolean parameter, the pipeline refuses to compile. This strong typing prevents runtime failures and ensures the template behaves exactly as the central team intended. By forcing the pipeline structure to be evaluated and type-checked before execution begins, an Extends template acts as an architectural boundary. It completely separates what developers build from how the organization secures it. That is all for this one. Thanks for listening, and keep building!
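The SAST mandate described here might be sketched as an Extends template like the following. Both file names, the scanner script, and the `templates` repository alias are hypothetical:

```yaml
# secure-pipeline.yml — hypothetical central template owned by the platform team
parameters:
  - name: buildSteps
    type: stepList          # strongly typed: anything else refuses to compile
    default: []

jobs:
  - job: SecureBuild
    steps:
      - script: ./run-sast-scan.sh                  # mandatory scanner, always first
      - ${{ each step in parameters.buildSteps }}:  # the slot left open for developers
          - ${{ step }}

# A consuming microservice pipeline would then contain little more than:
#
#   extends:
#     template: secure-pipeline.yml@templates
#     parameters:
#       buildSteps:
#         - script: dotnet build
```

The developer pipeline cannot reorder or remove the scanner; it can only fill the slot the template exposes.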
9

Targeting Deployments with Environments

3m 45s

Elevate your pipeline from just 'running code' to managing actual deployments. We cover the Deployment Job type, Environments, and deployment strategies like runOnce and Canary.

Hi, this is Alex from DEV STORIES DOT EU. Azure Pipelines, episode 9 of 10. You can easily write a standard pipeline job that runs a shell script to push code to a server. But when a critical bug hits and someone asks what exactly went into production last Tuesday, that standard job gives you nothing but console logs. To get a real audit trail of what was deployed and where, you need Targeting Deployments with Environments. An Environment in Azure DevOps is a logical collection of resources that you target with a deployment. You name it something like production or staging, and it serves as the tracking anchor. Instead of using a standard pipeline job to push your code, you use a special job type, declared with the deployment keyword. When you link a deployment job to an Environment, Azure DevOps automatically tracks the exact commits and work items that are being deployed to that specific target. You get a complete deployment history in the user interface without writing any extra logging logic. There is a major difference in how deployment jobs behave compared to standard jobs. A standard job automatically downloads your source code repository. A deployment job does not. It completely skips the checkout step by default. This trips up a lot of developers. The reason is simple. By the time you reach the deployment phase, you should be deploying a pre-compiled artifact or a container image created in an earlier build stage. You generally do not need raw source code on a deployment runner. If you actually need the source repository, you have to explicitly tell the deployment job to check it out. When you define a deployment job, you do not list a flat sequence of tasks. You wrap those tasks in a deployment strategy. The most common strategy is runOnce. As the name suggests, it simply executes your deployment steps sequentially against the environment. If you need something more complex, you can use the canary strategy.
Canary lets you deploy to a small percentage of your servers, monitor for errors, and then gradually roll out the new version to the rest. This limits the damage of a bad release. Inside these strategies, your tasks are organized into lifecycle hooks. This enforces a clean structure. First, you have the preDeploy hook, where you might initialize resources or run database migrations. Then comes the deploy hook, which pushes the new version of your application. After that, the routeTraffic hook handles shifting network requests over to the newly deployed version. Finally, you can use postRouteTraffic to run health checks or clean up old resources. If anything goes wrong, there are also failure and success hooks, nested under the on keyword, to handle rollbacks or notifications. Take a scenario where you use the runOnce strategy to push a container image to a Kubernetes namespace. In your pipeline, you define a deployment job targeting your production environment. Inside the runOnce strategy, you use the deploy hook to define a task that takes your built container artifact and applies the deployment manifest to the cluster. You do not need to check out the repository because the container image is already built and stored in your registry. When this pipeline runs, the tasks execute and Azure DevOps records the action. Because it targets an Environment, you can open the interface, click on production, and see exactly which commit triggered the image build, which developer authored it, and that it successfully reached the Kubernetes cluster. Here is the key insight. Moving from a standard job to a deployment job does not just change how your pipeline is structured. It turns your pipeline from a blind automation script into a fully traceable deployment history. Thanks for tuning in. Until next time!
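The Kubernetes scenario from this episode could look roughly like the sketch below. The environment name, namespace, and manifest path are illustrative, and it assumes a Kubernetes resource or service connection has already been configured for the environment:

```yaml
jobs:
  - deployment: DeployApi              # a deployment job, not a standard job
    environment: production            # links this run to the Environment's history
    strategy:
      runOnce:
        deploy:
          steps:
            # No checkout here: the image was built and pushed in an earlier stage.
            # Add `- checkout: self` explicitly if you do need the source code.
            - task: KubernetesManifest@1
              inputs:
                action: deploy
                namespace: my-app
                manifests: manifests/deployment.yml
```

A canary strategy follows the same shape, but adds an `increments` list of rollout percentages and typically fills the `postRouteTraffic` hook with health checks and `on: failure:` with a rollback step.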
10

Enterprise Gates: Approvals and Checks

3m 32s

Put guardrails on your automated deployments. In this final episode, we explore how to configure Approvals, Branch Control, and Exclusive Locks on your Environments to protect production.

Hi, this is Alex from DEV STORIES DOT EU. Azure Pipelines, episode 10 of 10. You never want an automated pipeline pushing a massive, irreversible database migration to production on a Friday evening. A fully automated pipeline is great until it triggers at the wrong time or without human oversight. This is where Enterprise Gates, specifically Approvals and Checks, resolve that tension. Before examining the specific checks, we need to clear up a common misconception. Approvals and checks are not defined in your YAML pipeline file. They are configured in the Azure DevOps user interface, directly on the resource itself, like an Environment or a Service Connection. This distinction matters deeply. If these rules were in the YAML file, a developer could edit the file on a feature branch, remove the production checks, and bypass the controls. By placing the configuration on the resource, the resource owner enforces the rules. It makes your compliance tamper-proof. The pipeline code simply requests to use the resource, and the resource decides if the deployment conditions are met. Let us apply this to a concrete scenario. You have an environment named Production. You want to enforce release safety and ITIL compliance without losing your automation. First, you want to prevent deployments during the weekend. On the Production environment in the UI, you add a Business Hours check. You define the allowed time window, perhaps Monday to Thursday, nine to five in your local time zone. If a pipeline tries to deploy outside this window, it pauses. It waits in a pending state until the business hours begin. No more Friday night migrations. Next, you need a human sanity check. You add a Manual Approval check and assign it to a specific group, like the QA team. When the pipeline reaches the deployment stage for Production, it halts. An email goes out to the QA team. They review the changes and explicitly approve or reject the run in the Azure DevOps portal. 
Only after approval does the pipeline resume. You can even enforce the evaluation sequence, requiring the Business Hours check to pass before the manual approval notification is sent. Now, you must guarantee that experimental code does not slip through. You implement Branch Control. You add a check stating that only the main branch is allowed to target the Production environment. If someone triggers the pipeline from a feature branch, the check fails automatically. The deployment is blocked before it even attempts to run. Finally, there is the issue of concurrent deployments. If two developers merge code ten minutes apart, you might end up with two pipelines trying to update the same production infrastructure simultaneously. The Exclusive Lock check prevents this race condition. It ensures only one pipeline run can access the environment at a time. The second pipeline simply waits in a queue until the first one finishes, guaranteeing a clean, sequential deployment history. Here is the key insight. Approvals and checks take the power to deploy away from the pipeline code and hand it to the infrastructure owner, creating a secure, unchangeable boundary around your critical systems. Since this is the final episode of our Azure Pipelines series, I highly recommend digging into the official Microsoft documentation and trying these configurations hands-on. If you want to suggest topics for our next series, drop by devstories dot eu. That is all for this one. Thanks for listening, and keep building!
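Because the checks live on the resource, the YAML that targets Production stays deliberately minimal. A sketch, with illustrative stage and script names:

```yaml
# Nothing check-related appears in the pipeline file. Manual Approval,
# Business Hours, Branch Control, and Exclusive Lock are all configured
# on the Production environment in the Azure DevOps UI.
stages:
  - stage: DeployProduction
    jobs:
      - deployment: Deploy
        environment: Production   # every check on this resource gates the job
        strategy:
          runOnce:
            deploy:
              steps:
                - script: ./deploy.sh
                  displayName: Deploy to production
```

Editing this file on a feature branch changes nothing about the gates: the run still pauses at the stage until the environment's owner-configured checks pass.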