Season 24 · 14 Episodes · 52 min · 2026

GitHub Actions

2026 Edition. A comprehensive, technical deep dive into GitHub Actions, covering its execution model, advanced workflows, environment gating, runners, and security.

CI/CD DevOps

Episodes

1

The Enterprise Case for GitHub Actions

4m 29s

An executive comparison of GitHub Actions versus traditional CI/CD tools like Azure DevOps and GCP Cloud Build. We explore the architectural advantages of event-driven automation living alongside your source code.

Hi, this is Alex from DEV STORIES DOT EU. GitHub Actions, episode 1 of 14. Most enterprise builds are treated as an afterthought in an entirely separate tool. You push code, a context switch happens, and you hope a webhook fires correctly on a remote server somewhere else. The Enterprise Case for GitHub Actions resolves this gap by moving your pipeline execution directly next to your source code. The standard misconception is treating GitHub Actions strictly as a CI/CD tool, as if it were just a modern replacement for Jenkins. Call it what it actually is. It is a flexible, event-driven automation engine deeply embedded into your repository. It responds to practically any state change within the version control platform. Look at a standard enterprise setup using tools like GCP Cloud Build or Azure DevOps. The source code lives in GitHub, but the execution happens elsewhere. This requires managing cross-platform service accounts, maintaining fragile webhooks, and synchronizing access controls across multiple vendors. When a pipeline fails, developers leave their repository, log into a separate cloud console, and dig through logs disconnected from their pull request. Proximity to code removes this friction. When your automation engine is built directly into the version control platform, you eliminate the integration tax. The user identity, the branch protection rules, and the context of the code change are natively understood by the compute instance running your pipeline. Consider a traditional pipeline in a separate cloud build tool. It typically listens for a code commit, pulls the source down, builds an artifact, and reports a simple pass or fail status back to the repository. Its entire worldview is limited to code compilation and deployment. A GitHub Action operates on a much wider scale. Because it natively understands repository events, you can build a workflow that triggers the exact moment a pull request opens. In one seamless run, it can read the event payload, assign the right senior engineers as reviewers based on which specific files changed, execute a linter, and post any syntax failures directly as inline comments on the exact lines of code. The developer resolves the issues without ever leaving the pull request view. You automate the workflow itself, not just the build artifact. From an architectural standpoint, this shifts your repository from a passive storage volume into an active controller. You stop automating just your deployments and start automating your operational governance. If an issue is tagged with a critical label, an action can automatically provision a temporary testing database. If a security vulnerability is flagged by an automated dependency scan, an action can instantly open a tracking ticket, assign the security team, and ping your internal chat system. Everything uses the exact same underlying compute infrastructure. In a large organization, you can define these workflows centrally and share them across hundreds of repositories. If your security team updates the required container scanning policy, they update one central action. Every repository calling that action immediately inherits the new security requirement without individual product teams rewriting their pipeline scripts. This centralization provides massive leverage for platform engineering teams. Here is the key insight. Enterprise architecture consistently struggles with tool sprawl.
Every new pipeline tool adds cognitive load, requires dedicated maintenance, and creates another surface area for security policies. By consolidating your execution layer directly into GitHub, you standardize how every repository behaves. The exact same infrastructure that deploys your production microservices also manages your repository maintenance. The ultimate architectural advantage of GitHub Actions is not that it has faster build agents or better caching. It is that it removes the boundary between the developer workflow and the continuous integration system. If you find these episodes useful and want to support the show, you can search for DevStoriesEU on Patreon. Thanks for listening, happy coding everyone!
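A minimal sketch of the pull-request automation this episode describes. The trigger and permissions are real GitHub Actions syntax; the linter command and the reviewer handle are illustrative placeholders, not anything prescribed by the episode.

```yaml
name: pr-automation
on:
  pull_request:
    types: [opened, synchronize]

permissions:
  pull-requests: write   # lets the job request reviewers and comment

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Placeholder linter; swap in your organization's tool of choice.
      - run: npx eslint .
      # The gh CLI is preinstalled on GitHub-hosted runners; the reviewer
      # handle below is hypothetical and must be a repository collaborator.
      - env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: gh pr edit ${{ github.event.pull_request.number }} --add-reviewer senior-reviewer
```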
2

The Execution Mental Model

3m 30s

A technical breakdown of the GitHub Actions hierarchy. Understand the critical relationship between Workflows, Jobs, Steps, and Actions.

Hi, this is Alex from DEV STORIES DOT EU. GitHub Actions, episode 2 of 14. A workflow is not just a simple script that runs from top to bottom. It is a parallelized state machine that distributes your code across multiple virtual machines at the exact same time. If you write your configuration expecting one continuous process, your data will vanish halfway through. Understanding the execution mental model prevents this. At the highest level is the workflow. This is a configurable automated process defined in a YAML file inside your repository. It contains the blueprint for what should happen when triggered. A workflow itself does not execute code directly. It orchestrates the next level down. A workflow contains one or more jobs. This is where the physical execution boundaries are set. By default, every job in a workflow runs in parallel. If you define a build job and a test job, GitHub spins up a separate virtual machine, called a runner, for each job. They start at the exact same time. They know nothing about each other. The build job runs on runner A, and the test job runs on runner B. Because they execute on entirely different virtual machines, they do not share a filesystem, environment variables, or memory. Inside a job, the execution model changes completely. A job is made up of a sequence of steps. While jobs run in parallel across different machines, steps run sequentially on the exact same machine. Step one finishes before step two begins. Because they run on the same runner, steps share data. In our build job, step one might download your application code. Step two compiles it. Step three packages it. Because these steps happen on the same virtual machine, step two natively reads the files downloaded by step one. This brings up a common point of confusion. People often mix up steps and actions. A step is not an action. A step is simply a unit of execution within a job. It is a slot in your sequence. You can fill that slot in two ways. You can write a raw shell command, or you can call an action. An action is a packaged, reusable block of code designed to perform a specific complex task, like setting up a language environment. An action is the reusable payload. The step is the container holding it inside the job sequence. Let us look at our build and test scenario again. The workflow starts. Two runners boot up simultaneously. On the build runner, the steps execute one by one. The first step calls a checkout action to fetch the repository. The second step runs a shell command to compile the code. These steps share the local disk space seamlessly. Meanwhile, on the test runner, a completely different sequence of steps is running in isolation. If your test job needs the compiled output from the build job, it cannot just look on the hard drive. The test job is on a different server. Here is the key insight. The boundary of a job is the physical boundary of the virtual machine, which means writing a GitHub Actions workflow is actually an infrastructure mapping exercise. Thanks for hanging out. Hope you picked up something new.
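A minimal sketch of that hierarchy in workflow YAML, assuming a generic make-based project; the job and step contents are placeholders.

```yaml
name: build-and-test
on: push

jobs:
  build:                            # job one: gets its own virtual machine
    runs-on: ubuntu-latest
    steps:                          # steps run sequentially on that one machine
      - uses: actions/checkout@v4   # a step that calls an action
      - run: make build             # a step running a raw shell command; sees step one's files
  test:                             # job two: a separate VM, started in parallel
    runs-on: ubuntu-latest          # shares no filesystem, env, or memory with build
    steps:
      - uses: actions/checkout@v4
      - run: make test
```

If test genuinely needs build's compiled output, you would add `needs: build` to serialize the jobs and hand the files across machines as an artifact, the subject of episode 6.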
3

Event-Driven Triggers and Filters

3m 58s

Deep dive into GitHub Actions event triggers. Learn how to configure precise path and branch filters to control exactly when your workflows execute.

Hi, this is Alex from DEV STORIES DOT EU. GitHub Actions, episode 3 of 14. A single automated commit is pushed to your repository. That commit triggers a workflow. That workflow makes another commit, which triggers the workflow again. Within minutes, you have burned through hundreds of run minutes in an accidental infinite loop. The way you prevent this, and control exactly when your workflows execute, is through Event-Driven Triggers and Filters. Every GitHub Actions workflow starts with the on keyword. This tells GitHub which events should wake up your workflow. You can specify a single event, like a push, or multiple events in an array. But events are not always simple triggers. Some events, like issues or pull requests, have multiple activity types. When an issue is opened, edited, or closed, it fires the same base event. If you just write that a workflow runs on issues, it runs for all of those activities. To be precise, you specify the exact activity types. You can tell GitHub to only run the workflow when an issue is opened, ignoring edits or closures. This saves run minutes and prevents unnecessary processing. Activity types handle what happened, but branch and path filters handle where it happened. When you trigger a workflow on a push or pull request, you usually do not want it running on every single branch. You use branch filters to target specific destinations, like the main branch or release branches. You can also filter by paths. If a developer fixes a typo in a readme file, you do not need to run your entire test suite. Path filters allow you to include or exclude specific files and directories. Here is the key insight. When you mix positive and negative path filters, the order in which you write them matters. A positive filter tells the workflow to run if a specific path changes. A negative filter, denoted by an exclamation mark, tells the workflow to ignore changes in a path. GitHub evaluates these top to bottom. If you put a negative filter after a positive filter, the negative filter overrides the positive one for any matching files. Let us put this into practice. You want to trigger a build whenever code is pushed to the main branch, but you want to save money by ignoring changes that only affect documentation. Under the on keyword, you specify the push event. Below that, you define a branch filter for main. Then, you add a paths filter. You might start with a positive filter for everything, using a wildcard. Right below it, you add a negative filter for the docs folder, specifically targeting markdown files. If a commit only modifies a markdown file in the docs folder, the workflow stays asleep. If a commit modifies a python file and a markdown file, the workflow runs because the python file triggers the positive filter. Now back to that infinite loop. When your workflow runs, GitHub provides a temporary credential called the GitHub Token to authenticate against the repository. By design, any events triggered using this specific token will not create new workflow runs. This is a built-in safety mechanism to prevent recursive loops. However, if your workflow uses a Personal Access Token to commit code or push tags, that safety net is gone. The new commit will trigger another workflow run, which makes another commit, creating an infinite loop. If you must use a Personal Access Token, you have to be extremely disciplined with your branch and path filters to ensure the automated commit does not meet the trigger conditions for the workflow. 
The most effective way to optimize your compute costs is not writing faster code, but simply ensuring your workflows only run when they absolutely have to. Thanks for listening, happy coding everyone!
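A sketch of the trigger block walked through above, using real filter syntax: the positive catch-all comes first, and the negative documentation filter below it overrides it for matching files.

```yaml
on:
  push:
    branches:
      - main               # only pushes to main wake this workflow
    paths:
      - '**'               # positive: run for any changed file...
      - '!docs/**/*.md'    # ...negative: except markdown under docs; later filters win
```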
4

State Evaluation with Variables and Contexts

3m 36s

Understand the critical differences between environment variables and GitHub Contexts. Learn when each is evaluated during the workflow lifecycle.

Hi, this is Alex from DEV STORIES DOT EU. GitHub Actions, episode 4 of 14. You try to use a runner environment variable to dictate whether a job should start, and your pipeline fails before it even spins up a machine. You are certain the variable exists, but GitHub acts like it is empty. The problem is not your variable, it is your timing. To fix this, we need to talk about State Evaluation with Variables and Contexts. In GitHub Actions, your workflow state is evaluated in two completely separate phases, happening in two completely different places. This is the core difference between a Context and an Environment Variable. Let us look at Contexts first. Contexts are collections of information evaluated directly by GitHub, before your workflow is even sent to a runner. They hold data about the workflow run, the repository, the webhook event that triggered the run, and the user who initiated it. Because GitHub evaluates Contexts immediately, you can use them to control the structure of your pipeline. You access a Context using a specific expression syntax, usually a dollar sign followed by double curly braces containing the context name. On the other hand, default environment variables are evaluated later, on the actual runner machine that executes your job. When the runner boots up, it automatically sets several default environment variables, like the repository name or the current branch. You access these exactly like you would in a normal bash or PowerShell script. You can also define your own environment variables using the env key in your workflow file. You can attach the env key to an entire workflow, a single job, or a specific step. Here is the key insight. The lifecycle dictates what you can use and where. A common mistake is trying to use a runner environment variable inside an if conditional at the job level. If you tell a job to run only if a specific bash environment variable equals a certain value, the workflow will fail. The runner has not started yet. The machine does not exist, so the environment variable does not exist. To make decisions before the runner boots, you must use a Context. Let us look at a practical scenario. You want a deployment job to run only if the code is merged into the main branch. At the job level, you write an if conditional. You use the context expression to check if the github dot ref context equals the string refs slash heads slash main. GitHub evaluates this instantly. If it is true, GitHub provisions a runner and sends the job to it. Once the job is on the runner and your steps begin executing, you transition to environment variables. Inside a bash script step in that same job, you might need to know the branch name to tag a build artifact. Here, you simply type a dollar sign followed by GITHUB underscore REF. The runner reads this from its local operating system environment. You are referencing the exact same piece of data, the branch name, but you are accessing it through completely different mechanisms depending on where the execution currently lives. Contexts route the workflow on GitHub servers. Environment variables drive the execution scripts on the runner. If you ever find yourself fighting empty variables in your workflow logic, ask yourself whether the machine evaluating that logic has actually booted up yet. As always, thanks for listening. See you in the next episode.
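Here is how the two evaluation phases from this episode might look side by side; the deploy job itself is a hypothetical stand-in.

```yaml
jobs:
  deploy:
    # Phase one: evaluated by GitHub before any runner boots.
    # Only a context expression can exist here; no shell environment exists yet.
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    steps:
      # Phase two: evaluated on the runner's operating system.
      # The shell reads a default environment variable set at boot.
      - run: echo "Tagging artifact for $GITHUB_REF"
```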
5

The Security Boundary: Secrets and GITHUB_TOKEN

3m 54s

A technical look at secrets management in GitHub Actions. We explore the ephemeral GITHUB_TOKEN and the hierarchy of repository and organization secrets.

Hi, this is Alex from DEV STORIES DOT EU. GitHub Actions, episode 5 of 14. Developers often mint long-lived Personal Access Tokens for basic repository tasks, leaving permanent credentials sitting in their code base. But for most GitHub interactions, you do not need to create a token at all—there is a secure, ephemeral token waiting for you in every job. This is the focus of today's episode: The Security Boundary: Secrets and GITHUB_TOKEN. When you trigger a workflow, GitHub automatically provisions a unique secret called the GITHUB_TOKEN. This is not a standard personal access token. It functions as a short-lived GitHub App installation access token. It exists exclusively for the duration of the workflow job. The moment the job finishes, or after a maximum of 24 hours, the token expires and becomes completely useless. A common mistake is generating a permanent Personal Access Token just to have a workflow add a label to an issue or post a comment on a pull request. This expands your attack surface unnecessarily. The built-in GITHUB_TOKEN already possesses the permissions needed to interact with the repository that triggered the workflow. If your job needs to add a label to an issue, you pass this built-in token to the step executing the API call. No permanent credential is ever created, stored, or exposed. That covers interacting with GitHub itself. But your workflow will inevitably need to talk to the outside world. This is where custom encrypted secrets come in. When you create a custom secret, it does not sit as plain text on a GitHub server. The value is encrypted locally using a Libsodium sealed box before it is even transmitted. The encryption relies on public-key cryptography. When you add a secret via the web interface or API, GitHub provides a public key. Your client uses that key to seal the box. Only the isolated runner virtual machine holds the corresponding private key required to open that box, and it only decrypts the payload at the exact moment the job executes. You can define these encrypted secrets at different levels depending on your architecture. Repository-level secrets apply to a single codebase. Organization-level secrets allow you to share a single credential, like a production database password, across multiple repositories. This centralizes credential management but requires strict control. Organization secrets use access policies where you explicitly define which repositories are allowed to read the secret. Let us apply this to a concrete scenario. Say you have an organization-level database password required for a database migration step. You do not want this password accessible to the entire workflow. GitHub Actions enforces a strict security boundary here. Secrets are not automatically injected into the environment of every step. You must explicitly map the secret to an environment variable in the specific step that requires it. When writing the workflow file, you access the encrypted value using a specific context reference, calling it from the secrets object, and assign it to a local environment variable. Because you mapped it explicitly, the step before the migration cannot see the password, and the step after it cannot see the password. Here is the key insight. Security in automation is about minimizing lifespan and limiting scope. Rely on the ephemeral GITHUB_TOKEN for internal repository actions to avoid managing permanent credentials, and strictly map encrypted secrets only to the individual steps that require external access. 
That is your lot for this one. Catch you next time!
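Two patterns from this episode condensed into one hypothetical file: an issue-labeling job that leans on the ephemeral GITHUB_TOKEN, and a migration job that maps a hypothetical organization secret, PROD_DB_PASSWORD, into exactly one step.

```yaml
on:
  issues:
    types: [opened]

permissions:
  issues: write   # scope the ephemeral token to exactly what the job needs

jobs:
  label:
    runs-on: ubuntu-latest
    steps:
      - env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}   # expires when the job ends
          GH_REPO: ${{ github.repository }}       # tells the gh CLI which repo to target
        run: gh issue edit ${{ github.event.issue.number }} --add-label triage

  migrate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: echo "preparing"   # this step cannot read the password
      - name: Run migration
        env:
          DB_PASSWORD: ${{ secrets.PROD_DB_PASSWORD }}   # hypothetical org-level secret
        run: ./migrate.sh       # placeholder script; only this step sees the value
```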
6

Optimizing Data: Caching vs Artifacts

4m 10s

Learn the precise difference between Dependency Caching and Workflow Artifacts. Stop slowing down your builds with the wrong storage mechanism.

Hi, this is Alex from DEV STORIES DOT EU. GitHub Actions, episode 6 of 14. You set up a multi-job workflow, and to make sure the second job has everything it needs, you bundle up your entire dependencies folder and pass it along. But suddenly, your build takes five minutes longer, and your storage usage hits the roof. The problem is a fundamental mix-up between two distinct data mechanisms, and that is exactly what this episode covers: Optimizing Data: Caching vs Artifacts. We will clear up the confusion right away. Caching and artifacts both move files around your GitHub Actions runners, but they solve entirely different problems. Think of artifacts as what your workflow produces. Think of caching as what your workflow consumes. If you swap these roles, you will silently bottleneck your entire pipeline. Artifacts are generated files that you want to keep after a job finishes. This could be a compiled binary, a test coverage report, or a zipped archive of your final build directory. They exist for two primary reasons. First, to let you download the final output of your workflow once it completes. Second, to pass generated data between different jobs within the exact same workflow run. Because each job runs on a fresh virtual machine, any files created in job one are instantly lost when that job ends unless you explicitly upload them. By using the upload artifact action, you save those files to GitHub storage. Then, job two uses the download artifact action to pull them into its own clean workspace. Now, compare that to dependency caching. Caching is purely a performance optimization designed to speed up your workflow across different runs over time. When you build software, you usually download thousands of third-party dependencies, like packages from NPM or pip. Fetching these over the network on every single run is slow. Instead of downloading fresh dependencies every time, the cache action saves your downloaded dependency folder on the cache servers. It assigns this cache a unique key, almost always based on the hash of your lock file. On tomorrow's workflow run, GitHub checks if the lock file matches an existing key. If it does, it restores the folder directly into your runner in seconds, bypassing the package registry entirely. Here is the key insight. The biggest mistake you can make is using the upload artifact action to move a massive dependency folder, like node modules, between jobs. Artifacts process data by zipping it, uploading it, downloading it, and unzipping it. Doing this with tens of thousands of tiny text files adds massive network latency to your run time and eats into your account storage quotas. Artifacts are not built for raw speed; they are built for safe data transfer and permanent record keeping. Caches, on the other hand, are highly optimized to pull heavy dependency trees quickly, and they automatically age out and delete themselves to save space. Picture a proper pipeline that uses both correctly. You have a workflow with a build job and a deploy job. In the build job, your first step uses the cache action to instantly restore your NPM dependencies from yesterday's run. Your code compiles quickly, producing a final application binary. You then use the upload artifact action to save just that single binary file. The build job ends, and the runner is destroyed. The deploy job spins up on a new runner. It does not need the cache, and it does not need the NPM dependencies. 
It just uses the download artifact action to grab the compiled binary you just built, and then pushes it to your production server. Caching is a disposable shortcut for things you download, while artifacts are the essential hand-off for things you create. Thanks for hanging out. Hope you picked up something new.
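A sketch combining both mechanisms the way the episode describes, assuming an npm project; the paths, script names, and artifact name are placeholders.

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/cache@v4
        with:
          path: ~/.npm                                    # consumed input: cache it
          key: npm-${{ hashFiles('package-lock.json') }}  # key tied to the lock file
      - run: npm ci && npm run build
      - uses: actions/upload-artifact@v4
        with:
          name: app-binary
          path: dist/app                                  # produced output: hand it off

  deploy:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/download-artifact@v4
        with:
          name: app-binary      # just the binary; no cache, no node_modules
      - run: ./deploy.sh app    # placeholder deploy script assumed to exist
```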
7

Controlling Flow with Concurrency

3m 16s

Master workflow execution control. Learn how to use the concurrency keyword to cancel redundant runs and prevent overlapping deployments.

Hi, this is Alex from DEV STORIES DOT EU. GitHub Actions, episode 7 of 14. A developer pushes three rapid-fire commits to an open pull request in five minutes. If your CI server is grinding through three redundant test suites, you are burning money. You need a way to tell the system that only the latest code matters, and that is exactly what controlling flow with concurrency handles. By default, GitHub Actions runs triggered workflows in parallel. If you push five times, it spins up five independent runners. For basic checks, this wastes runner minutes. For deployments, it is actively dangerous. If two workflow runs try to deploy to the same staging environment simultaneously, you create a race condition. The older commit might even finish deploying after the newer one, overwriting your updates and leaving your environment in an outdated state. You fix this using the concurrency keyword. You can apply it at the top level of the entire workflow or restrict it to a specific job. The mechanism relies entirely on concurrency groups. A concurrency group is just a string name you define. If two runs share the exact same group name, GitHub enforces concurrency limits between them. Many people make the mistake of hardcoding this string. If you set the group name to just the word deploy, then pushing to your testing branch will cancel an active deployment on your main production branch. The group name must be dynamic. You construct it using context variables, like the current branch reference. If you name the group using the pull request number, then concurrency limits apply only to pushes within that specific pull request. Your main branch remains completely unaffected. When a new workflow is triggered, GitHub checks if a run is already active for that concurrency group. If there is, the default behavior is to place the new run in a pending state. It waits in a queue. Here is the key insight. The queue only holds one pending job. If a third run is triggered while the first is running and the second is pending, the second run is ejected from the queue. Only the absolute latest run gets to wait. Waiting is safer for deployments, but for pull request testing, waiting still consumes unnecessary time. This is where the cancel-in-progress setting comes in. It is a boolean flag you add under your concurrency group definition. When you set cancel-in-progress to true, you change the behavior from queuing to terminating. Go back to the developer pushing three rapid-fire commits. The first push triggers a test suite. Two minutes later, they push again. Because you set the concurrency group to the branch name and enabled cancel-in-progress, GitHub sees the new run, instantly terminates the active test suite for the first commit, and starts testing the second commit. When the third push happens a minute later, it terminates the second run and starts the third. The developer still gets the final pass or fail, but you only paid to execute the tests once. Concurrency controls turn your CI pipelines from a reactive system that blindly executes every trigger into a state-aware system that only spends resources on the code that actually matters. Thanks for listening, happy coding everyone!
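A minimal sketch of the pull-request pattern from this episode; because the group name is built from the ref, concurrency limits never leak across branches.

```yaml
name: pr-tests
on: pull_request

concurrency:
  group: pr-tests-${{ github.ref }}   # dynamic: one group per pull request ref
  cancel-in-progress: true            # terminate the stale run instead of queuing it

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm test   # placeholder test command
```

For deployments you would leave cancel-in-progress at its default of false, so a newer run queues safely behind the in-flight release instead of killing it mid-deploy.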
8

Gating Deployments with Environments

3m 57s

Discover how to map your GitHub Actions workflows to external deployment targets using Environments to enforce manual approvals and isolate secrets.

Hi, this is Alex from DEV STORIES DOT EU. GitHub Actions, episode 8 of 14. Your production database credentials should never be accessible to a random pull request from a feature branch. Yet, if all your secrets are stored at the repository level, any workflow can potentially grab them. Resolving this security gap is exactly why you need Gating Deployments with Environments. First, we need to clear up a common misconception. An environment in GitHub Actions is not a physical server. GitHub does not spin up a virtual machine or a cloud instance called production for you. An environment is purely a logical boundary configured inside your GitHub repository settings. It acts as a gatekeeper, controlling when a job is allowed to run and what data that job is allowed to see. The primary reason this feature exists is to enforce release governance and protect sensitive data. When you create an environment, you can attach specific secrets and variables directly to it. You might create one environment named staging and another named production. Each gets its own unique set of API keys, stored under the exact same variable name. In your workflow file, you link a specific job to an environment simply by stating its name. When the workflow runs, the job referencing the staging environment gets the staging keys. The job referencing the production environment gets the live keys. This isolates your sensitive data completely. A job running in a feature branch cannot read the production keys because it does not run in the production environment context. But isolating secrets is only half the power of environments. The other half is protection rules. These rules act as strict gates in your deployment pipeline. The most common protection rule is a required reviewer. Let us look at how this flows in a real scenario. You have a workflow that builds your application and automatically deploys it to staging. That job succeeds. The very next job in the workflow is configured to deploy to production, and it references your production environment. If you have a required reviewer rule set up for that environment, the workflow stops right there. The job pauses. It is not dispatched to a runner, and the production API keys remain locked safely away. GitHub sends a notification to the designated manager or team indicating that a deployment is waiting. The workflow will sit in this pending state until action is taken. Only when the manager clicks approve in the GitHub interface does the gate open. At that exact moment, the job is dispatched to an available runner, the production secrets are decrypted and injected, and the deployment script executes. You can layer other protection rules on top of manual approvals. One option is a wait timer, which forces a job to delay for a specified number of minutes before it starts. This gives you a buffer to cancel a rollout if you notice a spike in error rates on your monitoring dashboards right after a staging deployment. You can also configure deployment branches. This restricts the environment so that it only accepts jobs running from specific branches, like your main branch or specific release tags. If a developer tries to force a deployment job to production from a random bugfix branch, the environment gate simply rejects the run. Here is the key insight. Environments decouple your deployment mechanics from your access control. 
Your workflow file describes the exact steps required to deploy your code, but the environment settings inside GitHub dictate who has the authority to let that happen and which secrets are unlocked when they do. If you enjoy these technical deep dives, you can support the show by searching for DevStoriesEU on Patreon. Thanks for listening, happy coding everyone!
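In workflow YAML, the gate is nothing more than a job-level reference to the environment name. A sketch under hypothetical names: deploy.sh and the URL are placeholders, and the required-reviewer rule itself lives in the repository settings, not in this file.

```yaml
jobs:
  deploy-staging:
    runs-on: ubuntu-latest
    environment: staging              # staging-scoped secrets unlock here
    steps:
      - run: ./deploy.sh "$API_KEY"
        env:
          API_KEY: ${{ secrets.API_KEY }}   # resolves to the staging value

  deploy-production:
    needs: deploy-staging
    runs-on: ubuntu-latest
    environment:
      name: production                # pauses here until a required reviewer approves
      url: https://example.com        # optional link shown on the deployment record
    steps:
      - run: ./deploy.sh "$API_KEY"
        env:
          API_KEY: ${{ secrets.API_KEY }}   # same variable name, production value
```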
9

Passwordless Cloud Access via OIDC

4m 08s

Eliminate long-lived cloud credentials from your repositories. Learn how to use OpenID Connect (OIDC) to securely authenticate GitHub Actions with AWS, Azure, and GCP.

Hi, this is Alex from DEV STORIES DOT EU. GitHub Actions, episode 9 of 14. The most secure cloud credential is the one that expires five minutes after your deployment finishes. Yet, many teams still copy long-lived administrator keys into repository settings, hoping they never leak. Today, we fix that by looking at Passwordless Cloud Access via OIDC. Historically, connecting a GitHub workflow to a cloud provider meant generating an access key in AWS, Google Cloud, or Azure, and saving it as a long-lived GitHub Secret. If that secret leaked, anyone could use it from anywhere until an administrator manually revoked it. People often think OpenID Connect, or OIDC, is just a new type of secret you paste into your repository. It is not. OIDC replaces stored secrets entirely. It is a protocol that generates a dynamic, cryptographic token on the fly to prove the workflow identity to your cloud provider. GitHub acts as an OIDC Identity Provider. When a workflow runs, it can ask GitHub to issue a JSON Web Token, or JWT. To make this happen, you must add a specific permission to your workflow file. You set the permissions block to allow write access for the ID token. This tells GitHub the workflow is authorized to generate an identity credential. Here is the key insight. This token does not grant access on its own. It is simply a digitally signed document containing claims. Claims are pieces of metadata that state verifiable facts about the workflow currently running. The token includes the repository name, the organization, the branch, the environment, and the event that triggered the run. Because GitHub cryptographically signs the token, your cloud provider can trust these claims. This forms the foundation of a zero-trust deployment. Let us look at a deployment job pushing an image to an AWS Elastic Container Registry. The workflow starts and requests an OIDC token from GitHub. The token is generated, containing claims that verify it comes from the main branch of your backend repository. The workflow then sends this token to AWS. AWS first checks the digital signature to ensure the token actually came from GitHub. Then, it reads the claims. It checks these claims against a strict trust policy you defined earlier. The policy might say it only accepts tokens from your specific repository and only if the workflow is running on the main branch. Because the claims match, AWS accepts the token. AWS does not return a permanent key. Instead, it issues a temporary access token. This token might be valid for just fifteen minutes. Your workflow uses this temporary credential to push the container image to the registry. When the job finishes, the credential expires. There is nothing to rotate, and nothing stored in GitHub that an attacker could extract. When setting this up, pay close attention to the subject claim, often called the sub claim. This is the primary field cloud providers use to filter access. By default, GitHub formats the subject claim to include the repository name and the git reference, such as the branch or tag. You must ensure your cloud trust policy strictly validates this subject claim. If you only check the organization name, any repository in your organization could request cloud resources. By linking temporary cloud access to specific workflow metadata, you guarantee that a compromised repository cannot touch your infrastructure unless the request comes from the exact branch and environment you explicitly trust. That is all for this one. Thanks for listening, and keep building!
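A sketch of the AWS variant using the aws-actions/configure-aws-credentials action; the role ARN, account ID, and region are hypothetical, and the matching trust policy on the AWS side must pin the sub claim as the episode warns.

```yaml
permissions:
  id-token: write   # authorizes this job to request an OIDC JWT from GitHub
  contents: read

jobs:
  push-image:
    runs-on: ubuntu-latest
    steps:
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/github-deploy   # hypothetical role
          aws-region: eu-west-1
      # AWS has validated the token's claims and returned short-lived STS
      # credentials. The trust policy should pin the subject, for example:
      #   "token.actions.githubusercontent.com:sub": "repo:my-org/backend:ref:refs/heads/main"
      - run: aws sts get-caller-identity   # proves the temporary identity works
```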
10

Scaling DRY Pipelines

3m 48s

Compare Reusable Workflows and Composite Actions. Learn which mechanism to choose when standardizing your CI/CD pipelines across an entire enterprise.

Hi, this is Alex from DEV STORIES DOT EU. GitHub Actions, episode 10 of 14. When managing continuous integration across fifty repositories, copying and pasting the exact same YAML is a maintenance nightmare waiting to happen. You change one security scanning tool, and suddenly you have to manually update fifty separate files. This episode is about Scaling DRY Pipelines to solve exactly that. To stop repeating yourself in GitHub Actions, you have two primary tools. Reusable Workflows and Composite Actions. Engineers frequently pick the wrong one because both avoid duplication. A Composite Action is not a workflow. It is simply a bundle of steps. A Reusable Workflow is a complete pipeline that bundles entire jobs. If your shared logic needs to span multiple machines, orchestrate complex dependencies between jobs, or manage secrets securely, you must use a Reusable Workflow. A Composite Action takes a sequence of steps, like checking out code, setting up a language environment, and installing dependencies, and packs them into a single custom action. When a workflow uses this action, all those bundled steps execute sequentially inside the current existing job. The Composite Action does not decide what runner machine it executes on. It runs wherever the parent job places it. It exists purely to clean up repetitive step logic within a single execution environment. Reusable Workflows operate at a much higher architectural level. They are complete YAML files that another workflow can trigger. The workflow making the request is the caller workflow, and the workflow being triggered is the called workflow. Because a called workflow defines entire jobs, it controls the infrastructure. One job inside the reusable workflow could run on an Ubuntu runner to build an application, while a dependent job runs on a macOS runner to test it. Consider a Platform Engineering team standardizing a Node deployment pipeline. They want to ensure every team runs identical security checks before shipping code. Instead of trusting fifty product teams to maintain identical YAML files, the platform engineers create one central Reusable Workflow in a shared repository. This central file defines the exact sequence of jobs required to scan, build, and deploy the application. The fifty product repositories then create a minimal caller workflow. This caller workflow contains just one job that points directly to the platform team's shared file using its path. You pass configuration data down to the called workflow using inputs, specifying parameters like the target environment name or the node version. The caller workflow can also pass secrets down. You can map specific secrets explicitly, or instruct the called workflow to simply inherit all secrets available to the caller. When the platform team needs to rotate a deployment secret or add a new static analysis tool, they update the central called workflow. Instantly, all fifty repositories run the new security check on their next commit. The product teams touch zero configuration. Here is the key insight. Use Composite Actions to hide messy step logic inside a single environment, but use Reusable Workflows to enforce standardized pipeline architectures across your entire organization. That is all for this one. Thanks for listening, and keep building!
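A sketch of the caller side of that setup; the repository path, workflow filename, and input names are hypothetical stand-ins for the platform team's real contract.

```yaml
name: deploy
on:
  push:
    branches: [main]

jobs:
  standard-pipeline:
    # One job that delegates to the platform team's reusable workflow.
    uses: platform-team/shared-workflows/.github/workflows/node-deploy.yml@v1
    with:
      node-version: '20'        # inputs defined by the called workflow
      environment: production
    secrets: inherit            # or map individual secrets explicitly
```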
11

Crafting Custom Actions: Docker vs JavaScript

3m 19s

Take control of your pipeline by building Custom Actions. We explore the performance and compatibility tradeoffs between JavaScript and Docker container actions.

Hi, this is Alex from DEV STORIES DOT EU. GitHub Actions, episode 11 of 14. A container guarantees exact tool versions, but it comes with a hidden latency cost on every single workflow run. Your environment consistency might be slowing your team down and locking them into a single operating system without you even realizing it. Crafting Custom Actions: Docker vs JavaScript resolves this tension. A custom action is reusable logic you write once and share across multiple repositories. When you build one, you must choose an execution architecture. The two primary models are JavaScript and Docker. Many engineers default to Docker actions because they want absolute safety and predictable dependencies. They assume a container is the most robust choice. The reality is that building a Docker action permanently locks out any macOS or Windows runners from using your tool. Docker actions only execute on Linux environments. JavaScript actions take a different approach. They execute directly on the host machine. You write your logic, compile it down to a single file with all its dependencies, and point the action metadata to that entry file. When a workflow triggers, the runner uses its built-in Node runtime to execute your script. This decouples your logic from the underlying operating system. The exact same action will run natively on Linux, macOS, and Windows runners without modification. Docker container actions, by contrast, package the operating system, the system dependencies, and your code together into an immutable unit. You dictate the exact environment. The runner reads a Dockerfile provided by your action, builds or pulls the container image, and runs your code inside that isolated space. Here is the key insight. The strict isolation of a Docker action introduces a cold-start penalty. Consider a team building a custom code linting tool to share across their organization. If they build it as a Docker action, the runner has to pull that container image before it can evaluate a single line of code. That might add a fifteen-second delay to the start of every linting job. Across hundreds of pull requests a day, that idle time compounds into hours of wasted compute. If the team builds that identical linting logic in JavaScript, the runner simply downloads the script file and executes it instantly. Your choice of architecture dictates how your action behaves in the wild. If your tool relies on complex system binaries, requires a very specific version of a language compiler, or wraps legacy Bash scripts that are brittle outside a specific Linux distribution, Docker is the correct choice. You pay the latency tax in exchange for guaranteed stability. If your goal is to build a fast, widely adopted tool that works across any project type, JavaScript is the better path. The architectural choice between Docker and JavaScript for a custom action is never about which programming language you prefer writing in, it is a hard trade-off between strict environment control and cross-platform execution speed. That is your lot for this one. Catch you next time!
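The choice is declared in the action's metadata file. A sketch of both variants, with the action name and file paths as placeholders:

```yaml
# action.yml for the JavaScript variant: runs on Linux, macOS, and Windows.
name: org-linter
description: Example linting action (hypothetical)
runs:
  using: node20
  main: dist/index.js    # one bundled file containing all dependencies

# The Docker variant of the same metadata (Linux runners only):
# runs:
#   using: docker
#   image: Dockerfile    # built or pulled before a single line of logic runs
```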
12

Fleet Management: Hosted vs Self-Hosted Runners

3m 27s

Navigate the boundaries of GitHub's runners. Learn when to rely on GitHub-hosted machines and when your architecture demands Self-Hosted runners.

Hi, this is Alex from DEV STORIES DOT EU. GitHub Actions, episode 12 of 14. GitHub cloud runners are extremely convenient, until your integration tests need to hit a legacy database securely hidden behind a corporate firewall. Suddenly, a public runner cannot reach your private data. This is exactly where Fleet Management: Hosted vs Self-Hosted Runners comes in. By default, GitHub Actions uses GitHub-hosted runners. When a workflow triggers, GitHub spins up a fresh virtual machine. You can request Ubuntu, Windows, or macOS. The runner executes your job, reports the result, and then the virtual machine is immediately destroyed. It is a clean slate every single time. You do not manage the operating system, you do not install security patches, and you do not worry about leftover files from a previous build. However, that isolation is a double-edged sword. Because those runners exist in GitHub's cloud infrastructure, they operate with dynamic IP addresses and have no direct access to your private networks. If you have an internal application, or a database that cannot be exposed to the public internet, you need a runner that lives inside your own security perimeter. This is a self-hosted runner. You provide the hardware. It can be a physical server in a data center, a virtual machine in your cloud provider, or a container. You install the GitHub Actions runner application on that machine. The runner connects outward to GitHub, retrieves pending jobs, runs them locally, and sends the logs back. Because it lives on your network, it can talk securely to your internal infrastructure without punching holes through your firewall. Here is the key insight. You own the hardware, which means you own the maintenance. You are responsible for operating system updates, network security, and installing necessary dependencies like language runtimes or build tools. A common misconception is that self-hosted runners behave exactly like the hosted ones. They do not. GitHub-hosted runners are ephemeral by design. Standard self-hosted runners are stateful. When a job finishes on a default self-hosted runner, the machine stays running. If your job writes a temporary file, starts a background process, or pulls a large container image, all of that remains on the disk when the next job starts. This creates serious cross-contamination risks. A faulty build script in one pull request could leave corrupted files behind, causing the deployment job that runs right after to fail. You have to actively configure your self-hosted infrastructure to be ephemeral if that is what you want, often by using webhooks to spin up fresh containers per job. If you simply need more processing power or a static IP address, but still want GitHub to handle the machine maintenance, there is a middle ground called larger runners. These are GitHub-hosted machines where you define the hardware specifications and network features, but they remain ephemeral and managed by GitHub. Ultimately, the decision between hosted and self-hosted is rarely just about compute costs. It is a fundamental trade-off between the convenience of a zero-maintenance, disposable environment and the necessity of controlling your own network boundaries. Thanks for spending a few minutes with me. Until next time, take it easy.
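Routing a job to your own machine is a one-line change in the workflow. A sketch, where the extra labels and the internal hostname are hypothetical:

```yaml
jobs:
  integration-tests:
    # Matches a runner you registered yourself with these labels.
    runs-on: [self-hosted, linux, x64]
    steps:
      - uses: actions/checkout@v4
      # Reachable only because the runner lives inside the corporate network.
      - run: ./run-tests.sh --db-host legacy-db.internal.corp   # placeholder script
```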
13

Kubernetes Scale: Actions Runner Controller

3m 40s

Discover how the Actions Runner Controller (ARC) orchestrates ephemeral, auto-scaling runner fleets natively on your Kubernetes clusters.

Hi, this is Alex from DEV STORIES DOT EU. GitHub Actions, episode 13 of 14. You have a massive team, and at nine in the morning, a hundred developers push code simultaneously. Your static build servers either choke on the queue, or you over-provisioned them and they sit there wasting expensive compute all night long. Kubernetes Scale: Actions Runner Controller fixes this by turning your build pipeline into a dynamic, container-native system. Standard self-hosted runners are usually static virtual machines or long-lived containers. You set them up, register them to a repository, and they sit there polling for work. This setup guarantees that you are paying for idle time, and when a sudden spike in builds hits, your queue simply backs up until an existing runner finishes its current task. Actions Runner Controller, or ARC, fundamentally changes that model. ARC is a Kubernetes operator that orchestrates auto-scaling runner scale sets. Instead of maintaining long-lived worker nodes, it provisions ephemeral, Just-in-Time runners based on the exact size of your queue. To pull this off without overwhelming API rate limits, ARC relies on two main architectural components inside your Kubernetes cluster. First is the Listener pod. The Listener uses an HTTPS long poll to connect to GitHub. Rather than requiring you to open inbound firewall ports to receive webhooks, the Listener reaches out to GitHub and holds the connection open. It sits there quietly waiting for GitHub to pass down a Job Available message. When the Listener receives that message, it hands the information off to the Controller pod. The Controller acts as the provisioning engine. It immediately talks to the Kubernetes API to spin up a brand new runner pod specifically for that single pending job. This pod is a Just-in-Time ephemeral runner. It boots up, receives a short-lived registration token, executes the workflow, and then immediately terminates itself. Let us look back at that nine AM code rush. A hundred developers push commits at the exact same time. The Listener pod detects the sudden burst of Job Available messages from GitHub. It alerts the Controller, which instantly requests one hundred ephemeral Kubernetes pods. Your cluster scales out, allocating nodes if necessary, and the jobs execute in parallel. As each workflow finishes, its pod is completely destroyed. By nine fifteen, the queue is clear, and your runner count scales all the way back to zero. You used massive parallel compute for exactly fifteen minutes, and then stopped paying for it. Here is the key insight. Because every single job runs in a freshly provisioned, isolated pod that is destroyed right after execution, you completely eliminate state contamination between builds. A dirty cache, a leftover background process, or an altered environment variable from a previous run simply cannot break the next one. You gain the security of a pristine build environment every time, coupled with the exact compute efficiency of Kubernetes auto-scaling. The true value of the Actions Runner Controller is that it stops you from treating continuous integration runners as heavy infrastructure you have to maintain, turning them instead into purely transient compute that only exists while a job is actively running. Thanks for spending a few minutes with me. Until next time, take it easy.
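A sketch of the values you might hand to ARC's runner scale set Helm chart; the key names come from the gha-runner-scale-set chart, while the organization URL, secret name, and limits are illustrative assumptions.

```yaml
# values.yaml for the gha-runner-scale-set Helm chart (sketch).
githubConfigUrl: https://github.com/my-org   # org or repo the runners serve
githubConfigSecret: arc-github-app-secret    # Kubernetes secret holding GitHub App credentials
minRunners: 0       # scale to zero when the queue is empty
maxRunners: 100     # ceiling for the nine AM rush

# Workflows then target the fleet by the Helm installation name, e.g.:
#   runs-on: arc-runner-set
```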
14

Supply Chain Integrity with Attestations

3m 40s

Secure your software supply chain. Learn how to generate unfalsifiable artifact attestations and provenance directly from your workflows.

Hi, this is Alex from DEV STORIES DOT EU. GitHub Actions, episode 14 of 14. A user downloads your compiled command line tool from a release page. If a compromised dependency or a malicious actor swapped that binary after it was built, a standard checksum check will not save them. Standard hashes only prove the file they downloaded matches the file hosted, not where it actually came from. To guarantee a file was generated by your exact code, you need Supply Chain Integrity with Attestations. An attestation is not just a build log or a text file containing a hash sitting next to your release. It is a cryptographically signed, unfalsifiable claim about the provenance of an artifact. It binds your final binary directly to the exact commit SHA, the specific workflow run, and the OpenID Connect identity of the build environment. The process happens entirely within your GitHub Actions workflow. After your code compiles, you use the official build provenance action. This action calculates the checksum of your finished artifact and contacts a centralized signing authority, specifically Sigstore. The workflow exchanges its temporary, job-specific OpenID Connect token for a short-lived signing certificate. Because this token is minted by GitHub Actions and mapped directly to your repository, it acts as an unforgeable identity card for the workflow runner. This interaction generates a permanent cryptographic record. This record states unequivocally that this exact file hash was produced by your repository, triggered by a specific commit, during a specific workflow run. The signature is attached to the artifact, creating a bundle of proof that travels with the file wherever it gets hosted. Here is the key insight. The real security value happens on the consumer side. When a user downloads your command line tool, they do not have to blindly trust the hosting provider or the download mirror. Before they execute the binary, they use the GitHub CLI to verify the attestation locally. They run an attestation verify command against the downloaded executable, explicitly specifying your repository owner as the expected source. The CLI inspects the cryptographic signature and checks it against the public transparency log. If the signature is valid, it mathematically proves the file was compiled by your official workflow. If a malicious actor intercepted the download, tampered with the compilation, or replaced the binary on the release page, the verification fails instantly. The signature cannot be forged because the attacker can never possess the temporary OpenID Connect identity token generated inside your secure workflow run. The identity is inextricably linked to the GitHub Actions infrastructure at the exact moment of the build. This mechanism closes a critical gap in software supply chain security. You are no longer asking users to trust a storage location. Instead, the artifact becomes self-authenticating. Attestations shift your security model from trusting where a file lives to mathematically proving where it was born. Since this is the final episode of our GitHub Actions series, I strongly encourage you to get hands-on and start building these workflows yourself. Visit devstories dot eu to suggest topics you want to see covered in our future series. That is all for this one. Thanks for listening, and keep building!
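A sketch of the signing and verification halves described above. The attest-build-provenance action and the gh attestation command are the official tooling the episode refers to; the build command, artifact path, and owner are placeholders.

```yaml
permissions:
  id-token: write       # the workflow's OIDC identity, exchanged for a signing certificate
  contents: read
  attestations: write   # permission to store the attestation

jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make build   # placeholder build producing dist/mytool
      - uses: actions/attest-build-provenance@v1
        with:
          subject-path: dist/mytool   # the file whose hash gets signed

# Consumers verify locally before running the binary:
#   gh attestation verify dist/mytool --owner my-org
```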