Season 27 · 10 Episodes · 38 min · 2026

Pulumi: Infrastructure as Code

2026 Edition. A step-by-step guide to using Pulumi for Infrastructure as Code, covering core concepts, Azure deployments, and Terraform migration.

Infrastructure as Code · DevOps

Episodes

1

The Developer's Infrastructure: Why Pulumi?

3m 27s

Discover why developers are moving away from domain-specific languages and YAML for cloud provisioning. We explore how Pulumi enables Infrastructure as Code using general-purpose programming languages. You will learn the fundamental difference between declarative cloud state and the imperative languages used to define it.

Hi, this is Alex from DEV STORIES DOT EU. Pulumi: Infrastructure as Code, episode 1 of 10. What if you could write your cloud infrastructure using the exact same programming language you use for your application? No more context-switching between your application logic and thousands of lines of custom markup. The Developer's Infrastructure: Why Pulumi? covers exactly this shift. Historically, Infrastructure as Code meant writing domain-specific languages or endless blocks of YAML. If you wanted to deploy a database and a server, you learned a syntax exclusive to that specific provisioning tool. You lost the rich ecosystem of standard software engineering. You could not easily write a loop, integrate standard testing frameworks, or share logic using standard package registries. Pulumi changes this foundation. It is an Infrastructure as Code platform that allows you to build, deploy, and manage cloud resources using general-purpose programming languages. Instead of learning a bespoke configuration language, you use TypeScript, Python, Go, C#, or Java. Consider a developer defining cloud resources using TypeScript. With traditional tools, you constantly switch context to look up YAML schemas in a web browser. With Pulumi, you stay in your editor. You instantiate a resource class, type a dot, and standard IDE auto-completion shows you exactly what properties are available. You get inline type-checking before you ever run a deployment. If you need three identical storage buckets, you write a standard for-loop. If your company enforces specific security rules on every server, you abstract that logic into a standard function, publish it to a standard package registry, and let other teams import it just like any regular code library. Here is the key insight. Because you write this infrastructure in imperative languages, you might assume Pulumi is just an automation script that calls cloud APIs sequentially. It is not an imperative execution script. Pulumi still uses a highly robust declarative state model behind the scenes. When you execute your Pulumi program, it does not immediately provision resources line by line. Instead, your code runs to construct the desired end state of your infrastructure. The Pulumi engine captures these resource definitions and builds a strict dependency graph. It then compares this desired state against the current actual state of your cloud environment. Pulumi calculates the exact difference and performs only the specific create, update, or delete operations required to make reality match your code. You write with the expressive flow of an imperative language, but you get the safety and predictability of a declarative deployment engine. By adopting general-purpose languages, you unlock access to existing linters, unit testing frameworks, and continuous integration pipelines. Pulumi simply stops treating infrastructure as a separate configuration domain and turns it into standard software, governed by the exact same rigor and tooling as your core application. If you would like to help support the show, you can find us by searching for DevStoriesEU on Patreon. I would like to take a moment to thank you for listening — it helps us a lot. Have a great one!
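To make the loop example concrete, here is a minimal TypeScript sketch, assuming the Pulumi AWS provider; the bucket names are hypothetical:

```typescript
import * as aws from "@pulumi/aws";

// Three identical storage buckets from a plain for-loop: no bespoke
// template syntax, just the language you already use every day.
const buckets: aws.s3.Bucket[] = [];
for (let i = 0; i < 3; i++) {
    buckets.push(new aws.s3.Bucket(`media-bucket-${i}`));
}

// Export the physical names so they show up in the deployment output.
export const bucketNames = buckets.map(b => b.bucket);
```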
2

Under the Hood: Pulumi's Architecture

3m 34s

Take a deep dive into the inner workings of a Pulumi deployment. We break down the roles of the language host, the deployment engine, and resource providers. You will understand exactly how a function call in your code becomes a physical resource in the cloud.

Hi, this is Alex from DEV STORIES DOT EU. Pulumi: Infrastructure as Code, episode 2 of 10. You instantiate a standard Python class in your editor, hit deploy, and seconds later a storage bucket exists in the cloud. How exactly does a local object in memory translate into a remote physical resource? Today, we are answering that by looking at Under the Hood: Pulumi's Architecture. There is a common misconception about how this tool works. When you write a deployment script, your code does not communicate with cloud APIs. The environment running your Python, Node, or Go code has absolutely no idea how to talk to Amazon Web Services. Instead, Pulumi splits the deployment process across three distinct components: the Language Host, the Deployment Engine, and the Resource Providers. It starts with the Language Host. This component evaluates your program and extracts your intent. Consider a Python script where you declare an AWS S3 bucket by initializing a bucket object. The Python Language Host runs your script line by line. When it hits that bucket declaration, it does not make a network request to AWS. It simply sends a registration request to the Deployment Engine. It tells the engine that you want a bucket with specific properties to exist. The Language Host communicates this desired state over a local connection and hands your program placeholder outputs, so your script can keep evaluating while the engine does its work. The Deployment Engine receives this registration request. This component is the orchestrator. The engine knows nothing about Python syntax, and it knows nothing about the AWS API. Its only job is state management. It looks at the desired state sent by the Language Host and compares it against the last known actual state of your infrastructure. If the engine sees that this exact S3 bucket does not exist in the current state, it calculates a diff and determines that a create operation is required. To actually build the bucket, the engine hands the task off to the third component, the Resource Provider. Providers are standalone plugins downloaded for specific platforms, like AWS, Azure, or Kubernetes. The deployment engine sends an instruction to the AWS Resource Provider, telling it to create the bucket with the requested properties. Here is the key insight. The Resource Provider is the only component in this entire chain that knows how to speak to the cloud. It takes the generic creation command from the engine, translates it into the specific REST API calls required by AWS, and executes them over the network. Once AWS provisions the S3 bucket, it returns a physical resource ID. The Resource Provider catches this physical ID and hands it back to the Deployment Engine. The engine updates its internal state to record that the bucket now exists in the real world. Finally, the engine signals back to the Language Host that the resource is ready, passing along output properties like the bucket name or URL, which resolve the placeholder values your program has been holding. This strict separation of language evaluation, state orchestration, and cloud execution is what gives the architecture its flexibility. You can add a new programming language without touching the cloud providers, and add a new cloud provider without modifying the language hosts. That is your lot for this one. Catch you next time!
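The episode narrates a Python program; the same flow in TypeScript looks roughly like this sketch, with a hypothetical bucket name:

```typescript
import * as aws from "@pulumi/aws";

// Evaluated by the Language Host. This constructor call never talks to
// AWS directly; it sends a registration request for the desired bucket
// to the Deployment Engine and returns right away.
const logs = new aws.s3.Bucket("log-bucket");

// "bucket" is an Output: a placeholder the engine resolves once the AWS
// Resource Provider has made the real API calls and AWS has returned
// the physical resource's properties.
export const bucketName = logs.bucket;
```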
3

Hello Azure: Creating Your First Project

4m 05s

Kickstart your infrastructure journey by creating a Pulumi project targeting Microsoft Azure. We walk through the CLI setup process and examine the auto-generated files. You will learn how to bootstrap a clean, ready-to-deploy cloud project in seconds.

Hi, this is Alex from DEV STORIES DOT EU. Pulumi: Infrastructure as Code, episode 3 of 10. Setting up a cloud infrastructure project used to mean hours wrestling with boilerplate files and folder structures before you could write a single line of logic. Today, you can scaffold a complete environment in seconds. We are focusing on Hello Azure: Creating Your First Project to see exactly how that happens. A common misconception is that running the project creation command immediately builds infrastructure in the cloud. It does not. The command we are discussing only generates local files and prepares your state tracking. No actual Azure resources are created until you explicitly execute a deployment command later. To bootstrap a new project, you use the command line interface. First, you create an empty directory and move into it. Then, you run the command pulumi new followed by a template name. For Azure, this template is usually azure, a dash, and your programming language of choice, such as azure dash typescript or azure dash python. When you execute this, the interface becomes interactive. It walks you through a series of prompts to configure your environment. First, it asks for a project name, which defaults to your directory name. Next, it asks for a project description. Then it asks for a stack name. A stack is an isolated instance of your project, usually representing an environment like development or production. The default stack is named dev. Finally, the Azure templates prompt for an Azure location, like WestUS, which will be saved as your default deployment region. Once you answer the prompts, the tool downloads the template, installs the necessary language dependencies, and creates a handful of files in your directory. Here is the key insight. The most important file generated is Pulumi dot yaml. This is your core project file. It defines the project name, the runtime it uses, and the description. It essentially tells the system how to execute your code. You will also see a file named Pulumi dot dev dot yaml if you accepted the default stack name. This secondary file stores the configuration values specific to that stack, including the Azure region you just selected. Alongside these, you get your standard dependency files, which vary depending on your chosen language, and your entrypoint file. Inside the entrypoint file, the default Azure template provides a working example of an Azure storage account. The code logic flows in three clear steps. First, it imports the Azure Native package. Second, it declares a new Azure Resource Group, giving it a logical name for the state file to track. Third, it declares an Azure Storage Account. This is where it gets interesting. Instead of hardcoding the resource group name, the storage account takes the name property directly from the resource group object created in the previous step. This creates an implicit dependency. The system now knows it must finish creating the resource group before it attempts to create the storage account. The template also automatically fills in required arguments for the storage account, like the account replication type and the account tier. At the very end of the file, the code exports a value. In this scaffolded example, it exports the primary storage key of the newly defined storage account. When you eventually deploy, this exported value will be printed directly to your console, making it easy to retrieve connection strings or endpoints without logging into the Azure portal.
You go from an empty folder to a fully wired infrastructure program simply by answering a few prompts, giving you a solid, syntax-correct foundation to start building your own architecture. Thanks for listening, happy coding everyone!
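For reference, the generated entrypoint looks roughly like the TypeScript sketch below; this mirrors the azure-typescript template as described in the episode, though the exact code the CLI emits may differ:

```typescript
import * as azure from "@pulumi/azure-native";

// Step 1: a resource group, tracked in state under its logical name.
const resourceGroup = new azure.resources.ResourceGroup("resourceGroup");

// Step 2: a storage account that takes its group name from the resource
// group object, creating the implicit dependency described above.
const storageAccount = new azure.storage.StorageAccount("sa", {
    resourceGroupName: resourceGroup.name,
    sku: { name: azure.storage.SkuName.Standard_LRS },
    kind: azure.storage.Kind.StorageV2,
});

// Step 3: look up the account keys and export the primary one, so it is
// printed to the console after a deployment.
const storageAccountKeys = azure.storage.listStorageAccountKeysOutput({
    resourceGroupName: resourceGroup.name,
    accountName: storageAccount.name,
});
export const primaryStorageKey = storageAccountKeys.keys[0].value;
```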
4

Projects and Paths: Structuring Your Code

3m 53s

Understand the anatomy of a Pulumi Project and how to correctly reference local files. We explore the Pulumi.yaml file and the critical difference between absolute and project-relative paths. You will learn how to ensure your code deploys cleanly across different machines and CI pipelines.

Hi, this is Alex from DEV STORIES DOT EU. Pulumi: Infrastructure as Code, episode 4 of 10. Your deployment script runs perfectly on your laptop. You push your code, your automated pipeline triggers, and the build immediately fails because it cannot find a source file. Nothing in the code changed, but the directory where the command was executed shifted slightly. This is a classic pathing trap, and avoiding it requires understanding Projects and Paths: Structuring Your Code. At its core, a Pulumi project is simply a folder containing a file named Pulumi.yaml. This file is the anchor. It tells the command line interface that this specific directory holds your infrastructure logic. Inside the Pulumi.yaml file, you declare metadata like the project name, a description, and the runtime language. The runtime property is what decides how your code actually executes. If you specify Python, the runtime looks for a main Python file in that directory. If you specify Node, it looks for the entrypoint defined in your package file, which is usually an index file. If you prefer to keep your source code in a subfolder rather than the main project directory, you can explicitly override this behavior by setting the main property in your Pulumi.yaml file to point to that specific subfolder. Now, let us look at the common confusion regarding file paths. Suppose you are building a Docker container image as part of your infrastructure, and your Dockerfile sits in a subdirectory called app, located right next to your Pulumi code. Engineers often pass an absolute path from their local machine to tell Pulumi where the Dockerfile is. But an absolute path includes your personal user directory. When a teammate pulls the code and runs an update, the engine sees a different absolute path on their machine. It registers this as a structural change to the resource, creating an unnecessary drift in your infrastructure state. To fix this, you might switch to a standard relative path, like dot slash app. Standard relative paths rely entirely on the current working directory of the terminal executing the code. If your continuous integration system runs the command from one directory higher in the repository, the relative path resolves incorrectly, and the deployment crashes. Here is the key insight. You need paths that are completely independent of the machine they run on, and independent of where the user typed the execution command. You need project-relative paths. Pulumi provides a built-in function to retrieve the exact location of the Pulumi.yaml file during execution. Depending on your programming language, this function is typically called get root directory or something similar. When you call this function, the runtime returns the absolute path to the folder containing your Pulumi.yaml file. Instead of hardcoding a path to your Dockerfile, you construct the path dynamically. You take the result of the root directory function and append your app subdirectory to it. Because this function evaluates dynamically on every execution, the resulting path is always perfectly tailored to the environment running the code. Your local machine, your teammate's laptop, and the remote build server will all generate the correct absolute path for their specific file systems. The file path resolves consistently every time, and the engine detects zero changes to the resource definition. Your infrastructure code should never care where it lives on a hard drive. 
Always anchor your file assets to the Pulumi root directory function, ensuring your project remains completely portable across any environment. That is all for this one. Thanks for listening, and keep building!
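Here is a short sketch of the idea, assuming the Pulumi Docker provider and a hypothetical registry. It anchors the build context to the module's own directory, so the path resolves identically no matter where the command is run; newer SDK versions also expose the root-directory helper mentioned in the episode.

```typescript
import * as path from "path";
import * as docker from "@pulumi/docker";

// This module lives inside the project, so a path built from its own
// directory no longer depends on the terminal's working directory.
const appDir = path.join(__dirname, "app");

const image = new docker.Image("web-image", {
    imageName: "registry.example.com/web:latest", // hypothetical registry
    build: { context: appDir },                   // project-anchored path
});

export const imageName = image.imageName;
```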
5

Stacks: Managing Environments

3m 50s

Discover how to safely manage multiple environments like Development, Staging, and Production. We introduce Stacks and how they isolate deployment state. You will learn how to share data between environments using Stack References.

Hi, this is Alex from DEV STORIES DOT EU. Pulumi: Infrastructure as Code, episode 5 of 10. Copy-pasting your infrastructure code to create a staging environment is a recipe for drift and disaster. You end up maintaining multiple folders of nearly identical files, and eventually, somebody forgets to update production. The solution is treating your environments as isolated instances of a single codebase using Stacks. Before going further, let us clear up a common point of confusion. A Project in Pulumi is simply a directory containing your source code. It is just the instructions. A Stack is an active deployment instance of that code. You write the Project once, and you deploy it multiple times as different Stacks. Stacks let you manage different environments, like development, staging, and production, without duplicating any code. When you run a Pulumi update, it applies the code in your Project to whichever Stack is currently active. Each Stack maintains its own isolated state file, tracking only the resources created for that specific environment. Managing these environments happens directly in your terminal. You create a new environment by running the stack initialize command, giving it a name like dev or prod. Pulumi registers this new instance and creates a fresh state for it. To switch context, you use the stack select command. The Pulumi CLI remembers which stack is active. If you select the dev stack and run an update, Pulumi only looks at the development state. It provisions or modifies the development infrastructure, leaving your staging and production environments completely untouched. That covers deploying isolated environments. But what happens when these environments need to communicate? Sometimes, one stack relies on information generated by another. You might have one project managing core infrastructure and a totally separate project handling application code. Let us say you have an infrastructure project that provisions a Kubernetes cluster. You deploy this as a stack called base-infra-prod. During deployment, the cluster generates a dynamic connection string. Now, you have a second project for a microservice that needs to deploy into that exact cluster. You do not want to hardcode the connection string, and you do not want to merge both projects into one massive, slow-moving state file. This is where it gets interesting. You can securely bridge these deployments using a Stack Reference. A Stack Reference allows one stack to read the exported outputs of another stack. To set this up, your Kubernetes cluster program must explicitly export the connection string at the end of its run. An export is just a variable that Pulumi saves into the stack state specifically so it can be read from the outside. Then, over in your microservice program, you create a Stack Reference object. You pass it the name of the infrastructure stack you want to read from. Next, you call a get output method on that reference, asking for the connection string by its exported name. Your microservice can now use that value to configure its deployment. The microservice stack cannot modify the cluster stack. It can only read the specific values the cluster stack chose to export. This enforces a clean boundary. You can update and scale your core network infrastructure entirely independently of your application workloads, while safely passing the necessary connection details across the gap. 
By decoupling your code from the environment instance, you guarantee that every tier of your system goes through the exact same logic, removing the hidden risks of manual duplication. That is all for this one. Thanks for listening, and keep building!
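A minimal sketch of the microservice side, assuming a hypothetical organization and stack layout:

```typescript
import * as pulumi from "@pulumi/pulumi";

// CLI side, per the episode:
//   pulumi stack init dev      creates a fresh, isolated environment
//   pulumi stack select prod   switches the active environment

// A read-only reference to the infrastructure stack, addressed by its
// fully qualified organization/project/stack name (hypothetical here).
const infra = new pulumi.StackReference("acme/base-infra/prod");

// Read the value the cluster program explicitly exported. The result is
// an Output, usable anywhere a resource output is.
const connectionString = infra.getOutput("clusterConnectionString");
```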
6

The Building Blocks: Pulumi Resources

3m 47s

Dive into how cloud resources are represented and named in code. We compare Custom Resources with Component Resources and unravel the mystery of logical versus physical names. You will learn how auto-naming prevents global collisions and keeps your deployments safe.

Hi, this is Alex from DEV STORIES DOT EU. Pulumi: Infrastructure as Code, episode 6 of 10. You deploy your infrastructure code, the syntax is flawless, and it immediately fails because a storage bucket name is already taken. Or worse, you update a database instance, and your tool deletes the old one before creating the new one, causing a hard outage. You can solve both problems by understanding how your infrastructure tool handles identity. That is exactly what we are covering today: The Building Blocks: Pulumi Resources. In Pulumi, a resource is an object that represents a piece of infrastructure. There are two primary types you will work with. The first is a Custom Resource. This maps directly to a physical object managed by a cloud provider. When you declare a Custom Resource, Pulumi makes an API call to Amazon, Azure, or Google Cloud to create that exact object, like a virtual machine or a load balancer. The second type is a Component Resource. A Component Resource does not map to a single piece of cloud infrastructure. Instead, it is a logical container for other resources. You use Component Resources to build higher-level abstractions. For example, you might create a single Component Resource called Secure Web Server that internally provisions a virtual machine, a security group, and an IP address. The Component Resource itself just groups them together in your state file, making your code cleaner and easier to manage. Whether you are defining a Custom Resource or a Component Resource, every single one requires a name. This brings up a common source of frustration. People type a name into their code, deploy it, and then check their cloud console only to find their resource has a random string of characters attached to the end of its name. This is not a bug. It is a core feature of how Pulumi works, and you need to understand the difference between a logical name and a physical name. The logical name is the name you type into your code as an argument. Pulumi uses this logical name to track the resource inside its state file. It is how Pulumi knows that the database in your code today is the exact same database you deployed yesterday. The physical name is what the cloud provider actually calls the resource in its own system. By default, Pulumi takes your logical name, adds a random suffix, and uses that combined string as the physical name. This is called auto-naming. Here is the key insight. Auto-naming prevents global naming collisions and enables zero-downtime replacements. Think about provisioning multiple identical storage buckets in a loop using Azure. Azure requires storage account names to be globally unique across all customers. If you try to force a strict physical name, the second bucket in your loop will fail because the name is taken, or worse, someone else in the world might already own it. With auto-naming, you can derive each logical name from the loop index, like archive-bucket-0 and archive-bucket-1, because logical names still need to be unique within a stack. Pulumi will track each iteration logically while ensuring every bucket gets an effectively unique physical name in Azure. Auto-naming also protects your system uptime. If you make a change that forces a resource to be replaced, Pulumi creates the new resource first, verifies it works, and only then deletes the old one. If you override auto-naming and force a strict physical name, the cloud provider will not let two resources share the same name at the same time. Pulumi would be forced to delete your old resource first, causing downtime while the new one provisions.
If you want to help keep the show going, you can search for DevStoriesEU on Patreon. Keep your physical names flexible. Let the tool manage the random suffixes, because while your state file requires a logical name to maintain order, your production environment relies on physical flexibility to stay online. Thanks for listening. Take care, everyone.
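A sketch of the loop in TypeScript, assuming the Azure Native provider; each iteration gets its own logical name, while the physical names are left to auto-naming:

```typescript
import * as azure from "@pulumi/azure-native";

const group = new azure.resources.ResourceGroup("archive-rg");

// Distinct LOGICAL names (via the loop index) keep the state file in
// order. The PHYSICAL name gets a random suffix from auto-naming, which
// satisfies Azure's globally unique storage account name requirement.
for (let i = 0; i < 3; i++) {
    new azure.storage.StorageAccount(`archivebucket${i}`, {
        resourceGroupName: group.name,
        sku: { name: azure.storage.SkuName.Standard_LRS },
        kind: azure.storage.Kind.StorageV2,
    });
}
```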
7

Keeping Secrets: Configuration Management

3m 56s

Learn how to inject dynamic data and sensitive secrets into your infrastructure code. We cover the Pulumi CLI config commands, structured configuration, and native secret encryption. You will leave knowing how to secure API keys without exposing them in plaintext.

Hi, this is Alex from DEV STORIES DOT EU. Pulumi: Infrastructure as Code, episode 7 of 10. Hardcoding a database password in your infrastructure code is a guaranteed security failure. But manually injecting environment variables into every single deployment pipeline is brittle and hard to track. You need a way to tie specific, encrypted values to specific environments automatically. This is Keeping Secrets: Configuration Management. The core idea here is separating your code from your configuration. You want to write your infrastructure logic exactly once. Then, when you deploy to your development stack, the code provisions small instances. When you deploy to production, it provisions large instances and uses the production database credentials. Pulumi handles this through a built-in configuration system. A common point of confusion is how these values are actually stored. Setting a Pulumi configuration value does not set local operating system environment variables. Instead, it stores values directly in a file named Pulumi dot stack-name dot yaml. Because every stack gets its own distinct configuration file, your dev configuration and prod configuration live side by side in your repository, cleanly separated by file name. You add data to this file using the Pulumi command line interface. If you run the command pulumi config set frontendPort 8080, Pulumi writes that key-value pair directly into the yaml file for your currently active stack. To use that value in your infrastructure code, you instantiate a Config object. Then, you call a method like get or require on that object, passing the key name. The difference is straightforward. Calling get returns the value if it exists, or nothing if it does not. Calling require will throw an error and halt your deployment if the configuration key is missing. This is a great way to ensure a deployment never proceeds without a mandatory setting. You are not limited to simple strings. You can store and retrieve structured data, like a JSON block defining scaling parameters, and parse it directly into an object in your code. Now, what happens when that configuration value is highly sensitive? Suppose your application needs to connect to an external database, and you must pass the database password to your infrastructure. You absolutely cannot store this in plaintext in your yaml file, because that file gets committed to version control. This is where Pulumi secrets come in. You use the exact same command line interface, but you append a secret flag. You run pulumi config set dbPassword your-password dash dash secret. Pulumi encrypts the value before saving it to the yaml file. If someone looks at the file in your repository, they will only see a ciphertext string securely encrypted by your stack's encryption provider. In your code, you retrieve this securely by calling a specific secret method on your Config object, such as requireSecret. Here is the key insight. When you retrieve a secret this way, Pulumi wraps it in a special secret type. As this value flows through your infrastructure code and gets passed to resources, the Pulumi engine tracks it. It ensures the plaintext value is masked in your console output during a deployment, and it guarantees the value remains encrypted inside your Pulumi state file. Configuration allows you to write infrastructure code once and safely promote it across environments. 
Native secret encryption ensures that your sensitive credentials drive those deployments without ever leaking into your version control or your state files. Thanks for spending a few minutes with me. Until next time, take it easy.
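In code, the pattern from the episode looks roughly like this, with hypothetical key names:

```typescript
import * as pulumi from "@pulumi/pulumi";

const config = new pulumi.Config();

// Set beforehand with: pulumi config set frontendPort 8080
// require() halts the deployment if the key is missing.
const frontendPort = config.requireNumber("frontendPort");

// get() returns undefined instead of failing, so defaults are easy.
const logLevel = config.get("logLevel") ?? "info";

// Set with: pulumi config set dbPassword <value> --secret
// requireSecret returns an Output marked secret: masked in console
// output and kept encrypted in the state file.
const dbPassword = config.requireSecret("dbPassword");
```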
8

Scaling Up: Component Resources on Azure

4m 00s

Elevate your infrastructure by creating reusable components. We walk through building an Azure Static Website component that encapsulates multiple resources. You will learn the importance of parent-child relationships for clean infrastructure tracking.

Hi, this is Alex from DEV STORIES DOT EU. Pulumi: Infrastructure as Code, episode 8 of 10. You copy and paste a block of storage and networking code for the fifth time this week. Your infrastructure code is growing, but it is not getting smarter. Instead of repeating identical cloud configurations every time you need a standard setup, you can stamp them out with a single logical unit. That is the focus of this episode: Scaling Up: Component Resources on Azure. When you start using Pulumi, you declare raw resources. A resource group here, a storage account there. But as your system grows, deploying a standard piece of architecture requires provisioning the exact same set of primitives again and again. This violates the rule of not repeating yourself. Component Resources solve this by letting you encapsulate multiple physical cloud resources into one reusable abstraction. Think of a component resource as a custom class you define in your chosen programming language. Once defined, you instantiate it just like a built-in Pulumi resource. Consider a scenario where you frequently deploy static websites on Azure. A minimal setup requires an Azure Resource Group, a Storage Account configured for static website hosting, and a Blob object serving as the index document. Instead of writing those three definitions in your main program every single time, you create an Azure static website component. To build this, you define a new class that inherits from the Pulumi ComponentResource base class. Your class constructor takes a name, a set of arguments for customization, and standard resource options. The very first thing your constructor does is call the base class constructor. You provide it a unique type token, such as custom colon infrastructure colon static website, along with the name. This type token tells the engine how to track your new abstraction in the state file. Next, you define the actual Azure primitives inside your constructor. You declare the resource group. You declare the storage account inside that group. You upload the index blob to that account. Here is the key insight. When you create these internal resources, you must explicitly tell the engine that they belong to your new component. You do this by passing the component instance itself into the resource options under the parent property. Many engineers forget this step. If you omit the parent option, the child resources will provision successfully, but they will be treated as top-level resources. Your command line output will be a flat, confusing list. By setting the parent property to your component instance, the engine organizes the state tree. When you run an update, the interface visually nests the resource group, storage account, and blob directly under your custom website component. This keeps your state manageable and your output readable. Finally, your main program probably needs to know the web address of the newly created site. Inside your component, after defining the storage account, you map its primary web endpoint to a public property on your class. Then, you call a method named register outputs. This finalizes the initialization and records the address as an output of the component, so the rest of your program can read it, or export it so it prints to the console when the deployment finishes. In your main file, you no longer see the boilerplate. You simply instantiate your website component, pass it an index file, and deploy. The underlying resources are neatly managed behind the abstraction.
The true power of infrastructure as code is treating cloud architecture like software, and component resources are how you build a standard, reliable library for your team. That is all for this one. Thanks for listening, and keep building!
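Below is a condensed TypeScript sketch of such a component, assuming the Azure Native provider; the names and the exact set of child resources are illustrative rather than a canonical implementation:

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as azure from "@pulumi/azure-native";

interface StaticWebsiteArgs {
    indexContent: string;
}

class StaticWebsite extends pulumi.ComponentResource {
    public readonly endpoint: pulumi.Output<string>;

    constructor(name: string, args: StaticWebsiteArgs,
                opts?: pulumi.ComponentResourceOptions) {
        // The type token tells the engine how to track this abstraction.
        super("custom:infrastructure:StaticWebsite", name, {}, opts);

        // { parent: this } nests each child under the component.
        const group = new azure.resources.ResourceGroup(
            `${name}-rg`, {}, { parent: this });

        const account = new azure.storage.StorageAccount(`${name}sa`, {
            resourceGroupName: group.name,
            sku: { name: azure.storage.SkuName.Standard_LRS },
            kind: azure.storage.Kind.StorageV2,
        }, { parent: this });

        const website = new azure.storage.StorageAccountStaticWebsite(
            `${name}-web`, {
                resourceGroupName: group.name,
                accountName: account.name,
                indexDocument: "index.html",
            }, { parent: this });

        new azure.storage.Blob(`${name}-index`, {
            resourceGroupName: group.name,
            accountName: account.name,
            containerName: website.containerName,
            blobName: "index.html",
            source: new pulumi.asset.StringAsset(args.indexContent),
            contentType: "text/html",
        }, { parent: this });

        this.endpoint = account.primaryEndpoints.web;

        // Finalize initialization and record the component's outputs.
        this.registerOutputs({ endpoint: this.endpoint });
    }
}

// Usage: one declaration stamps out the whole standard setup.
const site = new StaticWebsite("marketing", {
    indexContent: "<html><body>Hello!</body></html>",
});
export const url = site.endpoint;
```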
9

Peaceful Coexistence: Reading Terraform State

3m 57s

Bridge the gap between legacy infrastructure and modern code. We explore how Pulumi can directly read existing Terraform state files. You will learn a powerful coexistence pattern that lets you adopt Pulumi incrementally without rewriting your entire stack.

Hi, this is Alex from DEV STORIES DOT EU. Pulumi: Infrastructure as Code, episode 9 of 10. You do not have to throw away years of existing code to start using a new infrastructure tool today. The dreaded massive system rewrite is a huge risk, and fortunately, it is entirely optional. The strategy that makes this possible is called Peaceful Coexistence: Reading Terraform State. A common misconception is that moving to Pulumi means you must migrate all your existing infrastructure at once. That is incorrect. Pulumi can natively read and depend on resources that are actively managed by Terraform. You can adopt new tools incrementally, side-by-side with your existing deployment pipelines. Consider a typical enterprise environment. Your company has a core AWS Virtual Private Cloud managed by a central network team using Terraform. You are a developer building a new application, and you want to use Pulumi to deploy Elastic Container Service tasks. Your containers must run inside that exact VPC. You do not want to rewrite the VPC code into Pulumi, and you certainly do not want to take over managing the underlying network. To handle this, you use the Pulumi Terraform provider. This provider includes a specific component designed to read state files, called a remote state reference. The process relies on how Terraform stores its execution data. First, the Terraform code managing the network must explicitly expose the data your new application needs. It does this using standard Terraform output blocks. The network team configures their code to output the VPC identifier and a list of private subnet identifiers. When Terraform applies its configuration, those outputs are written into the Terraform state file, which is typically stored remotely in a backend like an AWS S3 bucket or Terraform Cloud. Next, you move to your Pulumi program. You write code to instantiate the remote state reference. You provide this object with the exact same backend configuration details that Terraform uses to find its state file. This includes the backend type, the storage location, the region, and the specific state file key. When you execute your Pulumi deployment, the engine reaches out to that remote backend, opens the Terraform state file, and parses the available outputs. Here is the key insight. Pulumi treats the Terraform state as strictly read-only. It never modifies the Terraform state file, and it does not assume ownership of the network resources. It simply queries the current, known values of the infrastructure. Once Pulumi retrieves the VPC and subnet identifiers from the state, you treat those values like any other variable in your code. You pass them directly into the deployment logic for your new container cluster. Pulumi provisions your new containers seamlessly into the existing network. This architecture keeps responsibilities completely separate. The central team continues to manage the network lifecycle using Terraform. If they update a route table or add a tag, they use their standard workflow. If they modify an output, such as creating a new subnet, Pulumi will automatically read the updated state file during its next update and adjust your container deployment accordingly. Using remote state references creates a clean, one-way dependency boundary, allowing you to confidently build new systems with modern capabilities while relying on a stable foundation managed by legacy code. That is all for this one. Thanks for listening, and keep building!
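A sketch of the reading side, assuming the @pulumi/terraform package and a hypothetical S3 backend layout:

```typescript
import * as terraform from "@pulumi/terraform";

// A read-only handle on the network team's Terraform state. The bucket,
// key, and region below are hypothetical; they must match the backend
// configuration the Terraform code itself uses.
const networkState = new terraform.state.RemoteStateReference("network", {
    backendType: "s3",
    bucket: "acme-terraform-state",
    key: "network/terraform.tfstate",
    region: "eu-west-1",
});

// Values the Terraform code exposes through its output blocks. Pass
// these into your own resources; Pulumi never writes this state.
const vpcId = networkState.getOutput("vpc_id");
const privateSubnetIds = networkState.getOutput("private_subnet_ids");
```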
10

The Great Migration: Converting HCL to Pulumi

3m 38s

Take the final step by translating Terraform HCL into fully functional programming code. We examine the `pulumi convert` tool and discuss when and why to convert legacy configurations. You will learn how real languages unlock advanced unit testing for infrastructure.

Hi, this is Alex from DEV STORIES DOT EU. Pulumi: Infrastructure as Code, episode 10 of 10. Testing complex infrastructure logic is notoriously difficult. You write thousands of lines of configuration, but verifying if a specific combination of firewall rules actually behaves as intended before deployment often feels like guesswork. The Great Migration: Converting HCL to Pulumi is how you fix this. Moving from Terraform to Pulumi is not a simple syntactic find-and-replace. It is not about changing curly braces to parentheses. It is about taking static configuration and transforming it into an executable program, giving you immediate access to native loops, functions, and standard testing frameworks. Think about a highly complex, repetitive Terraform security group configuration. You likely have dozens of overlapping port ranges, specific IP allowlists, and heavy block definitions. In HCL, managing this requires rigid structures, and validating the logic requires running a plan against a live cloud state. The transition starts with the pulumi convert command. You navigate to a directory containing your existing Terraform files and run this command, specifying your target language, such as TypeScript or Python. The tool parses your HCL source code, reads your variables, main resources, and outputs, and generates an equivalent Pulumi program. It translates the declarative intent of the HCL into the imperative structure of your chosen programming language. Once that code is generated, the strategic benefits of the migration become clear. You can now refactor that massive list of security group rules into a clean array of data objects, or pull them from an external configuration file. You can iterate over that array to generate firewall rules dynamically using standard TypeScript or Python loops. Here is the key insight. Because your infrastructure is now written in a general-purpose language, you can test it exactly like application code. You can write a unit test using standard frameworks like Jest or PyTest. You create a test case that mocks the Pulumi runtime and asserts that your security group builder function never accidentally exposes port twenty-two to the entire internet. You run these tests in milliseconds, completely offline, catching logical errors before the infrastructure plan phase even begins. This shift unlocks deep language integration. Your infrastructure code can share standard libraries, validation logic, and typing definitions directly with your application code. You gain access to the mature ecosystems of package managers like NPM or pip, allowing you to package and distribute infrastructure patterns as easily as any other software library. The conversion command does the heavy lifting of translating your current state, but the true migration happens when you shift your mindset from writing static files to engineering testable systems. The biggest advantage of converting your code is moving from simply configuring infrastructure to truly programming it. I highly encourage you to read the official Pulumi documentation, take a small Terraform module, and try running the conversion yourself to see the output. If you have ideas for what technical topics we should cover in our next series, visit devstories dot eu and let us know. That is all for this one. Thanks for listening, and keep building!
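As a starting point, the conversion itself is one command, and the testing payoff looks roughly like the Jest-style sketch below; the converted module path and its export names are hypothetical:

```typescript
import * as pulumi from "@pulumi/pulumi";

// Run inside the Terraform directory to generate the program:
//   pulumi convert --from terraform --language typescript --out ./converted

// Mocks must be installed before the program under test is imported.
pulumi.runtime.setMocks({
    newResource: (args: pulumi.runtime.MockResourceArgs) => ({
        id: `${args.name}-test-id`,
        state: args.inputs,
    }),
    call: (args: pulumi.runtime.MockCallArgs) => args.inputs,
});

// Resolve a Pulumi Output into a plain promise for assertions.
function promiseOf<T>(output: pulumi.Output<T>): Promise<T> {
    return new Promise(resolve => output.apply(resolve));
}

describe("converted security group", () => {
    it("never exposes port 22 to the entire internet", async () => {
        const infra = await import("./converted");
        const ingress = await promiseOf(infra.webSecurityGroup.ingress);
        for (const rule of ingress ?? []) {
            if (rule.fromPort <= 22 && rule.toPort >= 22) {
                expect(rule.cidrBlocks ?? []).not.toContain("0.0.0.0/0");
            }
        }
    });
});
```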