Season 28 · 13 Episodes · 48 min · 2026

Terraform Fundamentals

2026 edition, covering Terraform v1.14. A comprehensive guide to building, changing, and versioning infrastructure safely and efficiently with Terraform.

Infrastructure as Code · DevOps

Episodes

1

The Infrastructure as Code Paradigm

3m 53s

We explore why Terraform has become the industry standard for infrastructure provisioning. Learn the difference between declarative and imperative approaches, and why immutable infrastructure matters for your enterprise.

Hi, this is Alex from DEV STORIES DOT EU. Terraform Fundamentals, episode 1 of 13.

Provisioning servers used to mean filing a helpdesk ticket and waiting two weeks for someone to click through a cloud console. Today, the exact same process is a simple pull request that safely deploys in minutes. That shift is entirely driven by the Infrastructure as Code paradigm, and HashiCorp Terraform is the primary engine powering it.

Infrastructure as Code is exactly what it sounds like. You manage your databases, virtual networks, and compute instances using plain text files instead of manual clicks. These files can be version-controlled, reviewed by peers, and tested automatically.

To understand why this is powerful, consider a system administrator who needs a new Azure virtual machine. Using an imperative workflow, they might write a complex PowerShell script. That script has to explicitly define every action in order. It tells the system to log in, check if a network exists, create it if it does not, allocate a public IP, and finally boot the server. If the script fails at step four, you are left with partially built infrastructure. Running the script a second time often causes errors because some resources already exist.

Terraform uses a declarative approach instead. You do not write the sequence of steps. You simply define the desired end-state. You write a configuration file stating you want an Azure virtual machine attached to a specific network. Terraform compares your requested state against what currently exists in the cloud. It then calculates the exact sequence of API calls needed to bridge that gap. If the network already exists, Terraform leaves it alone and only builds the server.

Here is the key insight. Many engineers confuse infrastructure provisioning tools like Terraform with configuration management tools like Chef, Puppet, or Ansible. They are not the same thing. Terraform builds the house. Configuration management tools arrange the furniture. Terraform provisions the raw cloud resources, like the load balancers and the virtual machines. Ansible or Chef then log into those machines to install software packages and start background services.

Configuration management tools were fundamentally designed for mutable infrastructure. They expect a server to live a long time and undergo constant patching and tweaking. Terraform pushes you toward immutable infrastructure. If an environment needs a different operating system version, Terraform does not log in and run an upgrade script. It destroys the old server and provisions a brand new one with the correct image. This strict approach guarantees your code always matches reality, completely eliminating configuration drift.

This workflow is particularly valuable because it is platform agnostic. An enterprise rarely uses just one vendor. You might run your primary workloads on Azure, handle your DNS through Cloudflare, and manage incident routing in PagerDuty. Terraform manages all of these through a provider model. A provider is simply a plugin that understands a specific vendor API. By using multiple providers, you can build a single configuration that spins up an Azure database, configures the necessary DNS records, and sets up the monitoring alerts simultaneously. The underlying API calls change, but your workflow stays exactly the same.

If you want to help keep these episodes coming, you can search for DevStoriesEU on Patreon to support the show.
A tool that merely automates tasks makes you faster, but a tool that enforces a declarative state makes your entire system predictable. I would like to take a moment to thank you for listening — it helps us a lot. Have a great one!
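
To make the declarative idea concrete, here is a minimal sketch of the kind of end-state file described above; the resource names and region are illustrative, not from the episode:

```hcl
# One desired end-state, not a sequence of steps. If this network already
# exists, Terraform leaves it alone and creates only what is missing.
resource "azurerm_virtual_network" "app" {
  name                = "vnet-app"             # illustrative name
  address_space       = ["10.0.0.0/16"]
  location            = "westeurope"           # illustrative region
  resource_group_name = "rg-enterprise-prod"
}
```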
2

The Core Terraform Workflow

3m 17s

Master the fundamental three-step process that powers all Terraform deployments: Write, Plan, and Apply. Discover how the execution plan prevents catastrophic deployment mistakes.

Hi, this is Alex from DEV STORIES DOT EU. Terraform Fundamentals, episode 2 of 13.

You hit deploy, watch the console, and cross your fingers, hoping you did not just take down the production environment. That blind leap of faith is a massive risk, and it is exactly what the core Terraform workflow eliminates. The workflow consists of three strict steps: write, plan, and apply. Each step operates independently to translate your requirements into running resources.

You start with the write phase. You create configuration files that declare the exact infrastructure you want. You are not writing a procedural script that says how to build a server step by step. You are describing the final, desired state of your environment. You save these files, and your code becomes the single source of truth for what should exist.

New users sometimes think the next step is just executing those files sequentially from top to bottom. That is not how this tool operates. It does not blindly run commands. Instead, it moves to the plan phase.

The plan phase is the absolute superpower of this workflow. When you run the plan command, the tool evaluates your new configuration against the current, actual state of your infrastructure. It calculates a precise diff between reality and your desired code. Here is the key insight. The tool reads this diff and generates a highly detailed dry run of every single action it intends to take.

Think of an engineer who needs to add an Azure Load Balancer to a live environment. They update their configuration files and run the plan command. The system connects to the cloud provider, checks the active state, and prints a strict summary. The engineer reads the output and sees one resource to add, zero to change, and zero to destroy. The output details the exact properties of the new load balancer. The engineer validates this dry run. They know the change is safe before a single API call modifies the real infrastructure. There is no guessing.

After validating the output, you proceed to the apply phase. This is the moment of execution. The tool takes the exact diff calculated during the plan phase and builds a strict execution graph. It maps out all dependencies. If your new load balancer requires a dedicated public IP address, the execution graph ensures the IP is provisioned first. The system waits for the IP to become available, grabs its new address, and only then creates the load balancer. It handles the timing and the ordering automatically.

Because the apply phase strictly follows the approved execution plan, you never have resources spinning up in the wrong order or accidentally deleting existing databases due to a typo. The workflow forces you to separate the intention from the execution. The most powerful aspect of this process is not the automated provisioning itself. It is the complete elimination of operational anxiety through predictable, reviewable dry runs.

That is all for this one. Thanks for listening, and keep building!
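
As a rough sketch of the three phases, with a public IP standing in for the full load balancer setup from the episode (names are illustrative):

```hcl
# 1. Write: declare the desired end-state in a .tf file.
resource "azurerm_public_ip" "lb" {
  name                = "pip-lb-prod"          # illustrative name
  resource_group_name = "rg-enterprise-prod"
  location            = "westeurope"
  allocation_method   = "Static"
}

# 2. Plan: preview the diff without touching anything.
#    $ terraform plan
#    Plan: 1 to add, 0 to change, 0 to destroy.
#
# 3. Apply: execute the approved plan in dependency order.
#    $ terraform apply
```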
3

Providers and Connecting to Azure

4m 06s

Terraform doesn't know how to talk to Azure out of the box. We break down how Providers act as the translation layer between Terraform core and external cloud APIs.

Hi, this is Alex from DEV STORIES DOT EU. Terraform Fundamentals, episode 3 of 13.

Out of the box, the core Terraform application does not actually know how to build an Azure Virtual Machine or create a database. It is strictly an engine that evaluates code, determines dependencies, and manages state. To do any real work, it relies on thousands of downloadable translation plugins. This brings us to providers and how you connect your code to Azure.

A common point of confusion is thinking that the Terraform application itself contains the logic for every cloud platform. It does not. The binary you install on your machine is completely infrastructure agnostic. It understands the configuration language and the deployment workflow, but it has zero hardcoded knowledge of any specific cloud API.

Instead, Terraform uses plugins called providers. A provider is a standalone piece of software that understands the endpoints, authentication methods, and resource behaviors for a specific platform. There is a provider for Microsoft Azure, one for Amazon Web Services, and others for software-as-a-service platforms like GitHub or Datadog. These plugins are published and hosted in a central directory called the Terraform Registry.

When you start a new infrastructure project, you must explicitly declare which providers your code will use. You also specify the exact version of the provider you want. Locking in a version is a critical practice. It ensures that if the cloud API changes or the provider plugin receives a major update tomorrow, your deployment will not unexpectedly break. You control exactly when to upgrade.

Simply declaring the provider in your text file does not install it. You must run an initialization command in your terminal. This initialization step actively reaches out to the Terraform Registry, downloads the required provider plugins, and caches them in a hidden local directory. Until you run this step, your Terraform project cannot interact with any external system.

Let us look at setting this up for a new project connecting to an enterprise Azure subscription. You will use the official Azure Resource Manager provider, referred to as azurerm. After declaring the version you need, you must configure the provider's specific behavior. Every provider has its own configuration requirements based on the underlying API. For Azure, the plugin requires you to explicitly declare how it should handle certain resource behaviors. For example, you must tell the provider whether it should permanently delete attached storage disks when a virtual machine is destroyed. The provider demands this configuration up front so that destructive actions are always intentional.

Once initialized and configured, the provider acts as a plug-and-play translation layer. When you execute your code, the core Terraform engine calculates the difference between your current infrastructure and your desired state. It then passes those generic instructions to the Azure provider plugin. The plugin takes over, translating your intent into authenticated HTTP requests sent directly to the Azure Resource Manager API. The plugin waits for Azure to finish creating or modifying the resources, translates the API response back into a format Terraform understands, and hands the final data back to the core engine. Terraform itself never talks to your cloud environment directly; it delegates every single API call to the provider plugin, making the provider the actual bridge between your code and your live infrastructure.
That is all for this one. Thanks for listening, and keep building!
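
A minimal sketch of the declaration and configuration just described; the version pin is illustrative:

```hcl
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 4.0"   # illustrative pin: you decide when to upgrade
    }
  }
}

provider "azurerm" {
  features {
    # Destructive behavior must be opted into explicitly.
    virtual_machine {
      delete_os_disk_on_deletion = true
    }
  }
}

# Nothing is downloaded until you initialize the project:
#   $ terraform init
```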
4

Declaring Infrastructure with Resources

3m 44s

The resource block is the fundamental building block of any Terraform configuration. Learn how to write code that provisions a real-world Azure Resource Group.

Hi, this is Alex from DEV STORIES DOT EU. Terraform Fundamentals, episode 4 of 13.

You look at your code and see a database labeled primary, but when you log into your cloud console, that exact same database is named something completely different, like db-cluster-node-one. If you want something to physically exist in your cloud environment, you have to ask for it using Terraform's most important construct, but you also have to understand how Terraform names things. Today we are talking about declaring infrastructure with resources.

Resources are the fundamental components of your infrastructure. Every time you want to create, update, or destroy an object, you write a resource block. This object could be a physical component like a compute instance or a storage drive, or a logical construct like a DNS record or a role assignment. The resource block is how you translate the idea of an infrastructure component into an API request that your cloud provider actually understands.

When you declare a resource block, you define its identity using two distinct labels. First, you declare the resource type. This tells Terraform exactly what kind of object you want to build and which provider will build it. The type always begins with the provider namespace, like Azure or AWS, followed by the specific service. You are essentially telling Terraform you need an Azure resource group, or you need an AWS storage bucket.

Immediately after the resource type, you define the local identifier. This is simply a nickname. It only exists within your Terraform configuration. You use this nickname to reference the object from other parts of your code. It has absolutely no effect on what your cloud provider sees.

This brings us to the configuration block itself. Once you declare the type and the local nickname, you define the arguments for that resource. Arguments are the specific settings and values that configure the object. This is where you pass the actual parameters required by the cloud provider API.

To put this together, imagine you are deploying an Azure resource group. You declare the resource type for an Azure resource group, and you give it a local nickname like main. Within the configuration block, you provide the actual arguments. You define a name argument and set it to rg-enterprise-prod, and you define a location argument setting it to a specific region. When you run your deployment, Terraform uses the resource type to know which provider API to call. It uses your arguments to tell the API exactly how to configure the resource. In the Azure portal, your resource group will appear as rg-enterprise-prod. Azure knows nothing about the nickname main. But back in your code, any time you need to retrieve the ID or location of that resource group to pass to a virtual machine or a database, you simply ask Terraform for the data held by the local resource named main.

Every resource type has its own unique set of arguments. Some are mandatory, like the location for a resource group or the instance size for a virtual server. Others are optional, like tagging or specific routing rules. The provider documentation dictates exactly which arguments you can use. You just map the values you need into the resource block, and Terraform handles the translation into the API calls that actually provision your infrastructure.

Your local identifier belongs to Terraform to keep your code readable, but your arguments belong to the cloud to define reality. Thanks for tuning in. Until next time!
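
The episode's resource group written out as code; the region is illustrative:

```hcl
# Type: azurerm_resource_group. Local nickname: "main" (Terraform-only).
resource "azurerm_resource_group" "main" {
  name     = "rg-enterprise-prod"   # what Azure actually sees
  location = "westeurope"           # illustrative region
}

# Elsewhere in your code, reference it by its nickname:
#   azurerm_resource_group.main.id
#   azurerm_resource_group.main.location
```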
5

Resource Relationships and Dependencies

3m 50s

Infrastructure components rely on one another. We explain how Terraform automatically calculates execution order using implicit dependencies, and when to force ordering with explicit dependencies.

Hi, this is Alex from DEV STORIES DOT EU. Terraform Fundamentals, episode 5 of 13.

If a database must exist before a web server can connect to it, how does an infrastructure tool know which to build first without sequential scripts? If you are coming from an imperative scripting background like Bash or Python, you might look for ways to enforce execution line by line. But in Terraform, the order of the lines in your file does not matter at all. The only thing that matters is Resource Relationships and Dependencies.

Terraform is a highly intelligent execution engine. Before it creates anything, it analyzes your configuration and builds a directed acyclic graph. This graph maps out exactly how every piece of infrastructure connects to the others. It uses this map to determine the most efficient creation order, building unrelated resources in parallel and sequencing the ones that rely on each other. You never write manual wait or sleep scripts.

Most of the time, Terraform figures out this sequencing automatically through implicit dependencies. An implicit dependency happens when one resource references an attribute of another resource. Consider a scenario where you are creating an Azure Virtual Network and a Subnet. A subnet cannot exist without a virtual network. In your configuration, you define the virtual network block, and then you define the subnet block. Inside the subnet block, you set the virtual network name argument to point directly at the name attribute of the virtual network resource you just defined. When Terraform parses this, it sees the connection. It inherently knows it must finish creating the virtual network and fetch its generated name before it can even start creating the subnet. You do not have to tell it what to do. You just pass the data, and Terraform handles the timing.

This is the part that matters. Always use implicit dependencies if you can. By referencing attributes directly, you give Terraform the exact information it needs to optimize your deployments safely.

Sometimes the relationship between two resources is not visible in the code. You might have a situation where a resource requires another resource to be fully active, but it does not actually need to pull any data from it. Consider deploying a virtual machine that runs an application, while also provisioning a managed database. The application needs the database to finish booting before it can start. However, the virtual machine configuration does not reference any attributes from the database resource. Because there is no data link, Terraform assumes these two resources are entirely unrelated and will try to build them at the same time. The application will boot, look for a database that does not exist yet, and fail.

To fix this, you use explicit dependencies. Terraform provides a meta-argument called depends on. You add this argument to the virtual machine block and pass it a list containing the database resource. This explicitly tells Terraform to pause the creation of the virtual machine until the database is completely finished provisioning.

You should treat explicit dependencies as a last resort. They force Terraform to be more conservative in its execution, which slows down your deployments. They can also make your configuration harder to maintain over time, as the actual reason for the dependency is not always obvious to the next engineer reading the file.
Letting the execution graph do the heavy lifting is what separates declarative infrastructure from procedural scripts, so stop trying to micro-manage the execution order and let the data flow dictate the sequence. Thanks for spending a few minutes with me. Until next time, take it easy.
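
Both dependency styles sketched in code, reusing the resource group from the previous episode; the database reference in the closing comment is hypothetical:

```hcl
resource "azurerm_virtual_network" "main" {
  name                = "vnet-main"
  address_space       = ["10.0.0.0/16"]
  location            = azurerm_resource_group.main.location
  resource_group_name = azurerm_resource_group.main.name
}

resource "azurerm_subnet" "app" {
  name                 = "snet-app"
  resource_group_name  = azurerm_resource_group.main.name
  # Implicit dependency: referencing the attribute forces the ordering.
  virtual_network_name = azurerm_virtual_network.main.name
  address_prefixes     = ["10.0.1.0/24"]
}

# Explicit dependency, last resort: no data flows, but ordering is required.
#   depends_on = [azurerm_mssql_database.app]
```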
6

Understanding Terraform State

3m 38s

State is the absolute source of truth for Terraform. Learn why the state file is mandatory, how it maps your code to the real world, and why you should never edit it manually.

Hi, this is Alex from DEV STORIES DOT EU. Terraform Fundamentals, episode 6 of 13.

You write your infrastructure code, run apply, and your servers spin up perfectly. But if you run apply again five minutes later, nothing happens. Terraform knows the job is already done. Unlike simple automation scripts that blindly execute commands, Terraform has a memory. Without it, it would be entirely blind to the infrastructure it just finished building. That memory is called Terraform State.

When you run Terraform, it creates a local file called terraform dot tfstate. Many engineers initially view this file as a nuisance. It feels like an extra artifact to manage and secure. But this file is the core of how Terraform operates. Terraform requires a mechanism to map the logical resources defined in your configuration files to the physical remote objects living in your cloud environment.

A common misconception is that Terraform just looks at cloud provider tags to figure out what it manages. You might think it tags a server during creation, and then later searches the cloud for that specific tag to know what to update. This approach falls apart quickly. Not all cloud resources support tags. Furthermore, someone could manually edit or delete a tag, breaking the connection. Finally, searching a massive enterprise cloud account for specific tags every time you run a plan would be incredibly slow.

Because tags are unreliable, Terraform uses a dedicated, highly structured state file. The state file acts as a private mapping database. When you declare a resource in your code, Terraform creates it via the provider API. The provider returns a unique physical identifier for that newly created object. Terraform takes your logical resource name from the code, pairs it with that unique cloud ID, and writes the pair into the state file.

Take a practical scenario. You decide to change the size of an Azure virtual machine in your code. When you run apply, Terraform does not guess which machine to modify. It checks the state file, looks up your logical resource name, and retrieves the exact Azure instance ID. It then sends an update request targeting that specific ID. Without the state file, Terraform would not know if it should update an existing machine or just create a duplicate one.

Beyond mapping, state handles metadata tracking. Terraform must know the exact order in which resources were created so it can update or destroy them safely. If a web server requires a database, that dependency is written in your code. However, if you delete that entire block of code to tear down the environment, Terraform can no longer read the code to find the dependency. The state file retains a copy of this historical metadata, ensuring Terraform destroys the web server before the database.

State also provides crucial performance caching. Querying a cloud provider API to gather the current status of thousands of network rules, storage buckets, and compute nodes takes a significant amount of time. Cloud providers also enforce strict API rate limits. The state file acts as a cache of your infrastructure attributes. By referencing this cache, Terraform minimizes the number of slow, expensive API calls required to calculate a plan.

Here is the key insight. Your configuration describes what you want, the cloud provider holds what actually exists, and the state file is the only definitive bridge connecting the two. That is your lot for this one. Catch you next time!
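
For orientation, a heavily abridged sketch of what that mapping looks like inside terraform.tfstate; real files carry more fields, and none of it should ever be edited by hand:

```json
{
  "version": 4,
  "resources": [
    {
      "mode": "managed",
      "type": "azurerm_resource_group",
      "name": "main",
      "instances": [
        {
          "attributes": {
            "id": "/subscriptions/.../resourceGroups/rg-enterprise-prod",
            "location": "westeurope",
            "name": "rg-enterprise-prod"
          }
        }
      ]
    }
  ]
}
```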
7

Parameterizing with Input Variables

3m 37s

Hardcoding infrastructure values doesn't scale. Discover how to use input variables to create dynamic, reusable configurations across different enterprise environments.

Hi, this is Alex from DEV STORIES DOT EU. Terraform Fundamentals, episode 7 of 13.

Hardcoding values is fine for a quick test, but what happens when you need to deploy that exact same setup to both a development environment and production? You cannot duplicate and rewrite your code for every new deployment. The mechanism that solves this is Parameterizing with Input Variables.

Input variables serve as parameters for a Terraform module. They allow you to customize aspects of your infrastructure without altering the underlying source code. This is the exact step where your code graduates from a proof of concept to a production-ready template. By using variables, you write the configuration once and reuse it everywhere.

Before getting into the mechanics, we need to clear up a common confusion. There is a strict difference between declaring a variable and assigning a value to it. Declaring a variable simply tells Terraform that a parameter exists and defines its rules. Assigning a value is the act of actually giving it data during a deployment.

You declare a variable using a variable block followed by a unique name. Inside this block, you define the expected data type. Terraform supports several types. A string is just regular text. A list is an ordered sequence of values, like multiple availability zones. A map is a collection of key-value pairs, which is perfect for applying standard resource tags.

Inside the variable block, you can also set a default value. If you provide a default, the variable becomes optional. If the user does not supply a value during deployment, Terraform just uses the default. If you do not set a default, Terraform will force the user to provide a value before it proceeds.

Let us anchor this to a practical scenario. Suppose you have an Azure deployment. Right now, your configuration explicitly requests a small virtual machine size called Standard B2s, and it hardcodes an environment tag as dev. To make this reusable, you replace that hardcoded text with a reference. In Terraform, you reference an input variable by typing var dot followed by the variable name. So, instead of writing Standard B2s, you write var dot vm size. Instead of dev, you write var dot environment.

Now your code is flexible, but Terraform still needs to know what values to use when it actually runs. This is where variable definition files come in. These are files ending in dot tfvars. A tfvars file is simply a list of variable names and their corresponding values. For your production deployment, you create a file named prod dot tfvars. Inside it, you set the environment variable to prod, and the vm size variable to a larger instance, like Standard D4s. When you run Terraform, you point it to this file. Terraform reads the tfvars file, injects those values into your var dot references, and provisions the production environment. Tomorrow, you can point the exact same Terraform code at a dev dot tfvars file to spin up a small testing environment.

Here is the key insight. Keeping your logic completely separate from your environment-specific data is what makes infrastructure truly repeatable.

If you are finding these episodes helpful and want to support the show, you can search for DevStoriesEU on Patreon. That is all for this one. Thanks for listening, and keep building!
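
A sketch of the declare-versus-assign split from the episode; the underscore spellings are the written form of the spoken names, and the D4s generation suffix is an assumption:

```hcl
# variables.tf — declaration: name, type, and rules only.
variable "vm_size" {
  type    = string
  default = "Standard_B2s"   # optional: used when no value is supplied
}

variable "environment" {
  type = string              # no default, so a value is mandatory
}

# Referenced in resource blocks as var.vm_size and var.environment.

# prod.tfvars — assignment: the per-environment data, in a separate file.
#   environment = "prod"
#   vm_size     = "Standard_D4s_v5"   # generation suffix is an assumption
#
# Deployed with:
#   $ terraform apply -var-file="prod.tfvars"
```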
8

Exposing Data with Output Values

3m 37s

Once your infrastructure is built, you need to know how to connect to it. Learn how to use Output blocks to extract critical data like auto-generated IDs and IP addresses from your deployments.

Hi, this is Alex from DEV STORIES DOT EU. Terraform Fundamentals, episode 8 of 13.

Your cloud infrastructure finishes deploying perfectly, but you still have one major problem: you need to know its public IP address to actually connect to it. Combing through cloud provider dashboards defeats the purpose of automation. You need a way to extract that specific piece of data directly from Terraform. That is where exposing data with output values comes in.

Output values are essentially the return values of a Terraform configuration. When you define and create a resource, the cloud provider generates certain attributes dynamically. These are things you cannot know before the deployment, like an assigned IP address, a generated database password, or a specific domain name. You use output blocks to capture those dynamically generated attributes and expose them to the outside world.

To define one, you write an output block followed by a label. This label is just the name you want to assign to the output. Inside the block, you define a single required argument called value. This argument points to the specific piece of data you want to extract. For example, if you create a virtual machine, you might set the value argument to point directly to the public IP attribute of that specific machine resource.

Take a concrete scenario. You have an automated pipeline that deploys a new Azure Kubernetes Service cluster. When the deployment finishes, your developers need the auto-generated raw Kubernetes endpoint to configure their local connection tools. Without an output, someone would have to log into the cloud portal, find the cluster, and copy the URL manually. Instead, you write an output block named cluster endpoint. You set its value to reference the fully qualified domain name attribute of the newly built Kubernetes cluster. When Terraform finishes applying your configuration, it gathers all defined output values and prints them directly to the command line interface. Your automation pipeline can then read that text and pass the endpoint straight to the developers.

You can also retrieve these values later without running a new deployment. You simply run the Terraform output command in your terminal. For automation scripts, you can even tell that command to return the data as raw text or in JSON format, making it easy for other software tools to parse.

Sometimes the data you need to output is confidential, like a database password or a private key. You do not want those strings scrolling across a shared terminal screen or sitting permanently in your continuous integration build logs. To prevent this, you add the sensitive argument inside the output block and set it to true. Terraform will then hide the actual value in the console display, replacing it with a placeholder tag indicating the value is sensitive.

Here is the key insight. Setting an output to sensitive only suppresses it from the terminal display. It does not encrypt or hide the data inside the Terraform state file. The password or key is still stored in plain text within your state data on disk or in your remote backend. The sensitive flag is purely a display filter for the command line interface, not a security mechanism for your storage.

Output values ultimately act as your configuration API. They are the structured, predictable way you hand critical information back to humans or automation tools the moment a run completes. That is all for this one. Thanks for listening, and keep building!
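
The two output patterns from the episode in written form; the cluster and server resource names are hypothetical:

```hcl
output "cluster_endpoint" {
  value = azurerm_kubernetes_cluster.main.fqdn
}

output "db_admin_password" {
  value     = azurerm_mssql_server.main.administrator_login_password
  sensitive = true   # hides the value in the console, not in the state file
}

# Retrieve later without a new deployment:
#   $ terraform output -raw cluster_endpoint
#   $ terraform output -json
```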
9

Querying with Data Sources

3m 41s

Not every cloud resource is managed by your current project. Data sources allow Terraform to dynamically read and use existing infrastructure, like a core network managed by another team.

Hi, this is Alex from DEV STORIES DOT EU. Terraform Fundamentals, episode 9 of 13.

Not every piece of infrastructure in your cloud environment was built by the Terraform code you are currently writing. Yet, your code still needs a way to connect to those existing systems safely, without accidentally altering them. Querying with Data Sources is exactly how you handle this.

A major point of confusion when learning Terraform is the difference between a resource block and a data block. Let us clear that up right now. A resource block tells Terraform to create, update, and own an infrastructure object. A data block only performs a read-only lookup. It asks the provider API to find an existing object and return its details so your configuration can read those details. This read-only capability is the foundation of a decoupled infrastructure architecture.

Consider a standard enterprise setup. A centralized networking team builds and manages the corporate Virtual Network. As an application developer, you need to deploy an Azure Virtual Machine and attach it to that exact network. You do not own the network code. You also should not hardcode the network's unique ID into your Terraform files, because hardcoded IDs break easily if environments are ever recreated. Instead, you just look the network up.

You do this by defining a data block. The syntax looks very much like a resource block. You start with the keyword data. Next, you specify the data source type, such as azurerm virtual network. Then you give it a local name, like corporate, which you will use to reference it later in your code. Inside the block, you define the search arguments. These act as strict filters. You might pass the human-readable name of the virtual network and the resource group it belongs to. Terraform uses these arguments to construct a query.

When you run a Terraform plan, Terraform reaches out to the Azure API and searches for a virtual network that matches your filters. If the API returns exactly one match, Terraform downloads the properties of that network into memory. If the query returns zero matches, or if the filters are too loose and return multiple matches, Terraform stops immediately and throws an error. This strictness is intentional. It prevents you from accidentally deploying your application into the wrong subnet or environment.

Once the data source successfully fetches the information, you can extract any exported attribute from it. The syntax for referencing a data source is highly structured. You start with the word data, followed by a dot, the data source type, a dot, your local name, and finally the specific attribute you need. In our scenario, you would type data dot azurerm virtual network dot corporate dot id. You pass that specific string right into your virtual machine resource block, entirely bypassing the need to hardcode a static value.

Here is the part that matters. Data sources allow you to treat your surrounding infrastructure as a dynamic service. You do not have to build the environment to interact with it, and you can safely stitch independent workspaces together simply by querying the exact identifiers you need at runtime. Thanks for hanging out. Hope you picked up something new.
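
The lookup from the episode written out; the network and resource group names are hypothetical:

```hcl
# Read-only lookup: Terraform never creates, changes, or destroys this network.
data "azurerm_virtual_network" "corporate" {
  name                = "vnet-corporate-core"   # hypothetical name
  resource_group_name = "rg-network-hub"        # hypothetical name
}

# Reference the fetched attributes elsewhere:
#   data.azurerm_virtual_network.corporate.id
```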
10

Scaling with Count and For_Each

3m 30s

Stop copying and pasting your resource blocks. Learn how to use the count and for_each meta-arguments to dynamically scale your infrastructure up and down with ease.

Hi, this is Alex from DEV STORIES DOT EU. Terraform Fundamentals, episode 10 of 13.

If you need to deploy fifty identical web servers, the last thing you want to do is copy and paste the same block of code fifty times. You might look for a traditional while-loop or a for-loop, but Terraform does not work that way. Instead, you manage scale at the block level using Count and For Each.

By default, a resource block configures exactly one infrastructure object. To provision multiple objects, you add the count meta-argument inside that block. It accepts a whole number. If you set count to three, Terraform provisions three objects from that single configuration block.

These objects usually cannot be strictly identical in the real world. They need unique names or IP addresses. To handle this, Terraform provides the count dot index object. This is a special variable available only within blocks that have a count argument. Say you are deploying three Azure Virtual Machines. You write one resource block for the virtual machine and set the count to three. Inside the block, you assign the machine name by combining the word web, a hyphen, and the count dot index value. Because the index starts at zero, Terraform evaluates this block and outputs three separate machines named web-zero, web-one, and web-two.

Adding count changes how Terraform tracks the resource internally. A single resource is addressed simply by its type and given name. Once you add count, that address becomes an array. You now reference specific instances elsewhere in your code using their index numbers in square brackets.

Count is highly efficient, but it has a specific mechanical risk tied to list order. Here is the key insight. Count identifies resources entirely by their integer position. If you use a list of values to configure a block with count, the index position is the only thing Terraform cares about. If you have three items and change the count to two, Terraform destroys the last item in the array, index two. If you inject a new string into the middle of your source list, all the subsequent index positions shift down. Terraform will notice that the configuration for index one has changed, index two has changed, and so on. It will likely destroy and recreate perfectly healthy infrastructure just because the source list order shifted.

To solve this vulnerability, you use the for each meta-argument instead. While count takes a whole number, for each accepts a map or a set of strings. Rather than creating an array of objects indexed by sequential numbers, for each creates a map of objects tracked by explicit string keys. If you pass it a set containing the strings frontend and backend, Terraform creates resources addressed by those exact names. If you add a new string later, or remove one, Terraform only adds or destroys that specific resource. The rest are untouched because their identifiers are fixed string keys, not fragile numeric positions.

Use count when the infrastructure objects are truly interchangeable, but switch to for each the moment those objects require distinct identities that must survive changes to your configuration list. Thanks for tuning in. Until next time!
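
Both meta-arguments sketched with a deliberately small resource, a public IP, assuming the resource group from earlier episodes:

```hcl
# count: interchangeable objects, addressed by integer position.
resource "azurerm_public_ip" "web" {
  count               = 3
  name                = "web-${count.index}"    # web-0, web-1, web-2
  resource_group_name = azurerm_resource_group.main.name
  location            = azurerm_resource_group.main.location
  allocation_method   = "Static"
}

# for_each: distinct identities, tracked by stable string keys.
resource "azurerm_public_ip" "tier" {
  for_each            = toset(["frontend", "backend"])
  name                = "pip-${each.key}"
  resource_group_name = azurerm_resource_group.main.name
  location            = azurerm_resource_group.main.location
  allocation_method   = "Static"
}

# Addressing: azurerm_public_ip.web[1], azurerm_public_ip.tier["backend"]
```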
11

Building Reusable Components with Modules

3m 58s

Modules allow you to package complex architectures into single, reusable blocks of code. Learn how to construct child modules and call them from your root configuration to keep your enterprise DRY.

Hi, this is Alex from DEV STORIES DOT EU. Terraform Fundamentals, episode 11 of 13.

Your single Terraform configuration file was fine for a simple web server, but as your infrastructure grows, it is rapidly becoming a messy, unreadable monolith. You are copying and pasting the same resource blocks over and over just to change a single name string or environment tag. It is time to stop repeating yourself by building reusable components with modules.

A module is a container for multiple resources that are used together. If you are familiar with any programming language, you can think of a module as a function. You write the complex logic once, encapsulate it, and then call it multiple times from elsewhere. In Terraform, every configuration already has at least one module. The files sitting in your main working directory form what is called the root module. When your root module references another set of configuration files, that second set is known as a child module.

Consider a specific scenario. A centralized DevOps team wants to ensure every storage account created across the company is secure by default, with encryption, private endpoints, and diagnostic logging strictly enforced. Instead of trusting every application developer to configure ten different complex resources correctly, the DevOps team creates a standard Secure Azure Storage module.

When an application team needs storage, they do not write a massive block of resource definitions. They just write a module block in their root configuration. Inside that module block, the very first thing they define is a source argument. The source tells Terraform exactly where to find the child module files, whether that is a local directory path or a remote repository. Below the source, the application team passes in arguments. Just like passing arguments to a function, they provide the specific data the child module needs to run, such as a unique application name or an environment identifier. These arguments map directly to the input variables defined inside the child module. The child module takes those inputs, executes the underlying resource configurations, and builds the infrastructure. The application team gets compliant storage automatically, completely insulated from the underlying complexity.

Here is the key insight. People often get confused about how Terraform handles scope between these modules. When you call a child module, the resources inside it are strictly encapsulated. Your root module cannot directly read an IP address, a connection string, or a storage ID generated inside that child module. The child module is a black box. If your root module needs a piece of data that was generated inside the child module, the child module must explicitly export it using an output block. Variables act as the input parameters, and outputs act as the return values. They form a strict interface.

Once the child module exports that data as an output, the root module can finally read it. You access this data by referencing the word module, followed by the specific name you assigned to your module block, and then the name of the output. If the child module outputs a generated storage account ID, your root module can grab it using that syntax and pass it along to a database or a virtual machine that needs to connect to it.

The true power of modules is not just saving lines of code. It is the ability to define an architectural standard once, lock away the complexity, and present a clean, predictable interface to the rest of your organization.
Thanks for listening. Take care, everyone.
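
A sketch of the module call and the output hand-off described above; the source path, variable names, and output name are hypothetical:

```hcl
# Root module calls the DevOps team's child module.
module "app_storage" {
  source      = "./modules/secure-storage"   # local path or remote address
  app_name    = "billing"                    # maps to the child's input variables
  environment = "prod"
}

# The child module must export a value before the root can read it:
#   output "storage_account_id" { value = azurerm_storage_account.main.id }
#
# The root module then references it as:
#   module.app_storage.storage_account_id
```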
12

Enterprise Readiness: Remote State and Locking

3m 43s

A local state file is fine for a solo developer, but disastrous for a team. Learn how to configure remote state backends and implement state locking to safely collaborate on enterprise infrastructure.

Hi, this is Alex from DEV STORIES DOT EU. Terraform Fundamentals, episode 12 of 13.

A local state file on your hard drive works perfectly when you are building alone. But the moment two engineers attempt to update the same infrastructure at the exact same second, you are looking at a recipe for a corrupted environment. This is the threshold between individual experimentation and team collaboration, and it brings us to enterprise readiness: remote state and locking.

By default, Terraform writes its current view of your infrastructure to a local file called terraform dot tfstate. This file is the critical source of truth mapping your configuration code to real-world resources. The problem arises when you add more people to your project. If your colleague makes a change from their machine, your laptop has no idea the environment just shifted. You are operating on outdated information.

Sometimes, teams try to solve this by committing the state file to their version control system. This is a severe security risk. State files routinely store sensitive data, like database passwords or private keys, in plain text.

The correct approach is configuring a remote backend. Instead of saving the state file on a local machine, Terraform reads and writes this data from a secure, centralized data store. This is typically an object storage service like an Amazon S3 bucket, an Azure Blob Storage container, or a Google Cloud Storage bucket. When you use a remote backend, every time someone runs a command, Terraform queries that central storage to fetch the most accurate, up-to-date picture of the infrastructure.

Transitioning to this setup requires adding a backend configuration block to your code, defining where the state should live. Pay attention to this bit. A very common mistake is writing that configuration, saving the file, and assuming the state is now remote. It is not. After adding the backend block, you must run terraform init again. Running this initialization command is the trigger that tells Terraform to physically copy your existing local state file and migrate it up to the cloud backend.

Moving the state file to a shared location solves the visibility problem, but it exposes you to concurrent modifications. If two deployment pipelines trigger an update simultaneously, they could both try to write to the remote state file at once, completely corrupting it. This is why remote backends support state locking.

Consider an environment using Azure Blob Storage for its remote state. Two engineers are working on different updates. Engineer A runs an apply. Before making any changes to the actual cloud resources, Terraform reaches out to the Azure storage container and places a lock on the remote state file. A fraction of a second later, Engineer B tries to run their own apply. Terraform checks the remote backend, detects the active lock, and immediately intercepts Engineer B's run. Instead of colliding, Terraform safely halts and returns an error, explaining that another process currently holds the lock. Once the first run completes successfully, Terraform automatically releases the lock.

Implementing a locked remote backend is the defining step for enterprise infrastructure as code. It secures your sensitive data and eliminates dangerous race conditions. Remote state ensures that no matter who runs the code, or from where, your entire team is firmly anchored to the exact same reality. Thanks for listening, happy coding everyone!
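
A sketch of the Azure Blob Storage backend described above; the account and container names are hypothetical:

```hcl
terraform {
  backend "azurerm" {
    resource_group_name  = "rg-tfstate"        # hypothetical name
    storage_account_name = "sttfstateprod"     # hypothetical name
    container_name       = "tfstate"
    key                  = "enterprise.terraform.tfstate"
  }
}

# The block alone does nothing until you re-initialize and migrate:
#   $ terraform init
```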
13

Enterprise Workflows and CI/CD

3m 51s

Take Terraform out of your terminal and into automation. We wrap up the series by exploring CI/CD pipelines, automated PR reviews, and self-service infrastructure models.

Hi, this is Alex from DEV STORIES DOT EU. Terraform Fundamentals, episode 13 of 13.

You have written your configuration, tested it locally, and deployed your resources. But executing infrastructure changes from a developer laptop is a bottleneck, not a strategy. Manual applies lead to conflicting states, unreviewed changes, and security risks. True automation runs securely in a pipeline, triggered automatically by version control. That brings us to Enterprise Workflows and CI/CD.

When you transition from a solo operator to a team, the core Write, Plan, and Apply workflow moves off your local machine entirely. Version control becomes the absolute source of truth. You stop running commands directly, and you start letting continuous integration pipelines orchestrate the state changes.

Here is the key insight. The pipeline splits the traditional plan phase into two distinct concepts: speculative plans and concrete plans. Understanding the difference is crucial for pipeline design.

Consider a developer who needs to increase the size of an Azure virtual machine. They update the instance size in the configuration, commit the code, and open a Pull Request. At this exact moment, the pipeline automatically triggers a speculative plan. A speculative plan simply shows what Terraform intends to do. It checks the proposed code against the remote state to calculate the delta, but it is strictly read-only. It cannot be applied under any circumstances. The pipeline takes the output of this speculative plan and posts it directly as a text comment on the Pull Request. When a senior engineer reviews the code, they do not just see the syntax change. They see the exact infrastructure impact. They know precisely which Azure resources will be modified, created, or destroyed before they grant approval.

Once the Pull Request is approved and merged into the main branch, the pipeline triggers the second phase. It generates a concrete plan against that main branch. This is an executable plan file. Because the main branch is the trusted source of truth, the pipeline takes this concrete plan and immediately applies it. The live infrastructure is updated automatically by the robot, not the human.

Running Terraform in automation opens the door to advanced enterprise controls. Policy-as-code, using frameworks like Sentinel, integrates directly into this pipeline. Sentinel evaluates the plan before the apply ever happens. If a developer accidentally requests a database instance that violates cost restrictions or compliance rules, the policy engine flags it and halts the pipeline immediately.

This automated workflow is what enables a self-service infrastructure model. Platform engineers build and test reusable modules, while application developers simply submit a Pull Request requesting the resources they need. The pipeline plans the change, policy-as-code verifies the compliance, and a peer reviews the intent. The application team gets their infrastructure quickly, and the platform team enforces security without acting as a manual roadblock.

This concludes our fundamentals series. You now know how Terraform scales from a single local command to an automated enterprise engine. The best way to solidify this knowledge is to build something, read the official documentation, and experiment with these pipelines yourself. If you have ideas for future topics, visit devstories dot eu. I would like to take a moment to thank you for listening — it helps us a lot. Have a great one!
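
In plain CLI terms, the two plan phases map roughly onto these commands; the surrounding pipeline wiring is omitted:

```sh
# Speculative plan on the Pull Request: read-only, posted as a PR comment.
terraform plan

# Concrete plan on the merged main branch: saved, then applied exactly as reviewed.
terraform plan -out=tfplan
terraform apply tfplan
```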