Season 30 · 10 Episodes · 36 min · 2026

Kubernetes & Helm Fundamentals

v1.35 — 2026 Edition. A comprehensive audio course on Kubernetes v1.35 and Helm fundamentals. From the historical origins of Borg to enterprise Azure implementations, learn the core concepts, architecture, and practical usage of K8s and Helm.

Container Orchestration · DevOps · Containerization
1
The Origins From Borg To Kubernetes
Discover the history of Kubernetes and why it became the industry standard. This episode covers its evolution from Google's internal Borg system to the open-source powerhouse it is today.
3m 38s
2
Cluster Architecture
Understand the brain and the brawn of a Kubernetes cluster. We break down the Control Plane and Worker Nodes to see how they orchestrate container workloads.
4m 03s
3
Demystifying Pods
Learn about the smallest deployable unit in Kubernetes. We explore why Kubernetes uses Pods instead of bare containers and how they share network and storage contexts.
3m 24s
4
Managing State With Deployments
Discover how Kubernetes keeps your applications running automatically. This episode details Deployments, desired state, and the magic of self-healing workloads.
3m 24s
5
Services And Networking
Solve the moving target problem of ephemeral Pods. Learn how Kubernetes Services provide stable IP addresses and load balancing for your internal network.
3m 20s
6
Introduction To Helm
Escape the complexity of raw YAML manifests. This episode introduces Helm, the package manager for Kubernetes, and explains how it brings templating and versioning to your cluster.
3m 23s
7
Anatomy Of A Helm Chart
Look inside a Helm Chart to see how it works. We break down the directory structure, the role of Chart.yaml, and the power of values.yaml for configuration management.
3m 52s
8
Helm Chart Best Practices
Write cleaner, more maintainable Helm Charts. Learn the official best practices for structuring values, naming conventions, and avoiding common templating traps.
3m 23s
9
Enterprise Azure Implementation
Bridge theory and reality. This episode outlines a practical, high-level architecture for deploying an enterprise application using Helm on Azure Kubernetes Service (AKS).
3m 57s
10
Getting Started With Minikube
Take your first steps into the Kubernetes ecosystem. We conclude the series with a guide on how to spin up a local infrastructure using Minikube and deploy your first app.
3m 51s

Episodes

1

The Origins From Borg To Kubernetes

3m 38s

Discover the history of Kubernetes and why it became the industry standard. This episode covers its evolution from Google's internal Borg system to the open-source powerhouse it is today.

Hi, this is Alex from DEV STORIES DOT EU. Kubernetes & Helm Fundamentals, episode 1 of 10. For years, Google ran its global search and email infrastructure using a highly secretive system named after Star Trek's most terrifying villains. They had to invent it because existing server management simply broke at their scale. Today, we are looking at the origins from Borg to Kubernetes, and why modern enterprises rely on it. To understand why Kubernetes exists, you have to look at how application deployment evolved. In the traditional deployment era, you ran applications on physical servers. There was no way to define resource boundaries. If one application took up most of the memory, other applications on that physical server suffered. You could buy a separate physical machine for every app, but that resulted in expensive, underutilized hardware. Next came the virtualized deployment era. You ran multiple Virtual Machines on a single physical server's CPU. VMs isolated applications and provided a level of security, but each VM still required a full, heavy operating system. Finally, we arrived at the container deployment era. Containers are similar to VMs, but they share the underlying operating system among the applications. Because they are decoupled from the underlying hardware, they are lightweight, fast to start, and portable across different clouds and operating system distributions. But containers introduced a new problem. If you run a global enterprise application, you do not just have one container. You have thousands. If a container goes down, another needs to start immediately. If traffic spikes, you need to spin up more containers and distribute the network load evenly. You cannot manage that manually across hundreds of machines. Google faced this exact problem long before the rest of the industry. They built an internal container cluster manager called Borg to automate the orchestration of hundreds of thousands of jobs. When it became clear the rest of the software world needed this same capability, Google engineers started an open-source project based on the lessons learned from Borg. They gave it a Star Trek reference, originally calling it Project Seven of Nine, a nod to a Borg drone who escaped the collective. That project eventually launched as Kubernetes. The steering wheel logo you see today has seven spokes as a quiet tribute to that original project name. This is the part that matters. Kubernetes provides you with a framework to run distributed systems resiliently. It takes care of scaling and failover for your application. If a container crashes, Kubernetes replaces it. If a node fails, it reschedules the containers on healthy nodes. It handles service discovery, meaning a container can be found using a DNS name or its own IP address, and it balances the load so no single container is overwhelmed. It also manages storage orchestration, letting you automatically mount local storage or cloud providers, and it automates rollouts and rollbacks. You describe the desired state of your deployed containers, and Kubernetes changes the actual state to the desired state at a controlled rate. You do not write scripts to manage server state; you declare what you want, and the system makes it happen. The core takeaway here is that Kubernetes is not just a hosting environment, it is a control loop that constantly measures reality against your expectations and corrects the difference. 
Before we wrap up, if you want to help us keep making these episodes, search for DevStoriesEU on Patreon — we appreciate the support. That is all for this one. Thanks for listening, and keep building!
2

Cluster Architecture

4m 03s

Understand the brain and the brawn of a Kubernetes cluster. We break down the Control Plane and Worker Nodes to see how they orchestrate container workloads.

Hi, this is Alex from DEV STORIES DOT EU. Kubernetes & Helm Fundamentals, episode 2 of 10. You have dozens of servers running hundreds of containers. When a machine suddenly fails, something has to decide where those orphaned workloads go next. An orchestra without a conductor is just noise, and a fleet of containers without a centralized brain is unmanageable chaos. That centralized brain, along with the machines doing the actual work, form the Kubernetes Cluster Architecture. A Kubernetes cluster is split into two distinct halves. You have the Control Plane, which acts as the brain, and you have the Worker Nodes, which act as the brawn. The Control Plane makes all the global decisions, like scheduling workloads, and it detects and responds to cluster events. The Worker Nodes host your applications and execute the instructions sent by the Control Plane. Every cluster must have at least one worker node to run applications. Look at the Control Plane first. Its primary component is the kube-apiserver. Listeners sometimes hear API server and picture a standard web server handling HTTP requests for an application. That is not what this is. The kube-apiserver is the central nervous system of the entire cluster. It is the front end of the control plane. Every single communication, whether it comes from a human operator, a worker node, or an internal component, routes through this API server. Because the API server is completely stateless, the cluster needs a memory. That is etcd. This is a consistent, highly-available key-value store containing all cluster data. Whenever a configuration is created or a system state changes, the true record of that change lives in etcd. Next is the kube-scheduler. When you ask the cluster to run a new container workload, that workload initially has no assigned machine. The scheduler spots this unassigned workload. It evaluates resource requirements, hardware constraints, and policy rules, and then assigns the workload to the most appropriate worker node. Finally, you have the kube-controller-manager. This component runs continuous background loops called controllers. These controllers constantly watch the current state of the cluster through the API server and actively work to push that current state toward your desired state. If a worker node crashes, the controller manager notices the missing machine and triggers the response to replace the lost workloads. That covers the brain. Now look at the worker nodes executing the tasks. Node components run on every single worker machine to maintain the runtime environment. The most critical component here is the kubelet. This is an agent running on every node that communicates directly with the control plane. The kubelet takes instructions from the API server and ensures that the required containers are actually running and healthy on its specific machine. The kubelet does not start the containers itself. It delegates that task to the Container Runtime. The runtime is the actual software, like containerd, responsible for pulling container images from a registry and starting the processes on the operating system. Lastly, there is the kube-proxy. This is a network proxy running on each node. It maintains the local network rules that allow network communication to reach your containers from inside or outside the cluster. Here is the key insight. The control plane dictates what should happen, but it never executes the application code. 
The worker nodes run the application code, but they rely entirely on the control plane to tell them what to run. This strict separation of decision-making from execution is what allows Kubernetes to scale horizontally and recover from hardware failures automatically. That is all for this one. Thanks for listening, and keep building!
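For listeners who want to see where each of these components touches a real object, here is a minimal Pod manifest annotated with the component responsible for each step. The object name and image are placeholders chosen for this sketch, not something from the episode.

```yaml
# Minimal Pod manifest, annotated with the cluster component acting at each stage.
# The object name and image are illustrative placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: demo-web               # accepted by the kube-apiserver, then recorded in etcd
spec:
  # No node is named here: the kube-scheduler picks a worker node based on
  # resource requests and placement rules, then writes the assignment back.
  containers:
    - name: web
      image: nginx:1.27        # the kubelet on the chosen node asks the container runtime (e.g. containerd) to pull and start this
      ports:
        - containerPort: 80    # kube-proxy maintains the node network rules that let Service traffic reach this port
```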
3

Demystifying Pods

3m 24s

Learn about the smallest deployable unit in Kubernetes. We explore why Kubernetes uses Pods instead of bare containers and how they share network and storage contexts.

Hi, this is Alex from DEV STORIES DOT EU. Kubernetes & Helm Fundamentals, episode 3 of 10. You build a container, test it locally, and hand it to Kubernetes to run. But Kubernetes absolutely refuses to deal with your container directly. Instead, it demands you wrap it in an entirely different abstraction first. We are demystifying Pods, the actual unit of work in this ecosystem. If containers are the standalone whales of the traditional containerization world, Pods are the cohesive groups they swim in together. A Pod is the smallest, most basic deployable object you can create and manage in Kubernetes. It represents a single instance of a running process in your cluster. You might wonder why Kubernetes introduces this extra layer instead of simply managing containers directly. The answer lies in abstraction and shared context. Kubernetes needs a uniform way to handle networking, storage, and scheduling regardless of the specific underlying container runtime you use. By wrapping containers in a Pod, Kubernetes treats the Pod as a logical host. Here is the key insight. A Pod does not just wrap a single container; it establishes a shared execution environment. While a Pod often contains just one container, it can hold multiple containers that need to work together closely. When multiple containers are placed inside the same Pod, they are guaranteed to be scheduled on the exact same physical or virtual machine. More importantly, these containers share the same network namespace. Every container inside a single Pod shares a single IP address and a single port space. Because they exist in the same network context, they can communicate with one another simply by using localhost. There is no need for internal DNS lookups or complex service routing just to get two local processes to talk. If container A binds to port eight thousand, container B in the same Pod can reach it at localhost port eight thousand. This shared context extends to storage. You can define shared storage volumes at the Pod level. Once defined, any container inside that Pod can mount those shared volumes into its own file system. This allows tightly coupled containers to read and write the exact same files seamlessly. Consider a primary web server container. Its job is to serve HTTP traffic, but it also writes raw access logs to a local directory. You want to ship those logs to a central monitoring system, but you do not want to bloat your web server image with logging agents and configuration files. Instead, you create a second container, a lightweight logging utility. You deploy both the web server and the logging container inside the exact same Pod. The web server writes its logs to a shared storage volume. The logging container, acting as a sidecar, mounts that same volume, reads the incoming log files, and streams them out to your monitoring system. They operate as one integrated unit, sharing resources without muddying their individual responsibilities. When deciding if two containers belong in the same Pod, ask yourself if they inherently need to land on the exact same machine and share an identical lifecycle. If they do not absolutely need to deploy, start, and die together, they belong in separate Pods. That is all for this one. Thanks for listening, and keep building!
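As a rough sketch of the sidecar example just described, here is what such a two-container Pod could look like. The images, paths, and the shared emptyDir volume are assumptions made for illustration, not details from the episode.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-log-shipper
spec:
  volumes:
    - name: access-logs            # declared once at the Pod level
      emptyDir: {}                 # scratch space shared by every container in this Pod
  containers:
    - name: web                    # primary container: serves HTTP and writes raw access logs
      image: nginx:1.27
      volumeMounts:
        - name: access-logs
          mountPath: /var/log/nginx
    - name: log-shipper            # sidecar: reads the same files and would stream them to monitoring
      image: busybox:1.36
      command: ["sh", "-c", "touch /logs/access.log && tail -f /logs/access.log"]
      volumeMounts:
        - name: access-logs
          mountPath: /logs
```

Both containers land on the same node, share one IP, and see the same files under their respective mount paths; the sidecar here only tails the log to its own output, standing in for a real shipping agent.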
4

Managing State With Deployments

3m 24s

Discover how Kubernetes keeps your applications running automatically. This episode details Deployments, desired state, and the magic of self-healing workloads.

Hi, this is Alex from DEV STORIES DOT EU. Kubernetes & Helm Fundamentals, episode 4 of 10. You are asleep. At three in the morning, a memory leak takes down your main application server. In a traditional setup, an alert fires, and you wake up to restart the process manually. In Kubernetes, the system handles the night shift for you. This is the power of managing state with Deployments. A Deployment provides declarative updates for your applications. Instead of writing scripts that command the system step-by-step on how to run your software, you describe exactly what the final picture should look like. You hand this desired state to the Deployment controller, and it changes the actual state to match it at a controlled rate. To understand how it does this, you need to know the hierarchy. You rarely create individual Pods directly. Instead, you create a Deployment. The Deployment then creates a secondary object called a ReplicaSet. The ReplicaSet is the mechanism strictly responsible for ensuring the exact specified number of Pod replicas are running at any given moment. If a server node fails, or a Pod crashes from that memory leak, the ReplicaSet notices the numbers have dropped below your desired state. It immediately spins up a new Pod to replace the lost one. That is your self-healing mechanism. You never intervene. The same logic applies to scaling. If traffic spikes, you update your Deployment file to ask for five replicas instead of three. The controller sees the discrepancy between your request and reality, and instructs the ReplicaSet to launch two more Pods. This declarative approach is most critical during application updates. Let us say you have three Pods running an Nginx image at version 1.14. You need to upgrade to version 1.16 without dropping any user traffic. You simply update the image version in your Deployment configuration. The Deployment does not terminate all your old Pods at once. Instead, it creates a brand new ReplicaSet specifically for version 1.16. Then, it begins a rolling update. It starts a new Pod in the new ReplicaSet. Once that new Pod is healthy, it scales down the old ReplicaSet by terminating one version 1.14 Pod. It repeats this careful, staggered process until all three old Pods are gone and three new Pods are running. The transition is completely seamless. Now, what happens if the update is broken? Here is the key insight. Because the Deployment controller orchestrates these ReplicaSets, it gives you a built-in safety net. If you accidentally type the image name as Nginx 1.16-typo, the new Pods will crash on startup. The Deployment detects the failure and halts the rollout immediately. It leaves your remaining old Pods running so your application stays online. Once you spot the error, you can issue a rollback command. The Deployment simply scales the old, known-good ReplicaSet back up and scales the broken one down to zero. The true strength of a Deployment is not just launching containers, but its relentless, continuous loop of comparing what you asked for against what actually exists, and forcing reality to match. I would like to take a moment to thank you for listening — it helps us a lot. Have a great one!
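A minimal Deployment matching the scenario in this episode might look like the sketch below; the names and the rolling update settings are assumptions, chosen so that one new Pod comes up before one old Pod is removed.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                      # the desired state the ReplicaSet keeps enforcing
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1                  # start one new Pod first...
      maxUnavailable: 0            # ...and only then retire an old one
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx:1.14        # changing this to nginx:1.16 creates a new ReplicaSet and triggers the rolling update
```

If the new tag turns out to be broken, something like kubectl rollout undo deployment/web scales the previous ReplicaSet back up, which is the rollback described in the episode.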
5

Services And Networking

3m 20s

Solve the moving target problem of ephemeral Pods. Learn how Kubernetes Services provide stable IP addresses and load balancing for your internal network.

Hi, this is Alex from DEV STORIES DOT EU. Kubernetes & Helm Fundamentals, episode 5 of 10. Pods are mortal. They crash, they scale down, they get evicted, and when they are replaced, they get an entirely new IP address. If you have an application trying to communicate with them, you are constantly shooting at a moving target. Kubernetes Services resolve this exact problem. Think about a typical web application. You have a frontend deployment and a backend database. If your database Pod restarts, the cluster control plane spins up a new Pod to replace it. This new Pod is assigned a completely different IP address on the cluster network. If your frontend was configured to talk directly to the old IP, the connection drops and your application breaks. You cannot rely on individual Pod IPs for anything permanent. A Kubernetes Service is an abstraction that provides a stable, long-lived network identity for a dynamic group of Pods. When you create a Service, it is assigned an IP address that will never change as long as the Service exists. Your frontend application does not need to keep track of exactly which database Pods are alive at any given second. It simply sends traffic to the Service IP. Furthermore, the cluster assigns a stable DNS name to the Service. Your frontend code can just connect to a simple hostname, and the cluster automatically resolves that to the correct IP address. Behind the scenes, the Service acts as an internal load balancer. It relies on a component called kube-proxy running on every node to implement the actual routing rules. When traffic arrives at the Service, it is forwarded to one of the healthy Pods backing it. To link a Service to the right Pods, you use labels and selectors. You might configure the Service with a selector looking for the label indicating a database application. The Service constantly watches the cluster. If a database Pod dies, its IP is removed from the active pool. When the replacement Pod boots up, its new IP is added. The frontend application remains completely unaware that the underlying network topology just shifted. There are a few ways to expose a Service, depending on where the traffic originates. The default type is ClusterIP. A ClusterIP Service gets an internal IP address reachable only from inside the cluster. This is the correct choice for your backend database, keeping it securely isolated from the outside world. But your frontend needs to receive traffic from external users. For this, you change the Service type to LoadBalancer. When you create a LoadBalancer Service, Kubernetes communicates with your cloud provider to provision a standard external load balancer. External internet traffic hits that cloud resource, which forwards the connection into your cluster, passing it through the Service and finally to your frontend Pods. Here is the key insight. Services decouple the addressable identity of your application from the physical workloads actually executing the logic. You stop routing to specific, fragile instances and start routing to a resilient, persistent concept. That is all for this one. Thanks for listening, and keep building!
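The two Service types from this episode could be declared roughly as follows; the names, labels, and ports are illustrative.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: database                 # stable DNS name and cluster IP, however often the Pods behind it change
spec:
  type: ClusterIP                # reachable only from inside the cluster
  selector:
    app: database                # traffic is forwarded to healthy Pods carrying this label
  ports:
    - port: 5432
      targetPort: 5432
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer             # asks the cloud provider to provision an external load balancer
  selector:
    app: frontend
  ports:
    - port: 80                   # port exposed on the external load balancer
      targetPort: 8080           # port the frontend containers actually listen on
```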
6

Introduction To Helm

3m 23s

Escape the complexity of raw YAML manifests. This episode introduces Helm, the package manager for Kubernetes, and explains how it brings templating and versioning to your cluster.

Hi, this is Alex from DEV STORIES DOT EU. Kubernetes & Helm Fundamentals, episode 6 of 10. You want to deploy a single application, but you end up managing hundreds of lines of static YAML across deployments, services, and ingresses. When you need to push that same application to a staging environment, you copy those files into a new directory and manually find and replace image tags and hostnames. It is fragile, highly repetitive, and impossible to maintain as your infrastructure grows. This is the exact problem Helm exists to solve. Helm is the package manager for Kubernetes. You can think of it like apt, yum, or Homebrew, but designed specifically for Kubernetes resources. Instead of treating your application as a loose collection of independent YAML manifests, Helm bundles them together into a single, cohesive unit. This packaging format is called a Chart. A Chart is essentially a directory containing files that describe a related set of Kubernetes resources. It holds all the definitions your application needs to run. The primary mechanism that makes a Chart useful is templating. Raw Kubernetes manifests are entirely static. A Helm Chart, by contrast, replaces hardcoded infrastructure details with template variables. Instead of writing a specific replica count, a fixed container image tag, or a distinct environment variable directly into a deployment file, you define placeholders. When it is time to deploy, Helm merges these templates with a separate file containing your specific values. This architecture means you only ever maintain one single Chart for your application. You simply pass it different configuration parameters depending on whether you are deploying locally, to staging, or into production. When you take a Chart, combine it with your specific configuration values, and deploy it to a Kubernetes cluster, you create what Helm calls a Release. This is where it gets interesting. A Chart is just the generic blueprint. A Release is the actual deployed instance running in your cluster. Because of this strict separation between the blueprint and the instance, you can install the exact same Chart multiple times into the exact same cluster. If you need three separate instances of a caching server, you do not duplicate the YAML. You install the caching chart three times. Helm tracks each installation as a distinct Release with its own unique name, its own configuration values, and its own isolated lifecycle. Helm also tracks the state and history of these Releases inside the cluster. When you update an application by providing a new image tag or modifying a setting, Helm evaluates the differences and creates a new revision of that specific Release. It applies only the necessary changes to the underlying Kubernetes resources. If an update fails or an application starts behaving erratically, you command Helm to roll back to a previous revision. Helm knows exactly which Kubernetes resources belong to which version of your application, handling the creation, modification, and deletion of those resources as a single operation. The core shift with Helm is abstraction. You stop managing independent text files representing disconnected pods, services, and volumes, and you start deploying, configuring, and upgrading complete applications. Thanks for listening, happy coding everyone!
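To make the templating idea concrete, here is a rough sketch of a chart template next to the values that feed it; the file layout and value names are assumptions, not taken from the episode.

```yaml
# templates/deployment.yaml inside a chart (illustrative)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-web            # every Release gets its own resource names
spec:
  replicas: {{ .Values.replicaCount }}     # placeholder filled in at install time
  selector:
    matchLabels:
      app: {{ .Release.Name }}-web
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-web
    spec:
      containers:
        - name: web
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
---
# values.yaml with the defaults Helm merges into the template above
replicaCount: 2
image:
  repository: nginx
  tag: "1.27"
```

Installing this chart twice with different release names and different values files would give two independent Releases of the same blueprint, as the episode describes.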
7

Anatomy Of A Helm Chart

3m 52s

Look inside a Helm Chart to see how it works. We break down the directory structure, the role of Chart.yaml, and the power of values.yaml for configuration management.

Hi, this is Alex from DEV STORIES DOT EU. Kubernetes & Helm Fundamentals, episode 7 of 10. You need to update a database password, but the credentials are hardcoded across fifteen different YAML files. One missed file, and the entire deployment fails. Hardcoding configuration directly into your deployment structure is brittle. The solution is understanding the Anatomy Of A Helm Chart. A Helm chart is a standardized package containing all the resource definitions necessary to run an application, tool, or service inside a Kubernetes cluster. The entire system is built around one core philosophy: a strict separation between structural definition and environment-specific configuration. You define the shape of your deployment once, and inject the specific details when it is time to deploy. When you look inside a Helm chart, you find a specific directory layout. The top-level folder is always named after the chart itself. Inside this folder, three core components drive the packaging system. First is a file called Chart dot yaml. This is the metadata hub. It tells Helm exactly what the package is. It contains the API version for the chart standard, the chart name, a description, and version numbers. Crucially, it tracks both the version of the chart itself, and the app version, which is the version of the actual software being deployed. You might also find a charts directory sitting next to it, which holds any subcharts your application depends on, but the metadata file is the primary identifier. The second core component is the templates directory. This is where the structural definition lives. Inside, you place your standard Kubernetes manifest files, like deployments and services. However, instead of writing static YAML, these files contain Go template logic. Instead of hardcoding a replica count of three, or pasting in a specific database password, you write a template directive. That directive instructs Helm to look up the required value dynamically during deployment. The third component answers those dynamic lookups. It is a file called values dot yaml, sitting at the root of the chart directory alongside the metadata file. This file holds the default configuration settings. When a template asks for an image repository, a port number, or a password, the values file provides the baseline answer. Here is the key insight. The templates dictate the architecture of your application, while the values dictate how that architecture behaves in a specific environment. When you run an install command, Helm takes the raw templates, merges them with the values file, and renders final, valid Kubernetes manifests. It then sends those rendered manifests to the Kubernetes API. This separation is what makes charts highly reusable. Consider a scenario where you must deploy the exact same application to both a staging environment and a production environment. You do not copy and alter the chart. You use the exact same directory structure and the exact same templates. For the staging deployment, you pass Helm a custom values file during the install command. This file overrides the defaults, specifying one pod replica, a local test database URL, and debug-level logging. When you deploy to production, you pass a completely different values file. This production file specifies ten replicas, a managed database URL, and strict resource constraints. Helm merges the single set of templates with the respective custom values files, producing two completely different deployment profiles. 
The power of a Helm chart lies not in the YAML it contains, but in the boundaries it creates. It locks down the infrastructure architecture in the templates, while keeping the operational details entirely fluid in the values. That is all for this one. Thanks for listening, and keep building!
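A minimal sketch of the three pieces described in this episode, with invented names, versions, and URLs:

```yaml
# Chart.yaml: the metadata hub at the chart root (all names and versions illustrative)
apiVersion: v2
name: shop-api
description: Example web API packaged as a chart
version: 0.3.1          # version of the chart packaging itself
appVersion: "1.4.2"     # version of the application being deployed
---
# values.yaml: default configuration the templates fall back to
replicaCount: 1
databaseUrl: postgres://test-db.internal:5432/shop
logLevel: debug
---
# values-production.yaml: overrides passed at deploy time,
# for example with: helm upgrade --install shop-api ./shop-api -f values-production.yaml
replicaCount: 10
databaseUrl: postgres://managed-db.example.com:5432/shop
logLevel: warn
resources:
  limits:
    cpu: "500m"
    memory: "512Mi"
```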
8

Helm Chart Best Practices

3m 23s

Write cleaner, more maintainable Helm Charts. Learn the official best practices for structuring values, naming conventions, and avoiding common templating traps.

Hi, this is Alex from DEV STORIES DOT EU. Kubernetes & Helm Fundamentals, episode 8 of 10. Just because you can template every single line of a Kubernetes manifest does not mean you should. When you try to make every field configurable, you end up with a chart that no one can read, let alone maintain. This episode covers Helm chart best practices, the rules that keep your configurations from collapsing under their own weight. The biggest trap in chart creation is over-templating. Many developers treat Helm templates like a basic text replacement script. That is a common misunderstanding. Helm executes a Go template engine to produce structured, valid YAML. If you drop a variable into a file without carefully managing indentation and data types, the generated YAML will break entirely. Because of this, you should only template the values that genuinely change between environments. Think image tags, replicas, resource limits, or ingress rules. Leave the core structural fields hardcoded. If a user never needs to change a specific security context or volume mount, do not expose it as a variable. The values dot yaml file acts as the public API for your chart. When organizing this file, you must balance structure with usability. The official recommendation is to keep the hierarchy as shallow as possible. While you should group related parameters together, avoid deep nesting. Consider the user passing overrides through the command line. Forcing them to type a dot-separated path five levels deep just to change a port number causes unnecessary friction. If you have a web server configuration, place the properties under a single server key, but keep the internal properties flat. When naming these variables, always use camel case. Start with a lowercase letter and capitalize the first letter of each subsequent word. Do not use dashes or underscores in your values file. You might create a variable named externalPort, rather than external dash port. Consistent camel case prevents parsing errors during template rendering and matches the broader Kubernetes ecosystem standards. Also, maintain strict type consistency. If a parameter is an integer in Kubernetes, like a port number, leave it as an integer in your values file. Do not wrap it in quotes and turn it into a string. This consistency applies directly to how you tag the resources your chart generates. Every object needs standard labels. Helm best practices dictate using the official Kubernetes app labels. Specifically, use the app dot kubernetes dot io prefix. The name label should map to the chart name, while the instance label should map to the release name. You also include the version label, and note that the resource is managed by Helm. Applying these exact labels to every deployment, pod, service, and config map ensures that external monitoring tools and service meshes can automatically discover and group your application components without manual configuration. Here is the key insight. The best Helm chart is not the one with the most configuration options. It is the one that requires the fewest overrides to run successfully out of the box. If you want to help keep the show going, search for DevStoriesEU on Patreon — we really appreciate the support. Thanks for spending a few minutes with me. Until next time, take it easy.
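The naming and labeling rules above could translate into something like the following; the chart name, release name, registry, and ports are placeholders.

```yaml
# values.yaml: shallow hierarchy, camelCase keys, native types (no quoted numbers)
server:
  externalPort: 8080
  replicaCount: 2
image:
  repository: registry.example.com/shop-api
  tag: "1.4.2"
---
# The standard labels as they might render onto every object the chart creates
metadata:
  labels:
    app.kubernetes.io/name: shop-api             # maps to the chart name
    app.kubernetes.io/instance: shop-api-prod    # maps to the release name
    app.kubernetes.io/version: "1.4.2"           # the application version
    app.kubernetes.io/managed-by: Helm
```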
9

Enterprise Azure Implementation

3m 57s

Bridge theory and reality. This episode outlines a practical, high-level architecture for deploying an enterprise application using Helm on Azure Kubernetes Service (AKS).

Hi, this is Alex from DEV STORIES DOT EU. Kubernetes & Helm Fundamentals, episode 9 of 10. Building a cluster from scratch is a great learning exercise, but when you scale to hundreds of microservices, you do not want to manage the control plane yourself. You want the cloud provider to do the heavy lifting so your engineers can focus on shipping code. That is exactly what an enterprise Azure implementation of Kubernetes and Helm looks like. In a standard Kubernetes architecture, you have a control plane making global decisions and worker nodes executing them. Keeping that control plane highly available is notoriously difficult. An enterprise client deploying on Azure typically uses Azure Kubernetes Service, or AKS. AKS abstracts away the control plane. Azure manages the API server, the scheduler, and the key-value data store. Your operations team is only responsible for the worker nodes that actually run your applications. So, how does an application get from a developer's machine onto those worker nodes? This is where Helm comes into the workflow. An enterprise application rarely consists of a single container. It is usually a collection of microservices, each needing its own deployments, services, and configurations. Instead of managing dozens of static YAML files, developers package these resources into a Helm chart. A chart acts as a single, versioned blueprint for a microservice. Because Helm uses templates, developers can write the structural logic once and inject different configuration values depending on whether they are deploying to a development, staging, or production cluster. Before anything runs, the application code is built into container images. These images are pushed to a secure storage location, like a container registry. Helm charts themselves can also be packaged and pushed to a registry, which allows enterprise teams to treat their infrastructure definitions exactly like their compiled application code. Here is the key insight. When a release pipeline triggers a deployment, Helm evaluates its templates with the environment-specific values and sends the final manifests to the AKS API server. The AKS control plane reads this desired state and starts scheduling Pods onto your worker nodes. The nodes reach out to the container registry, authenticate securely, pull down the specific image versions, and spin up the containers. Kubernetes constantly monitors this state. If a worker node crashes, the control plane immediately reschedules its Pods onto healthy nodes to maintain the replica count defined in the Helm chart. Once the Pods are running, they need to receive traffic. A Helm chart will typically include a service definition to expose the application. When this is deployed to AKS, Kubernetes talks directly to the underlying Azure infrastructure. If the service requests a public entry point, AKS automatically provisions an Azure Load Balancer. This load balancer takes incoming external traffic and securely routes it into the cluster, distributing it across the healthy Pods. Your developers never had to touch the Azure portal or write specific cloud routing rules. They simply defined a standard Kubernetes service in their Helm chart, and the managed platform handled the physical network provisioning. The real power of this enterprise architecture is the clean separation of concerns. 
Helm standardizes how the application is defined across environments, the registry secures the versioned artifacts, and the managed cloud platform ensures the underlying infrastructure actually stays alive to run it. Thanks for spending a few minutes with me. Until next time, take it easy.
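As one possible illustration of the environment-specific values mentioned here, an AKS production override might look roughly like this, assuming the chart exposes keys for the image, the Service type, and resource limits; the registry and application names are invented.

```yaml
# values-production.yaml for an AKS release (all names illustrative)
replicaCount: 5
image:
  repository: contosoregistry.azurecr.io/orders-api   # pulled from an Azure Container Registry
  tag: "2.1.0"
service:
  type: LoadBalancer        # AKS provisions an Azure Load Balancer for a Service of this type
  port: 443
resources:
  requests:
    cpu: "250m"
    memory: "256Mi"
  limits:
    cpu: "500m"
    memory: "512Mi"
```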
10

Getting Started With Minikube

3m 51s

Take your first steps into the Kubernetes ecosystem. We conclude the series with a guide on how to spin up a local infrastructure using Minikube and deploy your first app.

Hi, this is Alex from DEV STORIES DOT EU. Kubernetes & Helm Fundamentals, episode 10 of 10. You have the theory and the tools. But when you stare at an empty terminal, the gap between massive cloud infrastructure and your laptop feels incredibly wide. Closing that gap is exactly what getting started with Minikube is all about. A production Kubernetes cluster involves multiple machines acting as control planes and worker nodes. Provisioning that in the cloud costs money, takes time, and requires complex network configuration. For a solo developer or a small team designing a day-one architecture, you need a local environment that behaves exactly like production without the overhead. Minikube is a lightweight Kubernetes implementation that creates a virtual machine or a container directly on your laptop. Inside that isolated environment, it deploys a simple, single-node cluster. Usually, Kubernetes separates the control plane, which manages the state of the cluster, from the worker nodes, which run your application containers. Minikube combines them. Your local machine runs one node that handles both the management logic and the actual application workloads. It is not meant to serve production traffic. It exists solely so you can test your container orchestration safely. Here is the key insight. The interface you use to interact with Minikube is identical to the one you use for a massive cloud cluster. You use the command-line tool called kubectl. When you execute a kubectl command, it communicates directly with the Minikube control plane over its API. There is no special syntax to learn for local development. The workflow matches the official Kubernetes basics tutorial perfectly. You initiate the cluster with a simple start command. Minikube provisions the environment and automatically configures kubectl to point to your new local instance. From there, you use kubectl to create a deployment. You tell the control plane to pull a specific container image and run it. Once deployed, your application is running inside a pod on that single node. However, it is isolated from your host network. To access it from your laptop browser, you must expose it by creating a service. A service routes traffic from a specific port on your local machine into the correct port on the pod running inside Minikube. From this point, you can practice every core Kubernetes function. You can scale your application by telling kubectl to increase the replica count. Minikube will spin up additional pods alongside the first one. You can practice rolling updates by changing the container image version in your deployment. Minikube will gracefully terminate the old pods and start new ones, simulating a zero-downtime deployment. Because Minikube exposes a standard Kubernetes API, your external tooling integrates seamlessly. Helm works immediately. You can install complex databases, ingress controllers, or message queues using Helm charts exactly as you would in a live environment. Minikube even includes built-in addons, such as a local web dashboard, allowing you to visually inspect your cluster state, read logs, and monitor resource usage. The true power of a local cluster is parity. When you write a deployment configuration or a Helm chart that successfully runs on Minikube, you have already written the exact configuration that will run in the cloud. Take the time to explore the official Kubernetes documentation and try these commands hands-on. 
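For a hands-on first attempt, something like the manifest below could be applied to a fresh Minikube cluster; the names and image are placeholders, and the assumed workflow is minikube start, then kubectl apply -f on this file, then minikube service hello-web to open it in a browser.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web
spec:
  replicas: 1                      # raise this and re-apply to practice scaling
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
        - name: hello-web
          image: nginx:1.27        # change the tag and re-apply to practice a rolling update
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello-web
spec:
  type: NodePort                   # routes a port on the Minikube node to the Pods below
  selector:
    app: hello-web
  ports:
    - port: 80
      targetPort: 80
```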
If you have suggestions for topics we should cover in our next series, visit devstories dot eu and let us know. That is it for today. Thanks for listening, go build something cool.