Season 49 · 18 Episodes · 1h 4m · 2026

LangGraph

v1.1 — 2026 Edition. A comprehensive audio course on LangGraph, a framework for building stateful, long-running agentic workflows. Covers mental models, Graph vs Functional APIs, memory, time travel, human-in-the-loop, and production deployment.

LLM Orchestration · Multi-Agent Systems · AI/ML Frameworks
1
The Orchestration Problem: Why LangGraph?
An introduction to the core problems LangGraph solves. We explore the transition from simple linear workflows to long-running, stateful agent orchestration.
3m 35s
2
Thinking in LangGraph: The Mental Model
Learn how to translate complex AI tasks into the LangGraph mental model. We break down the fundamental concepts of Nodes, Edges, and State.
3m 47s
3
The Graph API: State and Reducers
Dive into the mechanics of the Graph API. We explain how TypedDict defines your schema and how reducers manage state updates from multiple nodes.
3m 03s
4
The Functional API: @entrypoint and @task
Explore the Functional API as an alternative to the Graph API. We discuss how to gain enterprise-grade persistence using standard Python control flow.
3m 39s
5
Managing Conversation History with MessagesState
Understand the challenges of chat history in AI agents. We explore MessagesState and the add_messages reducer to handle edits and deduplication.
3m 35s
6
Choosing Your Abstraction: Graph vs Functional
A framework for deciding which API to use. We contrast the explicit visual routing of the Graph API against the imperative flow of the Functional API.
3m 39s
7
Dynamic Routing and Conditional Edges
Move beyond hardcoded logic. We discuss how to use LLMs with structured outputs alongside conditional edges to dynamically route workflows.
3m 32s
8
Map-Reduce Workflows with the Send API
Master the Orchestrator-Worker pattern. We dive into the Send API to dynamically fan-out parallel worker nodes based on runtime plans.
4m 01s
9
Persistence: Threads and Checkpoints
Discover the foundation of statefulness. We explain Threads, Checkpoints, and Super-steps, showing how LangGraph guarantees survival from crashes.
3m 45s
10
Durable Execution and Idempotency
Understand the nuances of resuming workflows. We cover why side-effects must be idempotent and how to structure nodes for durable execution.
3m 38s
11
Human-in-the-Loop: Interrupts
Learn how to freeze agents mid-execution. We detail the interrupt function and how to resume workflows with external human approval.
3m 50s
12
Debugging the Past: Time Travel and Forking
Explore LangGraph's time-travel capabilities. We show how to navigate state history, replay past checkpoints, and fork alternative execution paths.
3m 25s
13
Long-Term Memory: Stores Across Threads
Move beyond isolated threads. We introduce the Store interface and explain how to grant your agents persistent, cross-session memory.
3m 19s
14
Streaming Execution and the v2 Format
Enhance UX with real-time feedback. We break down stream modes (values, updates, messages) and the unified v2 StreamPart format.
3m 55s
15
Composing Complexity: Subgraphs
Scale your workflows by treating compiled graphs as nodes. We discuss composing subgraphs and managing shared versus private state schemas.
3m 08s
16
Subgraph Persistence and Multi-Agent Patterns
Master memory scoping in multi-agent systems. We explain the difference between per-invocation, per-thread, and stateless subgraph persistence.
3m 24s
17
Application Structure and Deployment Readiness
Transition from prototypes to production. We explore langgraph.json, proper file structure, and dependency management for stateful deployments.
3m 56s
18
Testing Graph Execution End-to-End
Learn robust testing strategies for graph workflows. We cover pytest integration, isolated node execution, and simulating partial state.
3m 37s

Episodes

1

The Orchestration Problem: Why LangGraph?

3m 35s

An introduction to the core problems LangGraph solves. We explore the transition from simple linear workflows to long-running, stateful agent orchestration.

Hi, this is Alex from DEV STORIES DOT EU. LangGraph, episode 1 of 18. Most large language model scripts work perfectly for a quick prompt and a fast response. But when a task takes twenty minutes to run and the network drops halfway through, everything falls apart. You lose your progress, your context, and your API budget. The Orchestration Problem: Why LangGraph? is exactly about fixing this fragility. LangGraph is an orchestration framework built to handle stateful, multi-actor applications. You might hear the name and assume it is just a feature of LangChain. It is not. LangGraph is a lower-level orchestration engine, and you do not need to use LangChain at all to use it. It exists specifically to model agent workflows as stateful graphs rather than simple linear scripts. Standard scripts execute in memory. If a script stops unexpectedly, all that runtime data vanishes. Consider a scenario where you have a background agent researching a one hundred page document. The agent has been reading, extracting facts, and cross-referencing information for twenty uninterrupted minutes. If a server timeout occurs at minute nineteen, a standard script drops all of that state. You have to start the entire job over. LangGraph solves this orchestration problem through durable execution. By modeling your workflow as a graph, every distinct step becomes a node, and the logical connections between them are edges. As the application moves from one node to the next, LangGraph automatically saves its progress. It treats long-running processes as a series of safe checkpoints. If the system crashes, LangGraph resumes execution exactly where it left off. This checkpointing mechanism relies on comprehensive memory. Memory in LangGraph is not just a running list of chat messages. It is the entire state of the graph. When a node finishes its processing, it updates a shared state object. The next node in the sequence reads its input directly from that state. This means memory persists across the entire lifecycle of the application. The background agent researching your document does not forget the critical data it found on page five when it finally reaches page ninety, because the graph state holds onto it securely. Here is the key insight. Because the graph state is paused and saved cleanly between steps, you gain the ability to put a human in the loop. Sometimes an autonomous agent reaches a decision point where it needs permission before proceeding, like sending a final email or executing a financial transaction. In a standard script, pausing for a user to click a button often leads to connection timeouts. In LangGraph, you simply configure a specific node to halt execution. The system goes to sleep and preserves the current state perfectly. A human operator can then review the gathered data, approve the next action, or even manually modify the agent state before hitting continue. Once approved, the graph wakes up and resumes its work with the updated context. The core takeaway is that building complex agents relies heavily on managing state and failure. LangGraph shifts your architecture away from brittle, memory-bound scripts and toward resilient graphs that survive interruptions, remember their past, and wait patiently for human guidance. If you would like to help support the show, you can search for DevStoriesEU on Patreon. That is all for this one. Thanks for listening, and keep building!
2

Thinking in LangGraph: The Mental Model

3m 47s

Learn how to translate complex AI tasks into the LangGraph mental model. We break down the fundamental concepts of Nodes, Edges, and State.

Hi, this is Alex from DEV STORIES DOT EU. LangGraph, episode 2 of 18. You build an artificial intelligence agent by stuffing instructions, edge cases, and examples into one massive prompt, then you send it off and hope it does the right thing. It works until it fails, and debugging the result is frustrating. To fix this, you have to stop writing monolithic prompts and start designing systems. That brings us to Thinking in LangGraph: The Mental Model. LangGraph forces you to step away from linear scripts. It asks you to think about your application as a state machine. The very first step is to simply define your process. Before writing any code, look at what you want the system to achieve and break it down into discrete actions. Consider a customer support triage system. A user sends a message. A human operator would read the email, decide if it is a billing issue or technical support, and then draft an appropriate reply. That sequence is your process. You map that process to a LangGraph workflow using three core components: State, Nodes, and Edges. Let us start with State. State is the shared memory of your graph. It is a structure that holds the context of your entire operation at any given moment. Every step in your workflow will read from this state and write updates back to it. Listeners often make a specific mistake here. They try to store fully formatted prompt strings inside the State. Do not do this. The State should only hold raw data. It holds the original customer email text, the extracted category, or a raw list of previous messages. Formatting happens on-demand, later, exactly inside the step where it is needed. Next, we have Nodes. If State is the memory, Nodes are the workers doing the actual tasks. A node is just a Python function that executes a single logical step of your process. It receives the current State, performs an action, and returns an update. In our triage example, you would create three separate nodes. The first is a Read node. It takes the incoming email and saves the raw text to the State. The second is a Classify node. It looks at the raw text in the State, asks a language model to categorize it as billing or technical, and saves that resulting category back to the State. The third is a Draft node. It reads both the email and the category from the State, formats them into a prompt locally, and generates a response. Each node does exactly one job. Finally, you need a way to connect these workers. That is the role of Edges. Edges represent the routing logic. They dictate what happens after a node finishes its work. A standard edge simply says, once the Read node finishes, always go to the Classify node. But LangGraph also uses conditional edges. This is where it gets interesting. After the Classify node, you can use a conditional edge to inspect the State. If the category is billing, the edge routes the flow to a specific billing Draft node. If it is technical, it routes to a technical Draft node. Edges make traffic decisions based on the data your nodes just produced. You start with the process, you isolate the data into a shared State, you define the workers as Nodes, and you dictate the flow with Edges. By breaking the problem apart, you isolate failures. Treat your application not as a single text generator, but as a coordinated assembly line where raw data moves systematically from one specialized worker to the next. Thanks for listening, happy coding everyone!
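Here is a minimal sketch of that triage graph, assuming LangGraph's Python Graph API; the classifier is stubbed with a keyword check (no model call) so the snippet stays self-contained:

```python
from typing_extensions import TypedDict

from langgraph.graph import StateGraph, START, END

# State holds raw data only, never pre-formatted prompt strings.
class TriageState(TypedDict):
    email: str
    category: str
    reply: str

def read_email(state: TriageState) -> dict:
    # In a real system this would pull the message from an inbox.
    return {"email": state["email"].strip()}

def classify(state: TriageState) -> dict:
    # Stand-in for an LLM call: categorize the raw text.
    is_billing = "invoice" in state["email"].lower()
    return {"category": "billing" if is_billing else "technical"}

def draft_billing(state: TriageState) -> dict:
    # Prompt formatting happens here, on demand, not in the State.
    return {"reply": f"Billing team reply to: {state['email']}"}

def draft_technical(state: TriageState) -> dict:
    return {"reply": f"Tech support reply to: {state['email']}"}

builder = StateGraph(TriageState)
builder.add_node("read", read_email)
builder.add_node("classify", classify)
builder.add_node("draft_billing", draft_billing)
builder.add_node("draft_technical", draft_technical)

builder.add_edge(START, "read")
builder.add_edge("read", "classify")
# Conditional edge: route on the category the classifier wrote to State.
builder.add_conditional_edges(
    "classify",
    lambda state: state["category"],
    {"billing": "draft_billing", "technical": "draft_technical"},
)
builder.add_edge("draft_billing", END)
builder.add_edge("draft_technical", END)

graph = builder.compile()
print(graph.invoke({"email": "My invoice total is wrong"}))
```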
3

The Graph API: State and Reducers

3m 03s

Dive into the mechanics of the Graph API. We explain how TypedDict defines your schema and how reducers manage state updates from multiple nodes.

Hi, this is Alex from DEV STORIES DOT EU. LangGraph, episode 3 of 18. Two parallel functions finish executing at the exact same millisecond and try to write to the exact same shared list. Usually, one overwrites the other, and data vanishes. To prevent this, LangGraph relies on The Graph API: State and Reducers. The foundation of any LangGraph workflow is its state. You define this using a standard Python TypedDict. This dictionary lays out the exact keys your graph will track and the data types for each key. Think of it as the schema that gets passed from node to node as your graph runs. Here is the key insight. Nodes in LangGraph do not mutate the state directly. A node receives a copy of the current state, does its work, and returns a dictionary of updates. A common mistake is assuming that returning a dictionary replaces the entire state object. It does not. If your state has five keys and your node returns a dictionary with just one key, LangGraph leaves the other four untouched and only applies your specific update. How does LangGraph apply that update? That is where reducers come in. A reducer is simply a function that dictates how a returned value merges with the existing value for a specific key. By default, LangGraph uses an overwrite reducer. If your node returns a new string for a status key, the old string is gone, replaced entirely by the new one. Sometimes overwriting is exactly what you want to avoid. Consider a parallel data fetching workflow. You have a shared state key called results, which is a list. You spin up two nodes running at the same time to fetch different batches of data. If both nodes return a dictionary updating the results key, the default overwrite behavior causes whichever node finishes last to erase the other's work. To fix this, you annotate the results key in your TypedDict with a specific reducer, like Python's built-in operator dot add. Now, when the two nodes return their lists, the reducer acts as a traffic cop. It takes the existing list and safely appends the outputs from both nodes. Nothing gets dropped. There is one edge case. What if you have an append-style reducer on your results key, but you reach a point in your graph where you genuinely need to wipe the list clean and start over? If your node returns an empty list, the reducer just adds an empty list to the existing one, leaving the old data intact. For this scenario, LangGraph provides a special Overwrite type. When your node wraps its update in an Overwrite object, LangGraph detects it and bypasses the reducer entirely. It throws away the old list and forces a hard reset. State in a complex graph is not a fragile global variable being constantly mutated, but an append-only log of controlled updates governed by clear reduction rules. That is all for this one. Thanks for listening, and keep building!
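A small sketch of that parallel fan-in, using the standard Annotated-reducer pattern; the two fetch nodes are toy stand-ins for real data sources:

```python
import operator
from typing import Annotated

from typing_extensions import TypedDict

from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    # operator.add concatenates lists, so parallel updates append
    # instead of overwriting each other.
    results: Annotated[list, operator.add]
    status: str  # no reducer: last write wins (default overwrite)

def fetch_batch_a(state: State) -> dict:
    return {"results": ["a1", "a2"]}

def fetch_batch_b(state: State) -> dict:
    return {"results": ["b1"]}

def summarize(state: State) -> dict:
    return {"status": f"collected {len(state['results'])} records"}

builder = StateGraph(State)
builder.add_node("fetch_a", fetch_batch_a)
builder.add_node("fetch_b", fetch_batch_b)
builder.add_node("summarize", summarize)

# Both fetch nodes run in the same super-step, in parallel.
builder.add_edge(START, "fetch_a")
builder.add_edge(START, "fetch_b")
builder.add_edge("fetch_a", "summarize")
builder.add_edge("fetch_b", "summarize")
builder.add_edge("summarize", END)

graph = builder.compile()
print(graph.invoke({"results": []}))
# e.g. {'results': ['a1', 'a2', 'b1'], 'status': 'collected 3 records'}
```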
4

The Functional API: @entrypoint and @task

3m 39s

Explore the Functional API as an alternative to the Graph API. We discuss how to gain enterprise-grade persistence using standard Python control flow.

Hi, this is Alex from DEV STORIES DOT EU. LangGraph, episode 4 of 18. Sometimes you just want to write a standard Python script with normal if-statements and for-loops, but you still need enterprise-grade state persistence. You do not want to construct an explicit state machine just to run a few language model calls in sequence. This is where the Functional API, specifically the entrypoint and task decorators, resolves the tension. Building applications with explicit nodes and edges requires you to manually define how data routes from one step to the next. That structure provides immense control, but it can feel heavy when your logic is highly sequential or relies on standard programming loops. The Functional API allows you to write normal, top-to-bottom Python code while retaining built-in streaming and recovery features. Instead of instantiating a graph object, you apply decorators to your existing Python functions. You start with the task decorator. You apply this to the individual units of work in your application. Think of a task as a discrete step that does something specific, like querying a database, calculating a metric, or prompting a model. When a function carries the task decorator, the framework wraps it in a tracking layer to monitor its execution. Next, you use the entrypoint decorator. You place this on the main orchestrating function that directs the overall flow. Inside this entrypoint function, you call your decorated tasks using standard Python control flow. You assign the output of a task to a variable, then pass that variable to the next task. You can use try-except blocks, list comprehensions, or while-loops. The orchestration logic behaves exactly how native Python behaves. Because the code looks entirely standard, you might assume it lacks the memory of a formal state structure. This is a common misconception. The Functional API still automatically checkpoints your progress behind the scenes. Every time a task completes, LangGraph intercepts the return value and saves it to a persistent store. The framework securely records the inputs and outputs of every decorated function as they happen. Consider an automated essay-writing script. You define three decorated tasks: a function to generate an outline, a function to write a paragraph, and a function to review the draft. Inside your main entrypoint function, you call the outline generator first. Next, you write a standard for-loop that iterates over the sections of that outline, calling the write paragraph task for each one. You append the results to a local list. Finally, you run the review task. You use a simple if-statement to check the resulting score. If the score is poor, your code simply triggers a while-loop to rewrite specific paragraphs until the score improves. Here is the key insight. Because of the hidden checkpointing, if your script encounters a network timeout while writing the third paragraph, you do not lose your work. When you restart the process with the same thread identifier, LangGraph knows the outline and the first two paragraphs are already complete. It skips executing those tasks entirely, retrieves their cached outputs from the state store, and resumes execution precisely at the third paragraph. The Functional API shifts the cognitive load from visualizing abstract routing topologies back to reading code top-to-bottom, giving you the resilience of a state machine with the simplicity of a plain script. That is all for this one. Thanks for listening, and keep building!
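A condensed sketch of the essay workflow, assuming the Functional API described here; note that @task calls return futures, so you read them with .result(), and the review score is a placeholder:

```python
from langgraph.checkpoint.memory import InMemorySaver
from langgraph.func import entrypoint, task

@task
def make_outline(topic: str) -> list[str]:
    return [f"{topic}: intro", f"{topic}: body", f"{topic}: conclusion"]

@task
def write_paragraph(section: str) -> str:
    return f"A paragraph about {section}."

@task
def review(draft: str) -> int:
    return 8  # placeholder score; a real task would call a model here

@entrypoint(checkpointer=InMemorySaver())
def essay(topic: str) -> dict:
    # Plain Python control flow. Each completed task is checkpointed, so a
    # resume with the same thread_id skips work that already finished.
    outline = make_outline(topic).result()
    paragraphs = [write_paragraph(section).result() for section in outline]
    draft = "\n".join(paragraphs)
    score = review(draft).result()
    while score < 7:  # rewrite loop for poor reviews
        paragraphs[-1] = write_paragraph(outline[-1]).result()
        draft = "\n".join(paragraphs)
        score = review(draft).result()
    return {"draft": draft, "score": score}

config = {"configurable": {"thread_id": "essay-1"}}
print(essay.invoke("durable execution", config))
```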
5

Managing Conversation History with MessagesState

3m 35s

Understand the challenges of chat history in AI agents. We explore MessagesState and the add_messages reducer to handle edits and deduplication.

Hi, this is Alex from DEV STORIES DOT EU. LangGraph, episode 5 of 18. You build a chat app, and a user spots a typo in their prompt. They click edit, correct it, and hit send. But instead of replacing their old message, your backend simply tacks the corrected version onto the end of the history, leaving the original typo right where it was as a duplicate ghost. Managing Conversation History with MessagesState is how you prevent this. When developers build their first graph, they usually define a custom state dictionary to hold their chat history. A common mistake is using standard list appending to manage this history. They attach the standard operator dot add function to their messages list. This tells the graph to simply take any new messages and glue them to the end of the existing array. This append-only approach works fine for a simple ping-pong bot where a user speaks, the AI replies, and the history grows sequentially. But it breaks completely when state needs to be mutable. If a human edits a past prompt, or an agent decides to regenerate its last response, standard addition cannot handle it. You end up with duplicates. LangGraph provides a built-in state structure to solve this called MessagesState. It contains a single key called messages. Here is the key insight. The power of MessagesState is not the key itself, but the specific reducer function attached to it, called add messages. The add messages reducer does not just blindly append data. It tracks message IDs. Every time a new message enters the state, the reducer checks its unique ID. If that ID already exists anywhere in the conversation history, the reducer overwrites the old message with the new one. If the ID is new, or if the message does not have an ID yet, the reducer appends it to the end of the list. Think back to our typo scenario. The human user sends a prompt. The system gives that message an ID of 123. The user realizes their mistake, edits the text, and submits the correction. The frontend sends the new text, explicitly tagging it with ID 123. When that data hits the graph, the add messages reducer scans the history, finds the original message at ID 123, and swaps the text in place. The duplicate ghost is gone. The conversation flows exactly as intended. Beyond managing IDs, the add messages reducer also handles data deserialization. In a production application, your messages often arrive in different formats. Your frontend might send raw JSON dictionaries containing role and content strings. Your internal graph nodes might generate native LangChain message objects. The reducer acts as a universal translator for these inputs. If you pass a list of plain Python dictionaries into the state, the add messages function automatically converts them into the correct LangChain message classes. You do not need to write boilerplate code to parse a dictionary into a HumanMessage or AIMessage before updating the state. It normalizes the data for you. When building chat agents, state history is not an append-only log, it is a living document, and tying your updates to unique message IDs is what keeps that document accurate. Thanks for tuning in. Until next time!
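You can see the ID-based merge rules by calling the add_messages reducer directly; MessagesState simply wires this reducer to its messages key. A minimal sketch:

```python
from langchain_core.messages import HumanMessage
from langgraph.graph.message import add_messages

# The typo'd original, with an explicit ID.
history = [HumanMessage(content="Wrte me a haiku", id="123")]

# Same ID: add_messages swaps the old message out in place.
history = add_messages(history, [HumanMessage(content="Write me a haiku", id="123")])

# New message without a matching ID: appended. Plain role/content dicts
# are coerced into proper message objects automatically.
history = add_messages(history, [{"role": "user", "content": "Make it about rain"}])

for message in history:
    print(message.type, repr(message.content))
# human 'Write me a haiku'
# human 'Make it about rain'
```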
6

Choosing Your Abstraction: Graph vs Functional

3m 39s

A framework for deciding which API to use. We contrast the explicit visual routing of the Graph API against the imperative flow of the Functional API.

Hi, this is Alex from DEV STORIES DOT EU. LangGraph, episode 6 of 18. Pick the wrong design paradigm early on, and you will either write a hundred lines of boilerplate for a simple script, or weave a spaghetti nightmare out of basic Python functions. Today, we are focusing on Choosing Your Abstraction: Graph versus Functional. It is easy to assume one of these APIs is inherently more powerful or more production-ready than the other. That is false. Under the hood, both the Graph API and the Functional API compile down to the exact same runtime engine. They both support persistence, streaming, and execution control in exactly the same way. The choice between them is purely about your mental model and how you want to express your logic. Let us look at the Functional API first. This relies on standard, imperative Python control flow. You write normal Python functions, flag them with a decorator, and route your execution using standard if-statements and loops. State management here is entirely function-scoped. Data flows strictly from the return value of one function into the arguments of the next. There is no shared global memory object floating around in the background. If your workflow is linear, or if it has predictable, tightly scoped logic, the Functional API keeps your code lean and familiar. You avoid the overhead of defining graph structures entirely. The Graph API requires a different mindset. Instead of calling functions directly, you define a shared, global state schema. You then write nodes, which are small functions whose only job is to read and mutate that shared state. Finally, you explicitly wire those nodes together using edges. Routing is not handled by a conditional statement hidden deep inside a function body. Instead, the logic that dictates where the application goes next is pulled out into explicit conditional edges declared at the top level of the graph. Here is the key insight. You choose between them based on how your system handles state and routing over time. Picture a developer building a basic data extraction tool. It runs a single language model, parses the output, and saves it. The Functional API is perfect for this. It is fast to write and easy to read. But fast forward three months. That simple script is being refactored into a complex multi-agent system. Now you have a researcher agent handing off data to a writer agent, a critic agent pushing back with corrections, and an execution pause waiting for a human manager to approve the final draft. If you try to build that asynchronous multi-agent workflow with the Functional API, the imperative approach breaks down. You end up passing massive data payloads up and down deep function call stacks. Your routing logic gets buried inside deeply nested conditionals. This is the exact moment you migrate to the Graph API. The Graph abstraction shines here because it decouples state from execution. Because the state is global and shared, your individual agent nodes do not need to pass heavy data structures to each other. A node simply reads the shared state, updates the specific key it is responsible for, and finishes. The explicit edges take over, making the routing highly visible. You can look at the graph definition and immediately map out the entire workflow without reading a single line of business logic. You use the Functional API when the control flow is simple enough to read top-to-bottom, but you switch to the Graph API when the routing becomes complex enough that you need to draw it on a whiteboard. 
Thanks for listening. Take care, everyone.
7

Dynamic Routing and Conditional Edges

3m 32s

Move beyond hardcoded logic. We discuss how to use LLMs with structured outputs alongside conditional edges to dynamically route workflows.

Hi, this is Alex from DEV STORIES DOT EU. LangGraph, episode 7 of 18. Hardcoded conditionals only go so far when you are building an agent. If a user asks a complex question, a simple keyword search cannot reliably decide what your application should do next. What if the AI itself could dictate the path of your workflow? This is where dynamic routing and conditional edges come in. In a standard graph setup, you connect node A to node B. It is a static, guaranteed path. But when you are building an intelligent routing mechanism, the path needs to change based on the incoming data. You might assume that edges in LangGraph only accept hardcoded string connections. That is not the case. An edge can be a Python function that reads the current state of your graph and dynamically computes the name of the next node. You attach this logic to your graph using the add conditional edges method. This method requires three components. First, the starting node. Second, a routing function. Third, a dictionary that maps the possible string outputs of your routing function to the actual destination nodes in your graph. Here is the key insight. The most reliable way to drive a conditional edge is to combine it with a large language model generating structured data. You do not want the routing function itself to perform complex evaluation or natural language processing. Instead, you have an upstream node where the model is forced to return a strict structure, like a Pydantic model. Consider a customer service router. A user sends a message. The first node in your graph is an intent classifier. Inside this node, you pass the user message to a language model and require it to return a structured output with a single field called intent. The model evaluates the text and populates that field with a specific value, such as billing, tech support, or sales. This structured response is then saved to the graph state. Now the conditional edge takes over. The edge is attached to the classifier node. When the classifier node finishes, the conditional edge triggers a short Python function. This function takes the graph state as its input, looks inside the state, and extracts the intent value the model just generated. If the intent is billing, the function returns the string billing. The conditional edge looks at its mapping dictionary, sees that the string billing corresponds to your billing node, and hands execution over to that specific node. If the intent is tech support, it returns a different string, routing the flow to the tech support node. You are using the language model for its reasoning capabilities to categorize the input, but you are keeping the actual routing logic deterministic. The Python function in the conditional edge is just reading a variable and returning a string. It is highly predictable and easy to test. The single most useful takeaway here is that you should always decouple the decision from the direction. Let the language model decide the intent and write it to the state, then use a pure Python conditional edge to read that state and steer the graph. Before we wrap up, if you find these episodes useful and want to help support the show, you can search for DevStoriesEU on Patreon — it really helps us out. That is all for this one. Thanks for listening, and keep building!
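A sketch of that decoupled router. The init_chat_model call and model name are illustrative assumptions (any LangChain chat model with with_structured_output works, and it needs your provider's API key):

```python
from typing import Literal

from langchain.chat_models import init_chat_model
from pydantic import BaseModel
from typing_extensions import TypedDict

from langgraph.graph import StateGraph, START, END

class Intent(BaseModel):
    intent: Literal["billing", "tech_support", "sales"]

class State(TypedDict):
    message: str
    intent: str
    response: str

# Illustrative model choice; requires the matching provider API key.
model = init_chat_model("openai:gpt-4o-mini")

def classify(state: State) -> dict:
    # The LLM does the reasoning and must return the strict structure.
    result = model.with_structured_output(Intent).invoke(state["message"])
    return {"intent": result.intent}

def route(state: State) -> str:
    # Deterministic routing: read a state key, return a string. Easy to test.
    return state["intent"]

builder = StateGraph(State)
builder.add_node("classify", classify)
builder.add_node("billing", lambda s: {"response": "billing team reply"})
builder.add_node("tech_support", lambda s: {"response": "tech support reply"})
builder.add_node("sales", lambda s: {"response": "sales reply"})

builder.add_edge(START, "classify")
builder.add_conditional_edges(
    "classify",
    route,
    {"billing": "billing", "tech_support": "tech_support", "sales": "sales"},
)
for handler in ("billing", "tech_support", "sales"):
    builder.add_edge(handler, END)

graph = builder.compile()
print(graph.invoke({"message": "My last invoice charged me twice"}))
```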
8

Map-Reduce Workflows with the Send API

4m 01s

Master the Orchestrator-Worker pattern. We dive into the Send API to dynamically fan-out parallel worker nodes based on runtime plans.

Hi, this is Alex from DEV STORIES DOT EU. LangGraph, episode 8 of 18. You cannot hardcode your execution paths when you have no idea how many sub-tasks your agent will decide to create until it actually runs. If your system decides on the fly that it needs to process three items, or thirty items, standard static routing will fail. To fix this, you need Map-Reduce Workflows with the Send API. A very common mistake when building in LangGraph is trying to use standard conditional edges for dynamic fan-out. Conditional edges are perfect when you want to choose between known, predetermined paths based on a logic check. However, they fall apart when you need to spawn an unknown number of identical parallel tasks at runtime. Standard parallelization allows you to route to fixed multiple nodes. You name the nodes, and the graph triggers them. But what happens when you need to run the exact same node multiple times simultaneously, each with a different piece of data? You cannot do this with basic routing. This brings us to the Orchestrator-worker pattern. In this architecture, a central node looks at the incoming data, calculates how many separate tasks are required, and dispatches dynamic workers to handle them concurrently. LangGraph enables this pattern specifically through the Send API. Consider an agent tasked with writing a comprehensive research report. The first node acts as the orchestrator. It reads the user prompt and generates an outline. Depending on the complexity of the topic, this outline might contain three sections, or it might contain twelve. You want a separate worker node to draft each section at the exact same time. To achieve this, you define a conditional edge function immediately following your orchestrator node. Instead of returning a simple string that points to the next static node in the graph, this edge function returns a list of Send objects. Here is the key insight. A Send object packages a destination and its data together. It takes two arguments. The first argument is the name of the worker node you want to trigger. The second argument is the specific payload for that isolated worker. In our report scenario, the orchestrator function iterates through the generated outline. For every section topic it finds, it creates a new Send object pointing to a single node called draft_section, passing the individual topic string as the payload. When LangGraph evaluates this edge function, it receives the list of Send objects. It then dynamically spins up a parallel instance of the draft_section node for every single item in that list. If the orchestrator generated an outline with seven sections, LangGraph launches seven parallel drafting nodes. Each node runs identical code, but operates on its own unique payload. Generating these dynamic workers is the map phase. Gathering their independent outputs is the reduce phase. Because these worker nodes run concurrently, they cannot safely overwrite a single string in your graph state. Your overall state must be configured to collect multiple incoming updates simultaneously. You handle this by attaching a reducer function to the specific state field that will hold your drafted sections, instructing it to append new items to a list rather than overwriting the previous value. As each parallel draft worker finishes writing, it returns its text block. LangGraph catches these responses and uses your reducer to safely stack each text block into the shared array. 
Once every dynamic worker completes its execution, the entire parallel step resolves. The workflow then moves forward, carrying a complete, populated list of all drafted sections. The Send API takes parallel execution out of your static graph definition and puts it directly in the hands of your runtime data. Thanks for hanging out. Hope you picked up something new.
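Here is a compact sketch of the orchestrator-worker fan-out; the outline generation is a stub standing in for an LLM planning step:

```python
import operator
from typing import Annotated

from typing_extensions import TypedDict

from langgraph.graph import StateGraph, START, END
from langgraph.types import Send

class ReportState(TypedDict):
    topic: str
    sections: list[str]
    drafts: Annotated[list[str], operator.add]  # reducer collects parallel output

def orchestrator(state: ReportState) -> dict:
    # Stand-in for an LLM planning step: the outline length is unknown
    # until runtime.
    return {"sections": [f"{state['topic']} part {i}" for i in range(1, 4)]}

def fan_out(state: ReportState) -> list[Send]:
    # One Send per section: same worker node, different payload each time.
    return [Send("draft_section", {"section": s}) for s in state["sections"]]

def draft_section(payload: dict) -> dict:
    # Each parallel worker receives only its own Send payload.
    return {"drafts": [f"Draft of {payload['section']}"]}

builder = StateGraph(ReportState)
builder.add_node("orchestrator", orchestrator)
builder.add_node("draft_section", draft_section)
builder.add_edge(START, "orchestrator")
builder.add_conditional_edges("orchestrator", fan_out, ["draft_section"])
builder.add_edge("draft_section", END)

graph = builder.compile()
print(graph.invoke({"topic": "LangGraph"})["drafts"])
```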
9

Persistence: Threads and Checkpoints

3m 45s

Discover the foundation of statefulness. We explain Threads, Checkpoints, and Super-steps, showing how LangGraph guarantees survival from crashes.

Hi, this is Alex from DEV STORIES DOT EU. LangGraph, episode 9 of 18. Your server crashes mid-thought while an agent is processing a massive dataset. It should not have to start over from zero. It should pick up exactly where it left off. That resilience is what we are covering today with Persistence, specifically Threads and Checkpoints. To add persistence to a LangGraph application, you need to understand the concept of a thread. A thread represents a single, isolated execution sequence or a specific user conversation. It holds the working state of the graph as it moves from node to node. Let me clear something up right away. People often confuse thread memory with long-term cross-session memory. A thread is not a global database where your agent remembers facts across different tasks forever. It is the short-term working memory strictly tied to one ongoing sequence. You enable this memory by providing a checkpointer when you compile your graph. A checkpointer is an object that handles saving and loading the graph state to a storage backend. Once your graph is compiled with a checkpointer, you trigger the persistence by passing a configuration object containing a thread ID whenever you invoke the graph. This ID is the unique key that the checkpointer uses to track the history of that specific run. When you run the graph with that thread ID, the checkpointer automatically saves the state. But it does not save continuously. It saves at specific boundaries called super-steps. A super-step is a distinct execution cycle in the graph. If your graph runs node A followed by node B, that is two super-steps. If your graph branches and runs node C and node D at the same time, that parallel execution is grouped into one single super-step. The checkpointer does not interrupt a node while it is working. It waits for the super-step boundary. Once all the nodes scheduled for that super-step finish executing and return their updates, LangGraph creates a checkpoint. This checkpoint contains a state snapshot, capturing exactly what the graph state variables look like at that exact moment. Let us look at how this behaves in practice. Suppose you have an agent analyzing a massive dataset. The graph has four steps. Step one fetches the data. Step two cleans it. Step three runs an expensive analysis. Step four formats the summary. You start the run, passing a configuration with thread ID one two three. The graph successfully completes the fetch, clean, and analysis steps. At the end of step three, the checkpointer saves a state snapshot. Then, before step four can finish, your server crashes. Because you used a checkpointer and a thread ID, the state is safe. When your server restarts, you simply invoke the graph again, passing the exact same thread ID one two three. The checkpointer looks up the latest checkpoint. It finds the state snapshot saved right after the analysis step. The graph loads that state and resumes execution immediately at step four. The fetch, clean, and analysis nodes are completely skipped because their outputs are already safely stored in the thread checkpoint. Here is the key insight. By compiling your graph with a checkpointer and tying your execution to a thread ID, you turn brittle in-memory operations into durable workflows that survive interruptions automatically. That is all for this one. Thanks for listening, and keep building!
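A minimal sketch of compiling with a checkpointer and pinning execution to a thread ID; InMemorySaver is for local experiments, so assume a database-backed saver in production:

```python
from typing_extensions import TypedDict

from langgraph.checkpoint.memory import InMemorySaver
from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    data: str

def fetch(state: State) -> dict:
    return {"data": "raw records"}

def clean(state: State) -> dict:
    return {"data": state["data"] + " (cleaned)"}

builder = StateGraph(State)
builder.add_node("fetch", fetch)
builder.add_node("clean", clean)
builder.add_edge(START, "fetch")
builder.add_edge("fetch", "clean")
builder.add_edge("clean", END)

# Compile with a checkpointer. InMemorySaver is fine for experiments;
# swap in a SQLite or Postgres saver so state survives process restarts.
graph = builder.compile(checkpointer=InMemorySaver())

# The thread_id is the key every checkpoint for this run is filed under.
config = {"configurable": {"thread_id": "123"}}
graph.invoke({"data": ""}, config)

# Re-invoking or inspecting with the same thread_id loads the latest snapshot.
print(graph.get_state(config).values)
```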
10

Durable Execution and Idempotency

3m 38s

Understand the nuances of resuming workflows. We cover why side-effects must be idempotent and how to structure nodes for durable execution.

Hi, this is Alex from DEV STORIES DOT EU. LangGraph, episode 10 of 18. Your workflow processes a payment, hits a rate limit on the next step, and crashes. When the system recovers and resumes the workflow, your customer gets billed a second time. Your code did not change, but the assumption you made about how the graph resumes was wrong. The fix requires understanding durable execution and idempotency. Many developers assume that when a long-running process pauses or fails, resuming it picks up from the exact line of Python code where it stopped. They expect the runtime to magically remember local variables mid-function. That is not what happens. LangGraph does not freeze the Python interpreter in place. State is only saved at the boundaries between nodes. Durable execution in LangGraph means the system tracks your progress by persisting the graph state after a node finishes its work and returns. If a node fails halfway through its logic, the system has no record of its partial progress. The last known good state is the one passed into the node when it started. When you restart or retry the graph, execution resumes by re-running that entire failed node from its very first line. Think about the payment scenario. Suppose you write a single node that performs two actions. First, it calls an external API to charge a credit card. Second, it updates a remote database to record the transaction. The credit card charge succeeds, but the database connection times out, causing the node to crash. The graph state does not advance. When the workflow resumes, it passes the old state back into that same node. The node starts over. It hits the external API and charges the credit card a second time. Here is the key insight. Because nodes can restart from the beginning, any side effect inside a node must be idempotent. Idempotency is a property where executing an operation multiple times yields the exact same result as executing it once. If your node interacts with the outside world, you have to write the code assuming it will run multiple times for the same step. How do you ensure this safety? You have two practical approaches. The first is leveraging idempotency keys with your external services. When you call the payment API, you pass a unique identifier derived from the current graph state. If the node crashes and re-runs, it sends the same unique identifier. The external service recognizes the duplicate request and returns a success response without actually moving money again. The second approach is structural graph design. If a specific operation is not natively idempotent, do not group it with other steps that might fail. Put the dangerous operation inside its own dedicated node. Make it the only thing that node does. If you put the payment charge in node A and the database update in node B, a database timeout only crashes node B. The graph resumes at node B. The payment charge in node A is completely safe, because node A finished, and the graph saved its state before moving forward. You control where the system saves its progress by how you draw your node boundaries. Never put an irreversible, non-idempotent action in the same node as something that might randomly fail. That is all for this one. Thanks for listening, and keep building!
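A sketch of both safety approaches, with charge_card and record_transaction as hypothetical stand-ins for your payment provider and database clients:

```python
import hashlib

from typing_extensions import TypedDict

class PaymentState(TypedDict):
    order_id: str
    charged: bool

# Hypothetical stand-ins for your payment provider and database clients.
def charge_card(amount_cents: int, idempotency_key: str) -> None: ...
def record_transaction(order_id: str) -> None: ...

def charge_node(state: PaymentState) -> dict:
    # Approach 1: derive the idempotency key from stable graph state, so a
    # re-run of this node sends the same key and the provider deduplicates.
    key = hashlib.sha256(f"charge:{state['order_id']}".encode()).hexdigest()
    charge_card(5000, idempotency_key=key)
    return {"charged": True}

def record_node(state: PaymentState) -> dict:
    # Approach 2: isolate the risky write in its own node. If this times out,
    # the graph resumes here; charge_node already finished and was saved.
    record_transaction(state["order_id"])
    return {}
```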
11

Human-in-the-Loop: Interrupts

3m 50s

Learn how to freeze agents mid-execution. We detail the interrupt function and how to resume workflows with external human approval.

Hi, this is Alex from DEV STORIES DOT EU. LangGraph, episode 11 of 18. Sometimes an AI should not have the final say. You might want to freeze an agent mid-thought, ask a human for approval, and inject their answer directly back into the running logic. That is exactly what Human-in-the-Loop interrupts do. When you need a human to make a decision inside a LangGraph workflow, you use a specific function called interrupt. It is vital to understand what this actually does under the hood. Listeners might confuse this with a standard Python input prompt. It is not. A standard input prompt blocks an active thread, tying up system memory while waiting for a user to press a key. In LangGraph, calling interrupt behaves very differently. It fully serializes the graph state, saves it to your checkpointer database, and suspends execution entirely. The graph goes to sleep. It can wait indefinitely for a response without consuming any active computing resources. The flow happens in two distinct phases: pausing and resuming. First, let us look at pausing. Inside one of your nodes, your agent reaches a point where it needs human authorization. At that exact line of code, you call the interrupt function. You pass a payload into this function, which is usually a JSON object containing the context the human needs. Consider an agent handling automated customer support. It decides to draft a refund for five hundred dollars. Before processing the payment, the agent node calls interrupt. It hands over a payload specifying the proposed action is a refund and the amount is five hundred. The moment that function is called, the graph halts. The LangGraph runtime catches this event and bubbles the JSON payload up to your client application. The graph process shuts down, leaving the payload waiting for human review on a web UI. Now for the second phase: waking the graph back up. An external process, like your backend server receiving an API call from the web UI, is responsible for restarting the graph. The human manager clicks approve on their dashboard. Your backend takes that approval and starts the graph again using a special instruction called a Command. When sending this Command, you include a resume argument containing the human's response. In our scenario, this response is a simple boolean value of true. Here is the key insight. When the graph wakes up, it does not re-run the paused node from the very beginning. It resumes execution at the exact line of code where it stopped. The interrupt function that originally paused the graph finishes executing, and it returns whatever value you sent via the resume command. The human's boolean response is injected straight into the variable waiting for the interrupt result. The agent then reads that true value, passes its conditional check, and finalizes the five hundred dollar refund. This architecture creates a clean boundary. The graph logic does not need to handle webhooks, emails, or user interfaces. It just calls a function that throws a payload over the wall and waits for a return value. The external system handles all the user interaction and simply pushes the answer back in. By injecting the human response directly into the function return, you avoid polluting your main graph state with temporary interaction data. The power of the interrupt function lies in treating human feedback not as a complex architectural detour, but as a standard function call that can safely pause the universe until it gets an answer. That is all for this one. 
Thanks for listening, and keep building!
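A minimal sketch of the pause-and-resume cycle; recent LangGraph releases surface the pending payload under an __interrupt__ key, though the exact shape may vary by version:

```python
from typing_extensions import TypedDict

from langgraph.checkpoint.memory import InMemorySaver
from langgraph.graph import StateGraph, START, END
from langgraph.types import Command, interrupt

class RefundState(TypedDict):
    amount: int
    status: str

def refund_node(state: RefundState) -> dict:
    # Execution freezes on this call; the payload bubbles up to the client.
    approved = interrupt({"action": "refund", "amount": state["amount"]})
    # On resume, interrupt() returns the value sent via Command(resume=...).
    return {"status": "processed" if approved else "rejected"}

builder = StateGraph(RefundState)
builder.add_node("refund", refund_node)
builder.add_edge(START, "refund")
builder.add_edge("refund", END)

# Interrupts require a checkpointer: the paused state must be persisted.
graph = builder.compile(checkpointer=InMemorySaver())
config = {"configurable": {"thread_id": "ticket-42"}}

# First call pauses mid-node and surfaces the payload for human review.
paused = graph.invoke({"amount": 500, "status": "pending"}, config)
print(paused["__interrupt__"])

# The manager approves; True is injected as interrupt()'s return value.
print(graph.invoke(Command(resume=True), config))
```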
12

Debugging the Past: Time Travel and Forking

3m 25s

Explore LangGraph's time-travel capabilities. We show how to navigate state history, replay past checkpoints, and fork alternative execution paths.

Hi, this is Alex from DEV STORIES DOT EU. LangGraph, episode 12 of 18. Your agent goes off the rails, taking an action you did not want or generating a terrible response. Normally, you have to restart the entire process from scratch and hope it behaves better the second time. What if you could literally rewind the execution to the exact moment before the mistake, manually alter the state, and let it run down an alternative timeline? That is exactly what we are covering today with Debugging the Past: Time Travel and Forking. To manipulate the past, you first need to see it. You do this using a method called get state history, passing in your thread identifier. This method returns an iterator containing every state the graph passed through during that thread's execution. Every single one of these historical states possesses a unique identifier called a checkpoint ID. You can think of this ID as your exact coordinates in time. If you simply want to replay the graph from a specific point, you grab the target checkpoint ID from that history. You then call your graph's invoke method, passing a configuration object that includes both the thread ID and that specific checkpoint ID. The graph immediately resumes execution from that exact state. It does not re-run any of the prior nodes, saving compute and time. Replaying is useful, but the real power lies in changing the past to fork the execution. Let us look at a practical scenario. Suppose your agent was tasked with writing a joke, and it generated a terrible joke about a dog. You check the state history and find the checkpoint ID for the state just before the generation step occurred. Instead of just replaying from that point, you use the update state method. You provide the thread ID, the specific historical checkpoint ID, and the new state values you want to inject. In this case, you manually update the topic variable, changing it from a dog to chickens. Here is the key insight. Developers often think that updating a past state rolls back the execution, overwriting or deleting the original history. It does not. LangGraph operates on an append-only architecture. When you call update state on a historical checkpoint, the system safely creates a brand new checkpoint branching off that old one. Your original timeline, complete with the terrible dog joke, remains fully intact and accessible. You have not erased the past; you have forked a new reality. Once you apply that update, the graph is sitting at a newly created checkpoint with the altered state. To continue down this new timeline, you simply invoke the graph again with the thread ID, omitting any specific checkpoint ID. The graph defaults to the newest state on this newly forked branch and resumes execution. Your agent reads the updated state and generates a joke about chickens instead. If you are finding these technical breakdowns helpful and want to support the show, you can search for DevStoriesEU on Patreon. Time travel transforms debugging from guessing what went wrong into precisely manipulating the graph's history to explore alternative outcomes, without losing a single trace of the original run. That is all for this one. Thanks for listening, and keep building!
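A sketch of replaying and forking; the node name generate and the topic key are illustrative, and graph is assumed to be compiled with a checkpointer and already run on this thread:

```python
# Assumes `graph` is compiled with a checkpointer and has already run a
# joke-writing workflow on this thread; the node name "generate" and the
# "topic" key are illustrative.
config = {"configurable": {"thread_id": "jokes-1"}}

# History is returned newest-first; find the snapshot taken just before
# the generation node was about to run.
history = list(graph.get_state_history(config))
before_generation = next(s for s in history if "generate" in s.next)

# Replay: resume from that exact checkpoint without re-running prior nodes.
graph.invoke(None, before_generation.config)

# Fork: inject a new value at the historical checkpoint. This appends a new
# branch; the original dog-joke timeline stays intact in the history.
forked_config = graph.update_state(before_generation.config, {"topic": "chickens"})

# Continue down the forked branch from its newest state.
graph.invoke(None, forked_config)
```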
13

Long-Term Memory: Stores Across Threads

3m 19s

Move beyond isolated threads. We introduce the Store interface and explain how to grant your agents persistent, cross-session memory.

Hi, this is Alex from DEV STORIES DOT EU. LangGraph, episode 13 of 18. You build an agent, and a user tells it they always want their code in Python 3.11. Tomorrow, they start a new conversation, and the agent completely forgets, outputting Python 3.9 instead. Thread memory is isolated to a single conversation. When your agent needs to retain facts across entirely separate sessions, you need Long-Term Memory using Stores Across Threads. A common mistake is trying to solve this by stuffing long-term facts into the checkpointer state. Checkpointers are short-term memory. They are strictly per-thread state snapshots designed to pause, resume, or replay a single conversation. If a user states a preference in thread A, thread B has absolutely no way to see it. To share knowledge across multiple threads, LangGraph provides the Store interface. A Store is a key-value memory layer that sits outside individual thread states. You set it up by passing a store object, like a PostgresStore, as an argument when you compile your graph. Once compiled, that store is attached to the graph execution environment. Inside your graph, nodes access this memory layer through the Runtime object. When you define a node, you can access the runtime context, which exposes the store. You simply access runtime dot store to interact with your long-term memory. Here is the key insight. Data in a store is organized using namespaces. A namespace is a hierarchical list of strings that partitions your data, much like a folder path on your computer. For a multi-tenant application, you might define a namespace that starts with the string users, followed by a specific user ID, and ending with preferences. Think about that coding assistant scenario. A user starts a session on Monday. During the chat, they mention they prefer Python 3.11 and dark mode. A node in your graph recognizes this as a permanent preference. It calls the put method on runtime dot store. It passes the namespace for that specific user, a unique key for the item, and a dictionary containing the preferences. The data is now saved outside the thread. On Friday, the same user opens your application and starts a brand new thread. The checkpointer state for this new thread is completely empty. However, your graph includes a setup node that runs first. This node calls the search method on runtime dot store, providing the user namespace prefix. The store returns the saved preferences. The node then places those preferences into the current thread state. From that point on, the agent knows to use Python 3.11 and dark mode for this new conversation. The store interface provides three main operations. You use put to save or overwrite an item. You use get to retrieve a single item when you know its exact namespace and key. You use search to retrieve multiple items that share a namespace prefix. Searching is particularly useful when you have saved several distinct memory fragments for a user over time and need to pull them all into the current context. By separating short-term state from a cross-thread store, you decouple the lifespan of your agent knowledge from the lifespan of a single conversation. Appreciate you listening — catch you next time.
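A sketch of the preference flow, assuming the Runtime accessor this episode describes (newer LangGraph releases); on older versions you would inject a BaseStore parameter into the node instead:

```python
from typing_extensions import TypedDict

from langgraph.graph import StateGraph, START, END
from langgraph.runtime import Runtime
from langgraph.store.memory import InMemoryStore

class State(TypedDict):
    user_id: str
    preferences: dict

def save_preference(state: State, runtime: Runtime) -> dict:
    # Namespaces act like folder paths: ("users", "<id>", "preferences").
    namespace = ("users", state["user_id"], "preferences")
    runtime.store.put(namespace, "coding", {"python": "3.11", "theme": "dark"})
    return {}

def load_preferences(state: State, runtime: Runtime) -> dict:
    # search() pulls every item under the namespace prefix into context.
    namespace = ("users", state["user_id"], "preferences")
    items = runtime.store.search(namespace)
    merged = {k: v for item in items for k, v in item.value.items()}
    return {"preferences": merged}

builder = StateGraph(State)
builder.add_node("save", save_preference)
builder.add_node("load", load_preferences)
builder.add_edge(START, "save")
builder.add_edge("save", "load")
builder.add_edge("load", END)

# The store lives outside any thread; a brand new thread_id still sees it.
graph = builder.compile(store=InMemoryStore())
print(graph.invoke({"user_id": "u-7", "preferences": {}}))
```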
14

Streaming Execution and the v2 Format

3m 55s

Enhance UX with real-time feedback. We break down stream modes (values, updates, messages) and the unified v2 StreamPart format.

Hi, this is Alex from DEV STORIES DOT EU. LangGraph, episode 14 of 18. Users hate staring at a static loading spinner for thirty seconds while your system works behind the scenes. You want to show them the system's thought process in real time, but capturing those internal signals often requires wiring up complex custom callbacks. Streaming execution and the v2 format solve this by unifying every internal event into a single, predictable flow. First, let us clear up a common misunderstanding. Engineers often confuse streaming language model tokens with streaming application state. They are entirely different layers of information. A token stream is just text appearing word by word. A state stream tracks the broader progress of your workflow moving from one task to the next. LangGraph handles both simultaneously. You access this behavior by requesting a stream and passing the argument version v2 to your execution method. This standardizes the output. Instead of dealing with mixed data types, every single event that leaves your graph becomes a unified dictionary containing exactly three fields: type, ns, and data. The type field defines the category of the event. The ns field stands for namespace, indicating the exact path in your graph hierarchy where the event originated. This becomes critical when you have nested subgraphs and need to know exactly which sub-component fired the event. Finally, the data field holds the actual payload. You control exactly what gets put into this stream by selecting one or more stream modes. The values mode pushes the complete, updated graph state to you every time any node completes its work. This proves useful if your application requires the full picture at every step. The updates mode is much lighter. It streams only the specific data returned by a node, representing just the delta or change made to the overall state. The messages mode operates at a more granular level, streaming the individual chunks of a generated chat message as they are produced by an underlying language model. Picture a frontend interface. You want a glowing status indicator that highlights which step is currently active—perhaps fetching context, then evaluating documents, then drafting—while simultaneously displaying the draft text token by token. To build this, you start your graph execution with stream modes set to both updates and messages, ensuring you pass the v2 version flag. Your frontend begins receiving a continuous, unified stream of these dictionaries. When a dictionary arrives with the type set to updates, you read the namespace field. This tells you exactly which node just finished its work. You use that signal to shift your glowing status indicator to the next step on the user interface. Milliseconds later, the stream delivers a new dictionary with the type set to messages. You pull the raw text token from the data field and append it directly to the paragraph your user is reading. Both high-level state changes and low-level text generation arrive through the exact same pipe. Here is the key insight. By forcing tokens, state changes, and node progress into a single three-field dictionary structure, the v2 format completely removes the need to write separate handling logic or complex asynchronous callbacks for different types of real-time events. That is your lot for this one. Catch you next time!
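As a minimal consumer sketch, here is multi-mode streaming with graph.stream; the exact event shapes vary by version and transport, so treat the field handling as illustrative (graph is assumed to be a compiled chat-style graph):

```python
# Assumes `graph` is an already compiled chat-style graph.
config = {"configurable": {"thread_id": "ui-session-1"}}

for mode, chunk in graph.stream(
    {"messages": [{"role": "user", "content": "Draft a summary"}]},
    config,
    stream_mode=["updates", "messages"],  # both modes share one pipe
):
    if mode == "updates":
        # One dict per finished node: {node_name: returned_delta}
        for node_name in chunk:
            print(f"[status] {node_name} finished")
    elif mode == "messages":
        # Token chunks arrive as (message_chunk, metadata) pairs
        token, metadata = chunk
        print(token.content, end="", flush=True)
```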
15

Composing Complexity: Subgraphs

3m 08s

Scale your workflows by treating compiled graphs as nodes. We discuss composing subgraphs and managing shared versus private state schemas.

Hi, this is Alex from DEV STORIES DOT EU. LangGraph, episode 15 of 18. When your AI agent grows complex, you often end up with one massive, unreadable mega-graph where a single change breaks everything. You do not have to build that way—you can build specialized mini-graphs and snap them together instead. We are talking about Composing Complexity: Subgraphs. Subgraphs let you reuse logic and distribute development across different teams. Instead of putting every single step of your application into one file, you create smaller, self-contained graphs. Once a graph is compiled, it behaves exactly like a standard callable function. That means you can take an entire compiled graph and drop it directly into another graph as a single node. Think of a master routing system for an enterprise assistant. The main graph handles user input, checks security, and decides what to do next. When a user asks a deep technical question, the router needs to perform complex data gathering. Instead of coding that logic directly into the router, you delegate it to a dedicated Research subgraph. A completely different engineering team can build, test, and refine this Research graph in isolation. The parent graph does not care how the research is done. It just calls the node. There is a common tendency to overcomplicate how data passes between these graphs. If the parent graph and the subgraph use the exact same state schema—meaning they share the exact same state keys—you do not need any special adapters. You simply pass the compiled Research subgraph directly into the add node function of your master graph. The engine automatically feeds the parent state into the subgraph, runs the logic, and merges the results back into the parent state when it finishes. Now, what happens when the teams do not coordinate perfectly? Suppose your parent router uses a state key called user query, but the separate engineering team built the Research subgraph to expect a key called search term. You cannot drop the compiled subgraph into the parent graph directly. The keys will not match, and the execution will fail. Here is the key insight. You bridge this mismatch using a simple wrapper function. In your parent graph, you define a standard node function that accepts the parent state. Inside this function, you extract the user query value. You then call the compiled Research subgraph manually, passing it a payload where you map that user query to the search term key. The subgraph runs its internal logic and returns its final state. Your wrapper function takes that output, translates the results back into the specific keys the parent expects, and returns them. To the parent graph, this wrapper looks like any normal node. It has no idea a massive, complex subgraph just executed inside it. It simply passed data in and got updated state back out. This pattern gives you strict modularity without sacrificing control. Treating a compiled graph as just another callable function behind a simple wrapper is the single most powerful way to scale an AI architecture without collapsing under your own code. That is all for this one. Thanks for listening, and keep building!
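A sketch of the wrapper pattern with deliberately mismatched keys between parent and subgraph:

```python
from typing_extensions import TypedDict

from langgraph.graph import StateGraph, START, END

# The research team's subgraph, with its own schema.
class ResearchState(TypedDict):
    search_term: str
    findings: str

def do_research(state: ResearchState) -> dict:
    return {"findings": f"Findings for '{state['search_term']}'"}

research_builder = StateGraph(ResearchState)
research_builder.add_node("research", do_research)
research_builder.add_edge(START, "research")
research_builder.add_edge("research", END)
research_graph = research_builder.compile()  # now behaves like a callable

# The parent router, with mismatched keys.
class RouterState(TypedDict):
    user_query: str
    answer: str

def research_wrapper(state: RouterState) -> dict:
    # Translate parent keys to subgraph keys, invoke, translate back.
    result = research_graph.invoke({"search_term": state["user_query"]})
    return {"answer": result["findings"]}

router_builder = StateGraph(RouterState)
router_builder.add_node("research", research_wrapper)  # looks like any node
router_builder.add_edge(START, "research")
router_builder.add_edge("research", END)
router = router_builder.compile()

print(router.invoke({"user_query": "LangGraph subgraphs"}))
```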
16

Subgraph Persistence and Multi-Agent Patterns

3m 24s

Master memory scoping in multi-agent systems. We explain the difference between per-invocation, per-thread, and stateless subgraph persistence.

Hi, this is Alex from DEV STORIES DOT EU. LangGraph, episode 16 of 18. If an expert subagent is called twice in a single conversation, should it remember the first call, or start fresh with total amnesia? That choice changes everything about how multi-agent systems behave, and it is controlled entirely by Subgraph Persistence and Multi-Agent Patterns.

When you build a parent graph that routes tasks to subgraphs, state management gets complicated. Think of a primary customer service bot that handles general chat. When a user asks a complex question about an invoice, the primary bot routes the request to a dedicated Billing Expert subgraph.

By default, subgraphs are entirely stateless. When you compile that Billing Expert without specifying a checkpointer, it operates strictly on a per-invocation basis. The primary bot hands it the required inputs, the expert runs its internal steps, returns a result, and immediately discards its internal state. If the user asks a follow-up billing question five minutes later, the primary bot calls the expert again. The expert has no memory of the previous exchange. It starts completely fresh. For a simple data-extraction subgraph, that amnesia is perfectly fine. For an interactive, specialized agent, it is incredibly frustrating for the user.

To fix this, the expert needs its own memory across turns. A very common mistake here is passing a brand new checkpointer instance, like a MemorySaver object, directly into the subgraph's compile method. Do not do this unless you want the subgraph to share the exact same state across completely different users and sessions. If user A and user B both talk to the system at the same time, passing an explicit checkpointer instance to the subgraph means their data gets mashed together into one global state. This creates massive cross-talk between isolated parent threads.

Instead, you just pass the boolean value True to the checkpointer argument when you compile the subgraph. This is where it gets interesting. Setting checkpointer to True tells the subgraph to rely on the parent graph's checkpointer mechanism, but to maintain a completely isolated multi-turn history specifically for itself. Behind the scenes, the framework handles the namespacing. It automatically creates a unique thread ID for the subgraph that is permanently tied to the parent's thread ID.

Now look at the Billing Expert scenario again with this configuration. The user asks an invoice question. The primary bot routes it to the expert. The expert answers and goes dormant. Later in the same conversation, the user asks a follow-up. The primary bot routes back to the expert. Because it was compiled with checkpointer set to True, the expert wakes up, checks its dedicated sub-thread, and loads the context of the invoice from the earlier turn. It acts like a persistent participant in the conversation. And because that sub-thread is strictly scoped to the parent's thread, a different user talking to the system gets their own completely clean instance of the billing expert.

The way you configure a subgraph's checkpointer dictates its entire identity in your system: leaving it blank creates a disposable, stateless utility function, while setting it to True creates a continuous, context-aware collaborator. Thanks for spending a few minutes with me. Until next time, take it easy.
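A sketch of the three compilation choices in Python. The billing graph is a toy, and the behavior notes in the comments follow the episode's description of how checkpointer=True scopes the sub-thread; verify the exact semantics against the persistence docs for your LangGraph version.

```python
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.checkpoint.memory import MemorySaver

class BillingState(TypedDict):
    question: str
    answer: str

def answer_invoice_question(state: BillingState) -> dict:
    # Stand-in for the expert's real invoice-lookup logic.
    return {"answer": f"Invoice lookup for: {state['question']}"}

billing_builder = StateGraph(BillingState)
billing_builder.add_node("answer", answer_invoice_question)
billing_builder.add_edge(START, "answer")
billing_builder.add_edge("answer", END)

# 1. Stateless utility (the default): forgets everything the moment
#    each invocation returns.
disposable_expert = billing_builder.compile()

# 2. Anti-pattern per the episode: an explicit shared instance leaks
#    one global state across every user's parent thread.
# shared_expert = billing_builder.compile(checkpointer=MemorySaver())

# 3. Persistent collaborator: defer to the parent's checkpointer and
#    keep an isolated sub-thread namespaced under each parent thread.
billing_expert = billing_builder.compile(checkpointer=True)

parent_builder = StateGraph(BillingState)
parent_builder.add_node("billing_expert", billing_expert)
parent_builder.add_edge(START, "billing_expert")
parent_builder.add_edge("billing_expert", END)
parent = parent_builder.compile(checkpointer=MemorySaver())

# Each user gets their own parent thread, and therefore their own
# namespaced billing-expert history.
config = {"configurable": {"thread_id": "user-a"}}
parent.invoke({"question": "Why was invoice 1042 higher?"}, config)
```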
17

Application Structure and Deployment Readiness

3m 56s

Transition from prototypes to production. We explore langgraph.json, proper file structure, and dependency management for stateful deployments.

Hi, this is Alex from DEV STORIES DOT EU. LangGraph, episode 17 of 18. A Python script running successfully on your laptop is not a production application. If you try to run stateful agents by executing standalone scripts, you will inevitably hit a wall when it is time to scale. To solve this, we are looking at Application Structure and Deployment Readiness.

When you first build a LangGraph agent, you probably prototype it in a Jupyter notebook or a single Python file. You define the nodes, connect the edges, compile the graph, and call the invoke method right there in the same file to see if it works. That is fine for testing. However, a production server cannot read your mind. It needs a standardized way to serve that graph as an API, install its required packages, and inject environment variables.

To make your prototype deployment-ready, you must organize your code into a clean directory structure. Let us say you create a new folder named my-app. You move your Python code out of the notebook and into a clean file inside this folder. Next, you add a dependency file, typically requirements.txt. Finally, you create a configuration file named langgraph.json in the root of the my-app folder.

The langgraph.json file is the core blueprint for your application. When you use the LangGraph Command Line Interface, or deploy to a production environment, this configuration file tells the underlying system exactly how to build and run your project. It requires three main pieces of information: dependencies, environment variables, and graph entry points.

First, you declare your dependencies. This is a path string in the JSON file pointing to your requirements file. It ensures the deployment server installs the exact Python packages your agent relies on, preventing missing-module errors in production.

Next, you define the environment string. This points to your .env file. Stateful agents always need secrets, like database credentials or model API keys. Pointing to the environment file ensures the runtime securely loads these keys before attempting to start the graph.

This is the part that matters. The third requirement in the configuration file is the graphs mapping. This tells the server exactly where your compiled graph lives in the source code. It acts like a dictionary. You assign your graph an ID, which becomes its official name in the generated API. Then, you map that ID to a specific Python module and variable name. For example, you might map the ID customer-support-agent to the string agent.py:compiled_graph. The server looks at the agent.py file, finds the variable named compiled_graph, and loads it into memory.

This structure requires a deliberate shift in how you write your code. Beginners often run graphs via standalone Python scripts that execute actions as soon as they are run. But the LangGraph runtime relies on langgraph.json to expose the graph dynamically as a web service. It does not run your script from top to bottom. It only imports the compiled graph object you specified in the configuration file. Because of this, your Python file should only define the nodes, connect them, and assign the compiled graph to a variable. You must remove any leftover testing code at the bottom that manually invokes the graph. If you leave testing code in the file, it will execute during the import phase on the server, causing deployment failures or unwanted API calls just from starting up the service.

By explicitly declaring your dependencies, environment, and graph paths in one central JSON file, you separate the definition of your agent from its execution, turning a local script into a robust, deployable service. That is all for this one. Thanks for listening, and keep building!
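Here is a sketch of the my-app layout described above. The field spellings follow the episode's framing (a path to the requirements file, a path to the .env file, and the graphs mapping); the exact schema is an assumption here, so check the current CLI documentation for the authoritative field shapes.

```json
{
  "dependencies": ["./requirements.txt"],
  "env": "./.env",
  "graphs": {
    "customer-support-agent": "./agent.py:compiled_graph"
  }
}
```

And the matching agent.py, a toy example that defines everything but invokes nothing at import time:

```python
# agent.py -- define and export the graph; never invoke at import time.
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    question: str
    answer: str

def respond(state: State) -> dict:
    # Stand-in for the real agent logic.
    return {"answer": f"Handling: {state['question']}"}

builder = StateGraph(State)
builder.add_node("respond", respond)
builder.add_edge(START, "respond")
builder.add_edge("respond", END)

# The server imports exactly this variable, per the graphs mapping.
compiled_graph = builder.compile()

# Do NOT leave test calls like compiled_graph.invoke(...) down here:
# they would execute during the import phase on the server.
```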
18

Testing Graph Execution End-to-End

3m 37s

Learn robust testing strategies for graph workflows. We cover pytest integration, isolated node execution, and simulating partial state.

Hi, this is Alex from DEV STORIES DOT EU. LangGraph, episode 18 of 18. You have a complex multi-agent workflow, and you need to test one specific routing edge case in step four. You should not have to run the entire system from start to finish just to hit that condition. Testing Graph Execution End-to-End is how you target exactly the logic you need.

When developers try to isolate parts of a graph for testing, they often reach for complex mock objects. They try to mock the surrounding graph structure or stub out all the preceding nodes. In LangGraph, you do not need to do this. The architecture revolves entirely around state. Because nodes are just functions that read and write state, you can manually inject a specific state payload and test isolated node fragments natively. This is where state injection and breakpoints become incredibly useful in your test suite.

You only need two tools to jump directly into the middle of a graph. The first is the update_state method. The second is a configuration parameter called interrupt_after. Using these inside a standard testing framework like pytest lets you simulate exact conditions without executing the whole application.

Let us apply this to a concrete scenario. Suppose you have a graph where node three makes an external API call, and node four checks the result. You want to verify that if the API payload contains a specific failure code, node four correctly routes the execution flow to your error handler node. Instead of running nodes one and two to trigger this, you isolate the problem. You initialize the graph with a thread identifier. Then, you use update_state to insert a simulated, failed API payload directly into the thread state. You act as if node three is about to run with that specific data.

Next, you invoke the graph, passing a configuration that sets interrupt_after to node four. When you start the graph, execution begins immediately at node three using your injected failure state. Node three processes the bad payload and passes the resulting state to node four. Node four evaluates the logic and decides to route to the error handler. Because you set a breakpoint, the graph pauses execution the moment node four finishes.

Now your test can evaluate the outcome. You pull the current state from the graph. You write assertions to ensure node four updated the state variables correctly. More importantly, you can inspect the graph execution plan. By looking at the next pending node in the state metadata, you can confirm that the routing logic worked and that the error handler is queued up.

Here is the key insight. By manipulating state directly, you turn a highly interconnected, unpredictable chain of agents into a deterministic, step-by-step test case. You verify exactly how the graph transitions from one node to the next without waiting for language models or network calls in the preceding steps.

Since this is the final episode of the series, I encourage you to explore the official documentation and try building these workflows hands-on. If you have ideas for what we should cover next, visit devstories.eu and suggest a topic. That is all for this one. Thanks for listening, and keep building!
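To ground the walkthrough, here is a self-contained pytest sketch. The node names, the failure code, and the as_node resume trick are invented for illustration (the episode does not prescribe exact names), and interrupt_after is passed at invocation per the episode's description.

```python
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.checkpoint.memory import MemorySaver

class State(TypedDict):
    payload: dict
    status: str

def call_api(state: State) -> dict:        # "node three"
    return {"status": state["payload"].get("code", "ok")}

def check_result(state: State) -> dict:    # "node four"
    return {}

def route(state: State) -> str:
    # Hypothetical failure code for this example.
    return "error_handler" if state["status"] == "FAIL_42" else "finish"

def error_handler(state: State) -> dict:
    return {"status": "handled"}

def finish(state: State) -> dict:
    return {}

builder = StateGraph(State)
for name, fn in [("call_api", call_api), ("check_result", check_result),
                 ("error_handler", error_handler), ("finish", finish)]:
    builder.add_node(name, fn)
builder.add_edge(START, "call_api")
builder.add_edge("call_api", "check_result")
builder.add_conditional_edges("check_result", route)
builder.add_edge("error_handler", END)
builder.add_edge("finish", END)
graph = builder.compile(checkpointer=MemorySaver())

def test_failure_code_routes_to_error_handler():
    config = {"configurable": {"thread_id": "test-1"}}
    # Inject a simulated failed API payload; as_node=START makes the
    # graph resume as if call_api is about to run with this state.
    graph.update_state(config, {"payload": {"code": "FAIL_42"}},
                       as_node=START)
    # Resume with no new input; pause once check_result finishes.
    graph.invoke(None, config=config, interrupt_after=["check_result"])
    state = graph.get_state(config)
    assert state.values["status"] == "FAIL_42"
    # The routing decision is queued: error_handler is next.
    assert "error_handler" in state.next
```

The test never runs nodes one and two, never touches a language model, and still verifies the exact routing transition it cares about.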