Season 10 · 20 Episodes · 1h 10m · 2026

asyncio

v3.14 — 2026 Edition. A deep dive into Python's asyncio framework, covering the event loop, coroutines, structured concurrency, synchronization primitives, and advanced asynchronous patterns. For Python 3.14.

Python Core Async Programming
asyncio
1
The Event Loop & Mental Model
Establish your foundational mental model for asyncio. Learn how the event loop acts as an orchestra conductor, managing jobs cooperatively without relying on multithreading.
3m 39s
2
Coroutines vs Awaitables
Demystify async and await keywords. We explore the critical distinction between a coroutine function and a coroutine object, and what actually happens when you await an operation.
3m 29s
3
The asyncio.run() Entry Point
Discover how to bootstrap an asyncio application safely. We discuss asyncio.run, executor shutdowns, and the Runner context manager for complex loop lifecycles.
3m 38s
4
Scheduling with Tasks
Learn how to execute operations concurrently using asyncio.create_task(). We uncover the severe consequences of garbage collection on unreferenced tasks.
3m 22s
5
Structured Concurrency with Task Groups
Master structured concurrency. Understand how asyncio.TaskGroup safely manages multiple concurrent operations and ensures clean teardowns when exceptions occur.
3m 20s
6
Task Cancellation & Timeouts
Explore the mechanics of aborting operations. Learn why asyncio.CancelledError is raised, how to handle it in a finally block, and why you should never swallow it.
3m 43s
7
Yielding Control with Sleep
Understand the true purpose of asyncio.sleep(0). Discover how yielding control prevents CPU-heavy loops from starving the event loop and freezing the application.
3m 25s
8
Synchronization: Locks & Mutexes
Prevent race conditions in async code. We explore asyncio.Lock, discuss its non-thread-safe nature, and show why threading locks will freeze your event loop.
4m 05s
9
Coordinating State with Events
Learn to broadcast signals to multiple waiting tasks. We explain how asyncio.Event and asyncio.Condition elegantly replace inefficient polling loops.
3m 33s
10
Limiting Concurrency with Semaphores
Protect fragile resources and prevent rate-limiting bans. Discover how asyncio.Semaphore bounds concurrent execution without blocking your architecture.
3m 46s
11
Producer-Consumer Workflows
Decouple fast producers from slow consumers safely. Explore asyncio.Queue, task completion signaling, and the new shutdown mechanics for queues.
3m 25s
12
High-Level Networking with Streams
Dive into high-level IO Streams. We discuss StreamReader, StreamWriter, and why omitting await writer.drain() can silently destroy your server's memory.
3m 40s
13
Building Async Servers
Construct highly concurrent network servers. Learn how asyncio.start_server abstracts away client connections, spawning an isolated task for every peer.
3m 39s
14
Non-blocking Subprocesses
Run shell commands asynchronously. Discover why using the standard subprocess module halts the event loop, and how asyncio.create_subprocess_exec fixes it.
3m 14s
15
Futures: The Low-Level Bridge
Unpack the foundation of await statements. We examine asyncio.Future, its role as an eventual result, and how it bridges legacy callback code with modern syntax.
3m 39s
16
Transports and Protocols
Go under the hood to see how asyncio talks to the OS. Understand the callback-driven, 1:1 relationship between Transports (how bytes move) and Protocols (what bytes mean).
3m 42s
17
Threading in an Async World
Bridge synchronous and asynchronous worlds. Learn how to offload heavy blocking code safely using executors and thread-safe callbacks without locking up the loop.
3m 17s
18
Async Generators & Cleanup
Avoid resource leaks with async generators. We explore why 'async for' iteration can leave dangling connections when interrupted, and how aclosing() provides safety.
3m 10s
19
Mastering Debug Mode
Catch concurrency bugs instantly. Learn how to use PYTHONASYNCIODEBUG to profile slow callbacks, unearth unawaited coroutines, and pinpoint never-retrieved exceptions.
3m 22s
20
Extending & Custom Loops
The finale. We explore advanced integration and what it takes to write a custom event loop or subclass BaseEventLoop for specialized, high-performance environments.
3m 30s

Episodes

1

The Event Loop & Mental Model

3m 39s

Hi, this is Alex from DEV STORIES DOT EU. asyncio, episode 1 of 20. Many developers hear the word asynchronous and assume their code will execute in parallel across multiple CPU cores. But then they inspect their application and find it running entirely on a single thread. The secret to this efficiency without true parallelism is the event loop, and understanding its mental model is the foundation of asyncio. The event loop is the central execution manager of any asyncio application. It is exactly what the name implies: a continuous loop that checks for operations that are ready to run, executes them, and then looks for the next operation. It is vital to separate this concept from multithreading. In a multithreaded program, the operating system controls execution. The OS will forcibly pause one thread and switch to another to share CPU time. The threads themselves have no control over when they are paused. This requires significant system overhead to manage the context switches and protect shared memory. The event loop operates on a completely different model called cooperative multitasking. Everything runs sequentially on one single thread. The loop never interrupts an operation. Instead, it relies on the code to explicitly yield control back to the loop when it has to wait for something. Think of the event loop as a single expert chef in a busy restaurant kitchen. The chef receives multiple orders at once. If they place a large pot of stock on the stove to simmer, they do not stand in front of the stove staring at the liquid until it finishes. That approach would block the entire kitchen and nothing else would get cooked. Instead, the chef starts the stove, leaves the pot to simmer, and immediately pivots to chopping vegetables for another dish. The chef represents the single thread of execution. 
The event loop is the chef continually scanning the kitchen, knowing exactly which pots are simmering, which pans need flipping, and moving instantly to the next available job. In your software, a simmering pot is usually an input or output operation. When your code sends a request to a database, the database takes time to process the query and send the data back. A traditional synchronous program would freeze and wait for the response. With an event loop, the operation registers its request and then tells the loop it is waiting. The event loop immediately switches to another piece of code that actually has data ready to process. When the database finally responds, the original operation signals the event loop that it is ready to resume. The event loop places it back into the queue and will resume executing it as soon as the current job yields. Here is the key insight. Because the event loop cannot forcefully stop an operation, the entire system relies entirely on cooperation. If one job decides to perform a massive mathematical calculation without ever yielding control, the event loop stops. The single thread is occupied. In our kitchen, this is the chef deciding to manually grind a massive bag of flour while ignoring every other dish. The simmering pots boil over, new orders pile up, and the kitchen grinds to a halt. The loop is only as efficient as the code running inside it. True asynchronous efficiency does not come from performing multiple calculations at the exact same physical moment, but from ensuring your single thread never wastes a single millisecond idling while waiting for the outside world. If you want to help keep the show going, you can support us by searching for DevStoriesEU on Patreon. Thanks for listening, happy coding everyone!
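The chef analogy above can be sketched in a few lines. This is a minimal illustration, not code from the episode: the coroutine names and timings are invented, and `asyncio.sleep` stands in for the "simmering pot" of a real I/O wait.

```python
import asyncio

order = []  # records how the single "chef" interleaves the two dishes

async def simmer_stock():
    order.append("stock: start simmering")
    await asyncio.sleep(0.02)   # waiting on the stove: yield to the loop
    order.append("stock: done")

async def chop_vegetables():
    order.append("veg: start chopping")
    await asyncio.sleep(0.01)   # a shorter wait, so this dish finishes first
    order.append("veg: done")

async def kitchen():
    # One thread, two jobs, run cooperatively by the event loop
    await asyncio.gather(simmer_stock(), chop_vegetables())

asyncio.run(kitchen())
```

Both dishes "start" before either "finishes": the loop switches away at every `await` instead of standing in front of one pot.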
2

Coroutines vs Awaitables

3m 29s

Hi, this is Alex from DEV STORIES DOT EU. asyncio, episode 2 of 20. You write a function, you call the function, and absolutely nothing happens. Your code runs without errors, but the database is empty and the network request never fires. The problem is a fundamental misunderstanding of what calling an asynchronous function actually does. Today, we are looking at Coroutines vs Awaitables. In regular Python, when you call a standard function, it runs immediately. Asynchronous functions break this rule entirely. There is a strict difference between a coroutine function and a coroutine object. When you write async def, you are creating a coroutine function. When you call that function in your code, it does not execute the body of the function. Instead, it returns a coroutine object. Think of it like ordering a coffee. The async def function is the menu item. Calling that function is like placing your order at the register. You get a receipt. That receipt is your coroutine object. You have stated your intent, but you do not have your drink yet, and no one has even started making it. To actually trigger the brewing process and get your coffee, you have to wait at the counter. In Python, you do this using the await keyword. When you type await followed by that coroutine object, two distinct things happen. First, the coroutine finally begins executing its internal code. Second, the function where you placed the await pauses entirely. It yields control back to Python, stating it cannot proceed until this specific coroutine finishes. This pausing behavior is the core mechanical difference of asynchronous programming. While your function is paused waiting for the coffee, Python is free to go run other code elsewhere. This brings us to the broader term awaitable. An awaitable is simply any object that Python allows you to use with the await keyword. All coroutines are awaitables. 
When you see await, read it as a direct command: run this awaitable object to completion, and suspend my current progress until it yields a final result. If you write an async function called fetch data, simply calling fetch data returns the coroutine object. If you assign that call to a variable named pending request, that variable just holds the unexecuted coroutine. The network remains completely quiet. Later in your script, when you write await pending request, Python finally executes the network call. The execution of your current block of code stops at that exact line. Once the server replies, the await expression resolves into the returned data, and your surrounding code continues to the next line. Here is the key insight. You can only use the await keyword inside an async def function. Because awaiting an object requires pausing the current execution, the function containing the await must itself be capable of being paused. That is why asynchronous behavior propagates outward. To await a coroutine, you must be inside a coroutine. You are constructing a chain of suspended operations, all waiting for the lowest level task to resolve. Remember, calling an async function without awaiting it is just generating a receipt for work you never actually asked anyone to perform. The code will never run until you await it. Thanks for tuning in. Until next time!
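The receipt analogy can be made concrete. A hedged sketch using the episode's `fetch_data` / `pending_request` names (the returned dictionary is invented for illustration):

```python
import asyncio
import inspect

async def fetch_data():
    # This body does NOT run when fetch_data() is called,
    # only when the resulting coroutine object is awaited.
    return {"status": "ok"}

pending_request = fetch_data()                    # just the "receipt"
is_coro = inspect.iscoroutine(pending_request)    # True: nothing executed yet

async def main():
    # Only here does the body of fetch_data actually run
    return await pending_request

result = asyncio.run(main())
```

Note that `await` can only appear inside `main` because `main` is itself declared with `async def`, which is the "propagates outward" rule from the episode.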
3

The asyncio.run() Entry Point

3m 38s

Hi, this is Alex from DEV STORIES DOT EU. asyncio, episode 3 of 20. Misusing the starting point of your asynchronous application can leave behind dangling thread executors and unclosed async generators. To avoid hidden resource leaks, you need to use the right tool to start and stop your application, which brings us to the asyncio.run entry point. Many developers mistakenly try to use this tool to execute individual coroutines randomly from synchronous code. That is not its purpose. You cannot call the run function when another asyncio event loop is already running in the exact same thread. Doing so immediately triggers a runtime error. It is designed specifically to be the single, high-level entry point for a program. Think of initializing a web server main loop that coordinates all incoming traffic requests. You have a central asynchronous function that binds to a network port, sets up the request handlers, and keeps the server alive. You pass that single main function into the run function. When you do this, asyncio manages the entire lifecycle of the event loop automatically. First, it creates a new event loop and sets it as the current active loop for the thread. Next, it executes your main web server coroutine until it completes. Here is the key insight. The most valuable work this function does happens after your main code finishes executing. It performs a rigorous cleanup. Before returning control to the synchronous part of your program, it cancels any leftover pending tasks. It then safely shuts down background threads in the default executor. Finally, it finalizes all asynchronous generators before closing the event loop entirely. You can also pass a debug flag to this function, which forces the underlying loop to run in debug mode to help trace execution issues. Because this standard function tears everything down at the end, it creates a rigid boundary. 
If you have a scenario where you need to run several distinct asynchronous blocks from synchronous code, but you want them to share the same event loop, calling the standard run function back to back will fail because a new loop is created and destroyed every single time. For that situation, you use the asyncio Runner context manager. You open a context block using the standard Python with statement. Entering this block initializes the event loop. Once inside, you can call the runner object's own run method. You pass it a coroutine, it runs it to completion, and returns the result. You can call this internal run method multiple times within the same context block. The event loop stays alive, maintaining state, cached data, and connections between those separate calls. You can configure the context manager when you create it by passing a debug flag, or even a custom loop factory if your environment requires a specialized event loop implementation. When execution finally exits the context manager block, the runner executes the exact same teardown sequence as the standalone function. It cleans up the executors, finalizes the generators, and safely closes the loop. Your application stability depends entirely on how it starts and finishes. Whether you use a single function call or the context manager, routing your execution through these official entry points is the only way to guarantee your asynchronous resources are reliably torn down when the program exits. That is all for this one. Thanks for listening, and keep building!
4

Scheduling with Tasks

3m 22s

Hi, this is Alex from DEV STORIES DOT EU. asyncio, episode 4 of 20. You kick off a background process to ship system metrics. You check your dashboard later, and half the data is missing. No errors were thrown. Your code just silently stopped mid-execution. This happens because you treated your background job as fire-and-forget. Today, we are covering Scheduling with Tasks, and why you must always hold on to the things you create. When you have a coroutine that you want to run concurrently with other code, you use the asyncio create task function. You pass your coroutine into this function, and asyncio wraps it inside a Task object. This tells the event loop to schedule the task for execution. The function immediately returns the new Task object back to you, allowing your main program to continue running while the task operates in the background. Many developers call create task and ignore the return value. This is a massive trap. Here is the key insight. The asyncio event loop only maintains weak references to the tasks it is running. The loop itself does not protect your task from Python's garbage collector. If you do not assign the returned Task object to a variable or store it in a data structure, the garbage collector will eventually notice that no hard references exist. When that happens, Python destroys the task object. It does not care if the coroutine is right in the middle of executing a database query or waiting for a network response. The task just vanishes. Think of an async function called ship metrics. It formats a data payload and sends an HTTP request to an external server. You call create task and pass in ship metrics, but you do not assign the result to anything. The task starts running. It formats the payload. Then it hits the network call and pauses to wait for a connection. While it is paused, the garbage collector runs. The strong reference count is zero. The task is destroyed. 
The server never receives the payload, and your application never logs an error because the execution simply ceased to exist. To prevent this, you must always keep a strong reference to the tasks you schedule. If you are creating a single task, assign it to a variable. If you are scheduling multiple background tasks inside a loop, add them to a standard Python set or a list. As long as that set exists in memory, the strong references exist, and the garbage collector will leave your running tasks alone. You can then use a callback to remove the task from your set once it is done. The create task function also accepts a few optional arguments. You can pass a string to the name parameter, which assigns a specific identifier to the task. This is highly recommended for debugging, as it makes it much easier to track down which specific operation failed if an exception is raised later. You can also pass a context argument to establish a specific context variable state for the task. Treating background operations as fire-and-forget will eventually burn you with silent failures. If you ask the event loop to run something, you must keep a hard reference to the resulting object until the work is completely finished. Thanks for listening, happy coding everyone!
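The strong-reference pattern described above can be sketched as follows. The `ship_metrics` name follows the episode's example; the payloads and sleep are invented stand-ins for a real HTTP call:

```python
import asyncio

shipped = []

async def ship_metrics(payload):
    await asyncio.sleep(0.01)   # simulated network call: an await point
    shipped.append(payload)

async def main():
    background = set()          # strong references keep the GC away
    for i in range(3):
        task = asyncio.create_task(ship_metrics(i), name=f"ship-{i}")
        background.add(task)
        # Drop our reference only once the task is completely finished
        task.add_done_callback(background.discard)
    await asyncio.gather(*background)

asyncio.run(main())
```

Without the `background` set, any of these tasks could vanish mid-flight the moment the garbage collector runs.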
5

Structured Concurrency with Task Groups

3m 20s

Hi, this is Alex from DEV STORIES DOT EU. asyncio, episode 5 of 20. Before Python 3.11, firing off multiple concurrent tasks was easy, but handling them safely when one crashed was notoriously difficult. You would often end up with orphaned background tasks quietly wasting resources. The solution to this mess is structured concurrency using TaskGroup. A TaskGroup is an asynchronous context manager. People sometimes confuse it with a standard list of tasks, but it is much stricter. It provides strong safety guarantees about how tasks begin and end. It enforces a rule that a parent routine cannot finish until all its child operations are either complete or cleanly cancelled. You use it by opening an async with block. Inside that block, you call the create task method directly on the group object to start your concurrent operations. You do not dump these tasks into a standard array and await them manually. Instead, when the code reaches the end of the async with block, the TaskGroup implicitly pauses. It waits right there until every spawned task finishes. The block simply will not exit early. Here is the key insight. The real power of a TaskGroup lies in how it handles failure. With legacy tools like gather, if you started several tasks and one threw an error, the others kept running in the background. You had to write complex error handling logic to track down the survivors and kill them. A TaskGroup handles this automatically. Take the scenario of a web scraper fetching three distinct API endpoints simultaneously. You need user data, recent posts, and system alerts. You open a TaskGroup and spawn three tasks. They all start running concurrently over the network. Halfway through the operation, the recent posts endpoint times out and raises a connection error. The TaskGroup immediately detects this failure. It intercepts the error and automatically sends a cancellation signal to the user data task and the system alerts task. 
It cleans up those pending operations so they do not continue eating network bandwidth or memory. The remaining tasks raise a cancelled error internally, acknowledging the shutdown. Once all the remaining tasks are safely stopped, the TaskGroup bundles the original connection error into a new structure called an ExceptionGroup, and raises it out of the context block. This behavior makes your asynchronous code entirely predictable. If execution moves past the block successfully, you know for a fact that every single task succeeded. If the block raises an ExceptionGroup, you know that the failure was caught and everything else was properly shut down. You never leave rogue tasks running in the background. If you need the results of the successful tasks, you can retrieve them directly from the task objects you created, provided they completed before the failure occurred. By binding tasks to a strict lifecycle block, TaskGroups guarantee that concurrent operations enter and exit your application as a single, coordinated unit. That is it for today. Thanks for listening — go build something cool.
6

Task Cancellation & Timeouts

3m 43s

Hi, this is Alex from DEV STORIES DOT EU. asyncio, episode 6 of 20. You wrote a robust error handler, catching all generic exceptions in your async worker. But now, during shutdown, your event loop gets clogged with zombie tasks that refuse to die. Your safety net is actually trapping them alive. This is exactly what we cover today: Task Cancellation and Timeouts. When you need to stop a running task, you call its cancel method. This does not instantly terminate the task like killing a system process. Instead, asyncio requests a stop by injecting an error, specifically an asyncio CancelledError, into the task. This error is raised exactly at the task's current or next await point. The coroutine then unwinds its stack just like it would for any standard Python error. This mechanism is the foundation for timeouts as well. When you wrap a task in a timeout function and the timer expires, the event loop does not magically halt the task. It simply calls cancel on that task. The task receives the CancelledError at its next await, unwinds its state, and eventually tells the timeout wrapper that it has stopped. Only then does the timeout wrapper raise a TimeoutError back to you. Here is the key insight. Since Python 3.8, CancelledError inherits directly from BaseException, not the standard Exception class. This design choice prevents a specific, catastrophic mistake. Developers routinely wrap network or file operations in try and except blocks that catch generic Exception classes to prevent a crash. If CancelledError were a standard Exception, those blocks would catch the cancellation signal. The task would perhaps log a warning, swallow the signal, and keep executing as a zombie. By moving CancelledError up the hierarchy to BaseException, Python guarantees your everyday error handlers will not accidentally intercept a cancellation request. So how do you safely manage state when a task is cancelled? You rely on the try and finally structure. 
Consider a web server processing an incoming HTTP request. The user asks for a massive report but then closes their browser window. The server detects the disconnect and cancels the request task. Inside your code, you are currently awaiting a long-running database query. That await suddenly raises a CancelledError. Because you placed your database interaction inside a try block, execution instantly jumps to your finally block. You use that finally block to cleanly roll back the pending transaction and return the database connection to the pool. Once the finally block finishes, the CancelledError continues to bubble up, successfully terminating the task. Sometimes a finally block is not enough. If you absolutely must perform asynchronous cleanup, like sending a network request to a remote microservice to announce the abort, you can explicitly catch CancelledError. But if you do this, you must explicitly re-raise that exact error at the end of your except block. Failing to re-raise it breaks the internal mechanics of asyncio. The task will look like it finished successfully instead of being cancelled, which corrupts the state of your application and breaks structured concurrency. The rule to remember is that cancellation is a cooperative request, not a forceful kill command, and it relies entirely on exceptions bubbling up untouched. If you would like to support the show, you can search for DevStoriesEU on Patreon. That is all for this one. Thanks for listening, and keep building!
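The rollback-then-re-raise pattern from the web-server scenario can be sketched as follows. The event strings and the `build_report` name are invented for illustration; `asyncio.sleep` stands in for the long database query:

```python
import asyncio

events = []

async def build_report():
    try:
        events.append("query started")
        await asyncio.sleep(10)              # long-running "database query"
    except asyncio.CancelledError:
        events.append("cleanup: rollback")   # async cleanup could happen here
        raise                                # ALWAYS re-raise the cancellation
    finally:
        events.append("connection returned to pool")

async def main():
    task = asyncio.create_task(build_report())
    await asyncio.sleep(0.01)                # let the query start
    task.cancel()                            # the client closed their browser
    try:
        await task
    except asyncio.CancelledError:
        events.append("task cancelled")

asyncio.run(main())
```

Because the `CancelledError` bubbles up untouched, `await task` in the caller correctly reports the task as cancelled rather than completed.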
7

Yielding Control with Sleep

3m 25s

Hi, this is Alex from DEV STORIES DOT EU. asyncio, episode 7 of 20. Sometimes the secret to keeping your network server responsive is telling your heaviest tasks to go to sleep for exactly zero seconds. If a function never pauses, your entire application stops listening to the outside world. To fix this, you use yielding control with sleep. In the asyncio framework, the event loop runs exactly one task at a time. It relies entirely on cooperative multitasking. A task runs continuously until it hits an await keyword, which acts as a checkpoint to hand execution control back to the loop. If you write an async function containing a purely CPU-bound operation, you create a bottleneck. Think of parsing a massive JSON payload or transforming thousands of strings. There are no natural await points in a standard data processing loop. Because the task never yields, the event loop remains blocked. Any incoming network requests, database replies, or health checks just sit in a queue, starving while they wait for your loop to finish. The native way to solve this is to manually hand control back to the event loop. You do this using a specific idiom: awaiting asyncio dot sleep with an argument of zero. At first glance, sleeping for zero seconds looks like a useless operation. Why ask the system to wait for no time at all? Here is the key insight. A zero-second sleep is not about the passage of time. It is an explicit signal to the event loop. When you await a sleep of zero, the current coroutine is immediately suspended. The event loop takes over, places your suspended task at the back of the runnable queue, and checks if any other scheduled tasks are ready to execute. If a background network handler is waiting to acknowledge an incoming connection, it gets its turn. Once the other tasks hit their own await points or finish, your original task makes its way back to the front of the queue and resumes right where it left off. Let us apply this to a concrete scenario. 
You are writing an async function to process millions of records from a JSON file. If you run a while loop straight through, your server appears dead. Instead, you introduce a counter variable. Inside the loop, you process a record and increment the counter. Then you add a simple condition. If the counter indicates one hundred iterations have passed, you await asyncio dot sleep zero. This breaks the massive computation into manageable chunks. The loop processes one hundred records, steps aside to let the server answer pings or accept new data, and then resumes parsing the next hundred. The number of iterations between yields is a parameter you must tune. Yielding on every single iteration adds too much overhead because suspending and resuming a coroutine has a small computational cost. Yielding every ten thousand iterations might still block the event loop for too long. One hundred is a reasonable starting point to keep the loop breathing. Forcing a zero-second sleep is the simplest way to keep your application cooperative, ensuring that a single heavy loop never starves the rest of your system. Thanks for listening, happy coding everyone!
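The chunked-yielding idiom can be sketched as below. The record count, the chunk size of one hundred, and the heartbeat coroutine are illustrative stand-ins for real parsing work and real server pings:

```python
import asyncio

heartbeats = []

async def crunch_records(n, chunk=100):
    processed = 0
    for i in range(n):
        processed += 1                 # CPU-bound "parsing" work
        if i % chunk == chunk - 1:
            await asyncio.sleep(0)     # yield: let other tasks breathe
    return processed

async def heartbeat():
    for _ in range(5):
        heartbeats.append("ping")      # stands in for answering a request
        await asyncio.sleep(0)

async def main():
    done, _ = await asyncio.gather(crunch_records(10_000), heartbeat())
    return done

total = asyncio.run(main())
```

Without the `sleep(0)` checkpoint, `heartbeat` would not get a single turn until all ten thousand records were processed.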
8

Synchronization: Locks & Mutexes

4m 05s

Hi, this is Alex from DEV STORIES DOT EU. asyncio, episode 8 of 20. You drop a standard threading lock into your async application to protect a shared resource, and suddenly your entire event loop freezes completely. The lock did its job, but it halted everything else in the process. To solve this without blocking the loop, we use Synchronization: Locks and Mutexes. An asyncio Lock, often called a mutex, guarantees exclusive access to a shared resource among asynchronous tasks. First, we need to clear up a common confusion. You cannot use a standard thread lock from the Python threading module inside an async application. A threading lock operates at the operating system level. If it cannot acquire the lock, it pauses the entire thread. Because asyncio runs multiple tasks cooperatively on a single thread, blocking that thread means the event loop stops. No network requests fire, no timers tick. Everything freezes. An asyncio lock solves this by being task-safe, not thread-safe. When an asyncio task tries to acquire a locked mutex, it does not block the thread. Instead, it suspends itself and yields control back to the event loop. This allows other unrelated tasks to continue their work while the first task waits in line. Let us anchor this to a concrete scenario. You have an application with dozens of async tasks making external API calls. Your OAuth token expires. Two different tasks notice the expired token at the exact same millisecond. Without synchronization, both tasks will independently fire off a request to the authentication server to refresh the token. This redundant work can trigger rate limits or immediately invalidate the first token due to strict rotation policies. To prevent this race condition, you create a single asyncio lock when you initialize your application. This lock object is passed to or shared among all your API tasks. Now, look at the flow. Task A and Task B both detect the expired token. 
Task A reaches the synchronization block first and awaits the lock. It successfully acquires it. Task B arrives a fraction of a second later and awaits the same lock. Because Task A holds it, Task B goes to sleep, letting the event loop handle other chores. When multiple tasks wait for the same lock, asyncio lines them up. Once the lock is released, the event loop wakes up the first task in line. Task A securely requests the new token, updates the shared token variable, and releases the lock. At that moment, the event loop wakes up Task B. Task B finally acquires the lock. However, before making a network call, Task B checks the token again. It sees the token is already valid, skips the refresh step, releases the lock, and continues with its primary API request. The safest way to implement this logic is using an asynchronous context manager. In your code, you write an async with statement followed by the lock object. When the execution enters this block, it waits for exclusive access. When the execution exits the block, either normally or because an error crashed the task, it automatically releases the lock. You do not need to manually call acquire or release methods, which eliminates the risk of accidentally leaving a lock engaged forever. Here is the key insight. An asyncio lock does not protect your state from other operating system threads; it protects your state from your own concurrent tasks stepping on each other while awaiting other operations. Thanks for hanging out. Hope you picked up something new.
9

Coordinating State with Events

3m 33s

Learn to broadcast signals to multiple waiting tasks. We explain how asyncio.Event and asyncio.Condition elegantly replace inefficient polling loops.

Hi, this is Alex from DEV STORIES DOT EU. asyncio, episode 9 of 20. You have fifty tasks waiting for a database to connect. You definitely do not want them polling in a loop, wasting CPU cycles while they check if the connection is ready. You need a single broadcast signal that tells all of them to start querying at the exact same moment. This is exactly what coordinating state with Events and Conditions handles. An asyncio Event manages a simple internal boolean flag. It starts as false. Before we look at the flow, let us clear up a common confusion between Events and Locks. A Lock grants exclusive access to exactly one task at a time, keeping others out. An Event does the opposite. It notifies multiple waiting tasks simultaneously, letting them all proceed at once. Think about that database connection scenario. Your background task is working to establish the connection. Meanwhile, your fifty worker tasks reach a point where they need the database. Each worker calls the wait method on your shared Event object. Because the internal flag is false, all fifty tasks suspend. They sit idle. Eventually, the background task succeeds and calls the set method on the Event. The flag becomes true. Instantly, all fifty suspended worker tasks wake up and resume execution. If you need to shut down the connection later, you can call the clear method on the Event. The flag goes back to false, and any future calls to wait will block again. You can also check the current status of the flag at any time by calling the is set method, which returns true or false without blocking the task. That covers simple broadcast signals. Sometimes a single boolean flag is not enough. You might have multiple tasks that need to wait for a shared resource to reach a specific complex state, and they need exclusive access to safely check or modify that state. This is where asyncio Condition comes in. A Condition is built around an underlying Lock. 
To do anything with a Condition, a task must first acquire it. Once acquired, the task checks the shared state. If the state is not what the task needs, the task calls the wait method on the Condition. Here is the key insight. Calling wait on a Condition does two things at once: it releases the underlying lock, allowing other tasks to access the state, and it suspends the current task. While that task is suspended, another task can acquire the Condition, change the shared state, and then call the notify method. The notify method takes an argument specifying exactly how many waiting tasks to wake up, defaulting to one. You can also call notify all to wake up everyone at once. When a suspended task wakes up, it does not just immediately run. It must wait to re-acquire the underlying lock before the wait method returns. Because another task might grab the lock and change the state before the awakened task gets its turn, the wait call is almost always placed inside a while loop that continuously checks the desired state. Once it has the lock back and the state is correct, it can safely proceed and eventually release the Condition. When deciding between the two, remember that an Event is a simple broadcast telling tasks that a one-off action happened, while a Condition allows tasks to safely wait for a complex state change without constantly polling a locked resource. Thanks for spending a few minutes with me. Until next time, take it easy.
10

Limiting Concurrency with Semaphores

3m 46s

Protect fragile resources and prevent rate-limiting bans. Discover how asyncio.Semaphore bounds concurrent execution without blocking your architecture.

Hi, this is Alex from DEV STORIES DOT EU. asyncio, episode 10 of 20. Firing off ten thousand asynchronous requests to a fragile third-party API is a highly efficient way to get your IP address permanently banned. Your code runs flawlessly, but the server on the other end collapses under the sudden spike in traffic. To protect external services and your own access, you have to throttle your application. That shield is Limiting Concurrency with Semaphores. It helps to clarify a common misconception right away. A semaphore is not a rate limiter. It does not cap how many requests your program makes per second. Instead, it limits concurrent operations. It strictly controls how many tasks can execute a specific block of network or file operations at the exact same moment. If a task finishes its API call in ten milliseconds, that slot opens up immediately for the next task in line. You could still process hundreds of operations per second, provided no more than the allowed limit are in flight simultaneously. An asyncio Semaphore manages a simple internal counter. When you create the semaphore object, you provide an initial value. Let us take the scenario of limiting outgoing HTTP requests to a delicate external API to exactly ten concurrent connections. You initialize your semaphore with a value of ten. Before any asynchronous task makes a network request, it must acquire the semaphore. This action decreases the internal counter by one. When the network request finishes, the task releases the semaphore, increasing the counter back up by one. Here is the key insight. If ten tasks have already acquired the semaphore, the counter sits at zero. When the eleventh task tries to acquire it, that task is suspended. The acquire method blocks progress until one of the first ten tasks finishes and releases its hold. This simple numeric lock ensures you never exceed your hard limit of ten active connections. 
In actual usage, you should rarely call the acquire and release methods manually. Instead, you use the semaphore as an asynchronous context manager. By wrapping your HTTP request in an asynchronous with statement, Python guarantees the semaphore is released when the code block exits. This release happens even if the API times out, drops the connection, or throws an unhandled exception. If you attempt manual releases and an error skips your release call, that concurrency slot is lost forever. If you lose all ten slots to transient network errors, your entire program deadlocks quietly. There is a subtle danger with the standard semaphore. If a logic error in your code causes a task to release the semaphore more times than it acquired it, the internal counter will increase beyond your original limit of ten. Suddenly, your concurrency shield is broken, and you are unwittingly sending twelve or fifteen simultaneous requests. To prevent this, you should use an asyncio Bounded Semaphore. A bounded semaphore behaves exactly like a standard semaphore, but it tracks the initial value you gave it. If a rogue task tries to release the semaphore past that starting limit, the bounded semaphore immediately raises a value error. It crashes early and loudly instead of silently overwhelming the external API. Always default to a bounded semaphore unless you have a highly specific architectural reason to inflate your concurrency limits dynamically. Bounded semaphores catch logical release errors the moment they happen, keeping your API connection limits strict and your systems running predictably. That is all for this one. Thanks for listening, and keep building!
11

Producer-Consumer Workflows

3m 25s

Decouple fast producers from slow consumers safely. Explore asyncio.Queue, task completion signaling, and the new shutdown mechanics for queues.

Hi, this is Alex from DEV STORIES DOT EU. asyncio, episode 11 of 20. You have an async web server handling thousands of requests per second, and for every request, you need to write a log entry to disk. If your server waits for that disk write to finish before responding, your performance collapses. The most reliable way to decouple fast producers from slow consumers in async Python is built right into the standard library. Today we are looking at Producer-Consumer Workflows using asyncio queues. Some developers coming from multi-threaded programming assume they need to wrap this queue in locks to prevent race conditions. You do not. The asyncio queue is designed specifically for concurrent tasks running on a single event loop. It is inherently safe for those tasks. Leave the thread-safe queues from the standard queue module for threading; use the asyncio version for async. Think of the queue as a pipe. On one end, you have producers pushing items in. On the other end, you have consumers pulling items out. Let us use that logging scenario. Your web request handler is the producer. It receives an incoming request, formats a log event, and calls the asynchronous put method on the queue. If you set a maximum size when creating the queue, you get automatic backpressure. When the queue is full, awaiting the put method pauses the producer until space frees up. This prevents an overwhelming spike in traffic from exhausting your system memory. On the other side of the pipe, you have a separate background task acting as the consumer. This task runs in a continuous loop. It calls the asynchronous get method on the queue. If the queue is empty, the consumer safely goes to sleep. The event loop wakes it up the exact moment a producer drops a new log event into the pipe. The consumer takes the event, writes it to disk, and then signals that the specific job is complete by calling a method named task done. Managing this flow during application teardown is critical. 
If you need to shut down your web server gracefully, you want to ensure all queued log events actually get written to disk. The queue has a method called join. When you await join, your program blocks until the number of task done calls exactly matches the number of items originally put into the queue. This guarantees every single piece of work was fully processed. Here is the key insight. Python 3.13 introduced a new queue method called shutdown. Previously, stopping a producer-consumer loop cleanly required passing special sentinel values, like injecting a None object into the queue, just to tell the consumer to exit its loop. Now, you can simply call shutdown. When you do this, any task currently blocked waiting to put or get an item is immediately hit with a QueueShutDown exception. You catch this exception in your worker tasks, clean up your resources, and exit cleanly without any fragile sentinel logic. When designing an asyncio system, remember that queues are not just data structures; they are flow control mechanisms that natively handle backpressure, keeping your memory footprint stable even when producers heavily outpace consumers. That is all for this one. Thanks for listening, and keep building!
12

High-Level Networking with Streams

3m 40s

Dive into high-level IO Streams. We discuss StreamReader, StreamWriter, and why omitting await writer.drain() can silently destroy your server's memory.

Hi, this is Alex from DEV STORIES DOT EU. asyncio, episode 12 of 20. You are sending data over a network connection, and your loop looks perfectly fine. But behind the scenes, your application is silently eating up gigabytes of memory until the system kills it. The issue usually comes down to one missing line of code that handles flow control. That is why we are looking at high-level networking with streams today. Asyncio provides a high-level API to work with network connections without touching raw sockets or low-level transport protocols. To establish a TCP connection, you use a top-level function called open_connection. You pass it a host string and a port integer. It immediately returns a tuple of two objects: a StreamReader and a StreamWriter. If you are building a server instead of a client, you use start_server. You provide a callback function, a host, and a port. Every time a new client connects, asyncio triggers your callback, passing it a dedicated reader and writer for that specific client connection. The StreamReader is your interface for receiving data. It provides asynchronous methods to pull bytes off the network. You can read a specific maximum number of bytes using the read method. If you are parsing line-based protocols, you can read until a specific separator like a newline using the readuntil method. If your protocol requires a fixed-size header, you can use readexactly, which will wait until that exact number of bytes arrives. Because all these operations depend on network traffic and latency, they pause the coroutine, meaning you must await them. Now, the second piece of this is the StreamWriter. This object handles sending data back out. You use the write method to push bytes into the stream. Here is the key insight. The write method is a regular function, not an asynchronous one. You do not await it. When you call write, you are not instantly putting data on the network wire. You are simply putting data into an internal asyncio buffer. 
The underlying event loop tries to flush this buffer to the network in the background. This buffer is where developers run into trouble. Think about a TCP client sending a massive file payload to a slow server. If you put your write call in a tight loop reading chunks from a local disk, Python will read the file vastly faster than the network can transmit it. Because write does not block your code, your loop keeps spinning. The internal buffer absorbs the entire file, consuming all available system memory. This is where backpressure comes in. To manage flow control, you must pair your write calls with the drain method. The drain method is asynchronous, meaning you await it. When you await drain, you tell the event loop to pause your coroutine if the internal buffer has exceeded its high-water mark. Your code waits until the background process pushes enough data over the network to shrink the buffer down to a safe size. The network gets time to catch up, the buffer clears out, and your memory usage stays flat. When you are finished sending your file, you call the close method on the writer. Just like write, close is not an async function. To ensure the connection actually shuts down cleanly and all final bytes are flushed before your program moves on, you follow it by awaiting the wait_closed method. The StreamWriter makes writing to a network feel instantaneous, but physics still applies. Always await drain after you write to ensure your application respects the actual speed of the network connection. Thanks for listening, happy coding everyone!
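The read/write/drain cycle above can be shown with a round trip against a local echo server, so the sketch is self-contained; the host, port, and payload are arbitrary.

```python
import asyncio

async def handle_echo(reader, writer):
    data = await reader.readline()      # line-based read, like readuntil(b"\n")
    writer.write(data)                  # plain function: fills the internal buffer
    await writer.drain()                # flow control: pause if the buffer is full
    writer.close()
    await writer.wait_closed()

async def main():
    server = await asyncio.start_server(handle_echo, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]

    reader, writer = await asyncio.open_connection("127.0.0.1", port)
    writer.write(b"hello\n")            # buffered, not yet on the wire
    await writer.drain()                # respect the network's actual speed
    reply = await reader.readline()
    writer.close()
    await writer.wait_closed()
    server.close()
    await server.wait_closed()
    print(reply)

asyncio.run(main())
```

Note the asymmetry the episode stresses: `write()` and `close()` are plain calls, while `drain()` and `wait_closed()` must be awaited.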
13

Building Async Servers

3m 39s

Construct highly concurrent network servers. Learn how asyncio.start_server abstracts away client connections, spawning an isolated task for every peer.

Hi, this is Alex from DEV STORIES DOT EU. asyncio, episode 13 of 20. Building a highly concurrent TCP server in Python usually means wrestling with thread pools or complex event loop configurations. You can actually handle thousands of connections in fewer than ten lines of code. That is exactly what we are covering today by building async servers with asyncio streams. The foundation of a network server in asyncio is a function called start_server. You pass it three things: a callback function, an IP address, and a port. When you await start_server, it binds to that address and begins listening for incoming TCP connections on the network interface you specified. Developers often assume they need to manually intercept these incoming connections and write boilerplate code to dispatch them to worker threads or custom background tasks. That is completely unnecessary. The framework handles the concurrency for you. Every single time a new client connects to your port, start_server automatically spawns a brand new asyncio task dedicated entirely to that specific client. Think about building a simple chat room server. When your first user connects, start_server triggers your callback function and hands it two objects: a stream reader and a stream writer. If fifty more users connect simultaneously, fifty separate tasks instantly spin up to run that exact same callback function. Each task receives its own isolated reader and writer pair. Inside your callback function, you write the logic as if you are only talking to one person at a time. You use the reader object to listen for incoming messages. You await a read method on the reader, specifying a maximum number of bytes you want to accept, like one hundred bytes. The reader gives you raw bytes from the network, which you decode into a standard text string. To reply to the client, you reverse the process. You encode your response string back into bytes and pass it directly to the writer object. Here is the key insight. 
Passing data to the writer is not an asynchronous operation, but ensuring that data actually leaves the physical machine is. After giving data to the writer, you must await the writer's drain method. Draining pauses your current client task until the operating system's network buffer has enough free space to push the bytes out over the wire. This step is critical because it prevents your server from consuming all available memory if a client has a slow network connection. When the conversation finishes, or if the client disconnects, you tell the writer to close. You then await the writer's wait_closed method to ensure all final bytes are transmitted and the underlying socket shuts down cleanly. Back in your main setup function, start_server returned a server object. By default, the server stops listening if the main Python script reaches the end of its instructions. To keep your chat room open indefinitely, you take that server object and await its serve_forever method. This locks the main asyncio task into an infinite loop, quietly accepting new connections and spawning new client tasks in the background. The real power of this design is that it abstracts away the networking complexity. You write straightforward, sequential code for a single isolated connection, and the event loop scales it across concurrent tasks automatically. If you want to help support the show, you can search for DevStoriesEU on Patreon. That is all for this one. Thanks for listening, and keep building!
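The chat-style callback above can be sketched as an uppercase-echo server. `start_server` runs `handle_client` in its own task for every peer; for the sketch to terminate, we talk to the server once instead of awaiting `serve_forever()`.

```python
import asyncio

async def handle_client(reader, writer):
    # Written as if we are talking to exactly one person at a time.
    while data := await reader.read(100):             # up to 100 bytes per read
        writer.write(data.decode().upper().encode())  # decode, transform, re-encode
        await writer.drain()                          # wait for OS buffer space
    writer.close()                                    # client disconnected
    await writer.wait_closed()

async def main():
    server = await asyncio.start_server(handle_client, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]
    # A real server would now `await server.serve_forever()`;
    # here we exercise it once and shut down.
    reader, writer = await asyncio.open_connection("127.0.0.1", port)
    writer.write(b"hello")
    await writer.drain()
    print(await reader.read(100))
    writer.close()
    await writer.wait_closed()
    server.close()
    await server.wait_closed()

asyncio.run(main())
```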
14

Non-blocking Subprocesses

3m 14s

Run shell commands asynchronously. Discover why using the standard subprocess module halts the event loop, and how asyncio.create_subprocess_exec fixes it.

Hi, this is Alex from DEV STORIES DOT EU. asyncio, episode 14 of 20. You build an asynchronous web API, trigger a standard system command inside an endpoint, and suddenly every other concurrent task is instantly paralyzed. Nothing moves until that system command finishes. The culprit is the standard Python subprocess module, and resolving this requires Non-blocking Subprocesses. Calling a function like standard subprocess dot run executes an operating system command and waits for it to complete. In an asynchronous Python application, the event loop runs on a single thread. When you block that thread waiting for the operating system, the event loop stops. Every other concurrent request to your API sits frozen. To fix this, asyncio provides its own subprocess functions designed specifically for the event loop. The primary tool is asyncio dot create subprocess exec. Here is the key insight. This function does not execute the command directly in Python. It asks the operating system to spawn a child process, but instead of blocking while waiting for the result, it immediately yields control back to the event loop. Your API handles other requests while the external program runs. Take the scenario of a web API that converts video files using FFmpeg. You want to trigger the conversion and stream the output logs back to the user in real time. Inside your async endpoint, you call create subprocess exec. You pass the program name, FFmpeg, followed by its arguments. To capture the logs, you tell the function to route the standard output and standard error to asyncio pipes. The function returns an asyncio Process object. This object represents the running OS command and gives you asynchronous hooks to interact with it. Because you routed the outputs to pipes, the Process object exposes them as asynchronous stream readers. You read the FFmpeg logs by iterating over the standard error stream asynchronously, since FFmpeg typically logs there.
For each line the external process produces, your async loop wakes up, reads the line, and streams it back to the web user. While waiting for the next line, the Python event loop goes right back to serving other users. You get real-time log streaming without freezing the server. If you do not need to stream the output line by line, the Process object also provides an async communicate method. You await communicate to send data to standard input and read all standard output and standard error data at once. This keeps the loop free until the external process finishes completely and returns the data. If you handled the streams manually like in the FFmpeg example, you instead await the wait method on the Process object to wait for the process to terminate and collect its exit code. The event loop does not care if the operating system is doing the actual computation; if your Python code waits synchronously for the OS to report back, your entire asynchronous application is dead in the water. That is all for this one. Thanks for listening, and keep building!
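Here is the same pattern in miniature, with a tiny `sys.executable -c` script standing in for FFmpeg so the sketch runs anywhere Python does; the "frame" log lines are invented stand-ins for FFmpeg's stderr output.

```python
import asyncio
import sys

async def main():
    proc = await asyncio.create_subprocess_exec(
        sys.executable,
        "-c",
        "import sys\nfor i in range(3): print(f'frame {i}', file=sys.stderr)",
        stderr=asyncio.subprocess.PIPE,      # route stderr to an async pipe
    )
    lines = []
    async for raw in proc.stderr:            # wakes per line; the loop stays free
        lines.append(raw.decode().strip())   # stand-in for streaming to the user
    await proc.wait()                        # collect the exit code
    print(lines, proc.returncode)

asyncio.run(main())
```

Because the stream was consumed manually, the code awaits `wait()` at the end rather than `communicate()`, matching the distinction drawn above.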
15

Futures: The Low-Level Bridge

3m 39s

Unpack the foundation of await statements. We examine asyncio.Future, its role as an eventual result, and how it bridges legacy callback code with modern syntax.

Hi, this is Alex from DEV STORIES DOT EU. asyncio, episode 15 of 20. You write clean, modern asynchronous code, but eventually, you have to interface with an old, stubborn library that relies entirely on callbacks. You cannot await a callback directly, which breaks your entire asynchronous flow. The mechanism that bridges these two worlds is Futures: The Low-Level Bridge. Let us clear up a common confusion immediately. People frequently mix up Tasks and Futures. A Task is a specific subclass of a Future. A Task wraps a coroutine and actively schedules it on the event loop, driving its execution step by step. A Future does not run anything. It has no execution logic of its own. It is simply a state container. It is a low-level primitive representing an eventual result from an asynchronous operation. When writing modern Python, you almost never instantiate a Future directly. The event loop creates them under the hood. But when you need to wrap legacy, callback-based code, you construct them manually. Consider a scenario where you are using an older network protocol library. It has a request method that takes a network address, a success callback, and a failure callback. You want your modern async function to simply call await on this request. Here is how you bridge the gap. Inside your async function, you get the current running event loop and ask it to create a new Future object. At this exact moment, the Future is in a pending state. It is empty and waiting. Next, you write a small success callback function. When triggered, this function takes the incoming data and calls the set result method on your Future. You also write an error callback that calls the set exception method on the same Future. You pass both of these functions into the legacy request method and start the network call. Finally, you await the Future. Here is the key insight. Awaiting a pending Future pauses the current coroutine. It yields control back to the event loop, allowing other tasks to run. 
Your code sits frozen at that await statement. Meanwhile, the legacy client does its network input and output in the background. When the data arrives, the legacy client triggers your success callback. Your callback calls set result on the Future. The Future immediately transitions from the pending state to the finished state. The event loop notices this state change. It wakes up the coroutine that was waiting on that Future, unpacks the stored result, and your async function resumes execution just as if it had awaited a native coroutine. If the network call failed, your error callback sets an exception on the Future instead. When the event loop wakes the coroutine, it raises that exact exception at the await line. A Future has strict rules about state. It can only transition out of the pending state once. If a callback attempts to call set result on a Future that is already finished, Python raises an invalid state error. You can also cancel a Future manually. If you do, it enters a cancelled state, and any coroutine awaiting it immediately receives an asyncio Cancelled Error. Futures provide the necessary structural glue between event-driven callbacks and procedural-looking await statements. Understanding that every await statement ultimately pauses execution until a low-level Future is marked as finished gives you total clarity on how asynchronous Python actually operates under the surface. That is all for this one. Thanks for listening, and keep building!
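The bridge can be sketched like this. `legacy_request` is a made-up stand-in for a callback-based library; only the Future wiring around it is the real asyncio pattern.

```python
import asyncio

def legacy_request(loop, on_success, on_error):
    # A real legacy client would do network I/O and invoke one of the
    # callbacks; here we simply schedule the success path on the loop.
    loop.call_later(0.01, on_success, {"status": "ok"})

async def fetch():
    loop = asyncio.get_running_loop()
    fut = loop.create_future()          # pending: an empty state container
    legacy_request(
        loop,
        on_success=fut.set_result,      # pending -> finished with a value
        on_error=fut.set_exception,     # pending -> finished with an exception
    )
    return await fut                    # suspends until a callback resolves it

print(asyncio.run(fetch()))
```

One caveat worth noting: this wiring assumes the legacy callbacks fire on the event loop's own thread; a library that calls back from a foreign thread would need the thread-safe scheduling covered in episode 17.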
16

Transports and Protocols

3m 42s

Go under the hood to see how asyncio talks to the OS. Understand the callback-driven, 1:1 relationship between Transports (how bytes move) and Protocols (what bytes mean).

Hi, this is Alex from DEV STORIES DOT EU. asyncio, episode 16 of 20. When you use high-level asyncio streams, your code looks clean, sequential, and safely awaited. But underneath those friendly coroutines sits a heavily optimized, callback-driven engine dealing with messy operating system calls. To understand how your Python application actually talks to a network, you need to look at Transports and Protocols. These two abstractions form the foundation of asyncio networking. They always work as a pair. The protocol handles the application logic, deciding what bytes to send and how to interpret incoming data. The transport handles the mechanics. It does not care what your data means or how it is formatted. Its only job is figuring out how to push those bytes over the wire. Today, we are focusing squarely on the transport layer. Think about what happens when you write to a non-blocking TCP socket directly. You have to ask the operating system if the socket is ready. You have to handle partial writes if the network buffer is full. You have to track which bytes actually sent and which ones need to be tried again later. An asyncio transport hides all of this complexity. It acts as an opaque wrapper around the raw socket and the underlying operating system calls. You generally never instantiate a transport yourself. Instead, you call an event loop method to create a network connection. The event loop sets up the socket, creates the transport, links it to your protocol, and hands the pair back to you. Here is the key insight. Once that connection is established, the transport takes over the input and output buffering. When your protocol wants to send a message, it simply passes a chunk of bytes to the transport write method. The transport does not block your code to wait for the network. It immediately drops those bytes into its own internal buffer. 
The transport then works with the event loop in the background, firing off the non-blocking socket calls to the operating system. If the system can only take half the bytes right now, the transport holds onto the rest and tries again on the next loop iteration. Your application never has to micromanage that queue. Flow control is built right into this mechanism. If you write data faster than the network can send it, the transport internal buffer will start to fill up. Once it hits a designated limit, the transport triggers a specific callback on your protocol to pause writing. When the buffer finally drains, it fires another callback to resume. On the receiving side, the transport listens to the event loop. When the operating system signals that incoming bytes have arrived, the transport pulls them from the socket and feeds them directly into the protocol through a callback. Everything at this low level is purely callback-driven. There are no awaitables here. Transports also provide standardized methods for managing the connection lifecycle. You can gracefully close a transport, which tells it to finish sending any buffered data before safely shutting down the socket. If things go wrong, you can call an abort method to tear down the connection immediately, discarding whatever is left in the queue. And if your protocol needs to know who it is talking to, the transport provides a method to request extra information, allowing you to peek through the abstraction and retrieve the underlying socket IP address or peer details. The transport abstraction is what allows your asyncio code to remain purely focused on data logic. Transports isolate your application from the chaotic mechanics of non-blocking I/O; they take raw bytes from your protocol and silently handle the buffering, retries, and operating system socket calls needed to move them across the network. That is all for this one. Thanks for listening, and keep building!
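The 1:1 pairing can be sketched with a minimal echo exchange. The protocol classes here hold the application logic; the transports are created and wired up entirely by the event loop, exactly as described above.

```python
import asyncio

class EchoServerProtocol(asyncio.Protocol):
    def connection_made(self, transport):
        self.transport = transport          # opaque wrapper around the socket

    def data_received(self, data):          # transport feeds us incoming bytes
        self.transport.write(data)          # buffered, non-blocking write
        self.transport.close()              # flush remaining data, then shut down

class EchoClientProtocol(asyncio.Protocol):
    def __init__(self, done):
        self.done = done

    def connection_made(self, transport):
        transport.write(b"ping")            # goes into the transport's buffer

    def data_received(self, data):
        self.done.set_result(data)          # hand the reply back to a coroutine

async def main():
    loop = asyncio.get_running_loop()
    server = await loop.create_server(EchoServerProtocol, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]
    done = loop.create_future()
    transport, _ = await loop.create_connection(
        lambda: EchoClientProtocol(done), "127.0.0.1", port)
    print(await done)
    transport.close()
    server.close()
    await server.wait_closed()

asyncio.run(main())
```

Notice that nothing inside the protocol methods is awaited: everything at this level is the callback-driven machinery the episode describes, with the Future as the only bridge back to coroutine land.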
17

Threading in an Async World

3m 17s

Bridge synchronous and asynchronous worlds. Learn how to offload heavy blocking code safely using executors and thread-safe callbacks without locking up the loop.

Hi, this is Alex from DEV STORIES DOT EU. asyncio, episode 17 of 20. You drop a standard background thread into your async web server to handle a slow task, and suddenly your application starts deadlocking or throwing cryptic state errors. Mixing standard threads with an async event loop is a recipe for disaster unless you use the designated thread-safe bridges. Today we are covering Threading in an Async World. The primary rule of asyncio is that the event loop runs in a single thread. Because of this, almost all asyncio objects are not thread-safe. A common mistake is spawning a standard background thread, doing some work, and then trying to resolve an async future or schedule a callback directly from that thread. If you touch an asyncio object from a thread other than the one running the event loop, you will corrupt the loop state. To send a message from a background thread into your event loop, you must use call soon threadsafe. This is a method on the loop itself. You give it the callback function you want to run and the arguments. Instead of executing it immediately, your background thread places that callback into a secure internal queue. The main event loop checks this queue and executes your callback safely in the main thread during its normal cycle. This is the only safe way for an external thread to poke the event loop. Now consider the reverse situation. You are running your async event loop, and you need to execute a piece of synchronous, blocking code. A classic scenario is querying a slow, synchronous PostgreSQL driver like psycopg2. If you execute a five-second database query directly inside your async request handler, your entire web server halts. The event loop cannot process any other network traffic or timers until that database query returns. Here is the key insight. To prevent the loop from freezing, you push that blocking work out to a separate thread using run in executor. This is another method on the event loop. 
You pass it a thread pool executor and your synchronous database function. The loop hands the function off to a background thread in the pool and immediately returns an awaitable object. You await that object. While your database query runs in the background thread, your event loop is totally free to pause that specific task and go handle hundreds of other web requests. Once the PostgreSQL driver finally returns the data, the thread pool safely passes the result back to the event loop. Your awaitable resolves, and your original async function resumes execution right where it left off, now holding the database results. You have two one-way bridges. Use call soon threadsafe to push events from a worker thread into your async loop. Use run in executor to push blocking synchronous work out of your async loop into a worker thread. Never let a synchronous call hijack your event loop, and never let a background thread touch your async objects directly. That is all for this one. Thanks for listening, and keep building!
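Both one-way bridges can be shown in one short sketch. `slow_query` stands in for a blocking driver such as psycopg2; no real database is involved.

```python
import asyncio
import threading
import time

def slow_query():
    time.sleep(0.05)                    # blocking: would freeze the loop if run inline
    return ["row1", "row2"]

async def main():
    loop = asyncio.get_running_loop()

    # Bridge 1: push blocking work OUT of the loop into a worker thread.
    # None selects the loop's default thread pool executor.
    rows = await loop.run_in_executor(None, slow_query)

    # Bridge 2: push an event INTO the loop from a foreign thread.
    fut = loop.create_future()

    def external_thread():
        # Calling fut.set_result(...) directly here would corrupt loop state;
        # call_soon_threadsafe hands the callback to the loop's own thread.
        loop.call_soon_threadsafe(fut.set_result, "pinged")

    threading.Thread(target=external_thread).start()
    print(rows, await fut)

asyncio.run(main())
```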
18

Async Generators & Cleanup

3m 10s

Avoid resource leaks with async generators. We explore why 'async for' iteration can leave dangling connections when interrupted, and how aclosing() provides safety.

Hi, this is Alex from DEV STORIES DOT EU. asyncio, episode 18 of 20. You hit a timeout fetching rows from a database. Your code handles the exception and moves on, but days later your application crashes because your connection pool is completely exhausted. You exited an asynchronous loop early, and it silently left database connections open in the background. The fix lies in mastering Async Generators and Cleanup.

When you write an asynchronous generator to yield items over time, you often manage resources. Consider a database cursor. You write a generator that acquires a connection, yields rows one by one, and uses a try-finally block to return that connection to the pool when the fetching is done. If you iterate through every single row, the generator finishes, hits the finally block, and cleans up. Everything works.

The danger appears when you do not consume the entire generator. If your iteration is wrapped in a timeout, or if you simply hit a break statement after finding the row you need, the generator pauses. It is suspended at its last yield. It has not reached the finally block. Your database connection is still held open.

You might expect Python's garbage collector to eventually handle this. In synchronous code, when a generator loses all references and is collected, Python injects a GeneratorExit exception that runs its finally blocks. But asynchronous code complicates this. Garbage collection is a synchronous process. When the garbage collector eventually finds your suspended async generator, it cannot reliably run asynchronous teardown code. The event loop might be busy, or it might even be closed. Relying on the garbage collector to clean up an async generator results in unpredictable behavior and dangling resources. This is the part that matters. The asyncio documentation is explicit that you should not rely on garbage collection for async generator cleanup. You must close them deterministically.

The standard library provides a direct tool for this: aclosing, found in the contextlib module. It is an asynchronous context manager whose only job is to guarantee that the generator's aclose method is called and awaited the moment you are done with it. Instead of feeding your generator directly into an async for loop, you wrap it. First you create the generator instance. Then you pass it to an async with aclosing statement. Inside that context block, you run your async for loop.

When you structure your code this way, an early exit triggers the context manager. If a timeout interrupts the loop, the async with block catches the exit and explicitly awaits aclose on the generator. This safely injects GeneratorExit into the suspended generator while you are still actively running in the event loop. Your finally block executes immediately, awaiting any necessary teardown steps, and your database connection goes safely back to the pool.

Whenever an asynchronous generator acquires network connections, file descriptors, or database locks, wrap it in aclosing before iterating to guarantee deterministic cleanup, regardless of timeouts or early breaks. That is all for this one. Thanks for listening, and keep building!
19

Mastering Debug Mode

3m 22s

Catch concurrency bugs instantly. Learn how to use PYTHONASYNCIODEBUG to profile slow callbacks, unearth unawaited coroutines, and pinpoint never-retrieved exceptions.

Hi, this is Alex from DEV STORIES DOT EU. asyncio, episode 19 of 20. Your production server is experiencing mysterious lag spikes, and your background operations are randomly swallowing errors without a trace. The problem is not your application logic, but how standard asyncio hides concurrency mistakes to save on performance. Mastering Debug Mode is the answer to instantly exposing these failures.

Asyncio debug mode acts as a strict mode for the event loop. By default, asyncio prioritizes raw speed over runtime safety checks, which means that when things go wrong, they often fail silently. You enable debug mode globally by setting the environment variable PYTHONASYNCIODEBUG to 1, or by running Python with the -X dev flag. You can also turn it on dynamically by calling set_debug(True) on the event loop object, or by passing debug=True to asyncio.run.

Take the lag spike scenario. You have a web server handling thousands of concurrent requests, and suddenly a single endpoint causes the entire application to freeze. You suspect a rogue regex operation is locking up the thread, but standard logging only tells you when a request starts or finishes, not what blocked the loop in between. When debug mode is active, the event loop measures the execution time of every single callback. If a callback blocks the loop for more than one hundred milliseconds, asyncio automatically logs a warning. This warning includes the exact file and line number where the freeze occurred, pointing you straight to that expensive regex search. That one-hundred-millisecond threshold is the default, but you can tune it for your latency requirements through the loop's slow_callback_duration attribute.

Debug mode also catches silent execution failures. A frequent mistake in async code is calling a coroutine function but forgetting the await keyword. The function returns a coroutine object, but the actual logic never runs. In normal execution, that object gets quietly discarded. Debug mode tracks this. When the garbage collector cleans up an unawaited coroutine, the debug loop intercepts it and emits a warning showing exactly where the orphaned coroutine was created, so you can fix the invocation.

This same safety net applies to background tasks. If an asyncio task crashes, the exception is stored inside the task object itself. If your code never explicitly awaits that task or retrieves its result, the exception simply vanishes. With debug mode enabled, asyncio monitors the lifecycle of every task. If a task is destroyed and its internal exception was never retrieved, the event loop loudly logs the error along with the traceback showing where the task was originally spawned.

These checks do add overhead, so you typically leave debug mode off in normal production environments, saving it for local development or targeted troubleshooting. Here is the key insight. Enabling debug mode shifts the burden of finding silent concurrency bugs from your own manual logging right back onto the event loop itself. If you enjoy the podcast and want to help support us, just search for DevStoriesEU on Patreon. That is all for this one. Thanks for listening, and keep building!
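A small self-contained sketch of the slow-callback warning. The logging-capture scaffolding exists only so the demo can inspect its own output; in real code you would simply watch the asyncio logger. The deliberate time.sleep stands in for the rogue regex:

```python
import asyncio
import logging
import time

# Capture asyncio's log output so we can inspect it afterwards.
records: list[str] = []

class _Capture(logging.Handler):
    def emit(self, record: logging.LogRecord) -> None:
        records.append(record.getMessage())

logging.getLogger("asyncio").addHandler(_Capture())
logging.getLogger("asyncio").setLevel(logging.WARNING)

async def main() -> None:
    loop = asyncio.get_running_loop()
    loop.slow_callback_duration = 0.1   # 0.1s is also the default threshold
    time.sleep(0.25)                    # synchronous sleep: freezes the loop

# debug=True has the same effect as PYTHONASYNCIODEBUG=1 for this run
asyncio.run(main(), debug=True)

print(records)  # a warning naming the slow callback and how long it took
```

The logged message pinpoints which callback blocked the loop and for how long, which is the information plain request logging cannot give you.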
20

Extending & Custom Loops

3m 30s

The finale. We explore advanced integration and what it takes to write a custom event loop or subclass BaseEventLoop for specialized, high-performance environments.

Hi, this is Alex from DEV STORIES DOT EU. asyncio, episode 20 of 20. You hit a performance wall with your asynchronous code, and profiling points directly at the core event loop itself. You cannot rewrite the standard library, but you need lower-level control over exactly how the system handles sockets and tasks. The answer lies in Extending and Custom Loops.

The standard asyncio event loop is not a hardcoded black box. It is an extensible interface, designed from the ground up to be fully replaceable by high-performance C libraries or specialized Python implementations. Most application developers will never need to build a custom loop. However, if you are a framework author, or you are building an optimized loop like uvloop, you need to bypass standard behavior and integrate directly with lower-level operating system primitives.

To build a custom event loop, you typically subclass an existing implementation such as SelectorEventLoop, or implement the AbstractEventLoop contract from scratch. That contract defines how every asynchronous operation must behave. By inheriting from a working loop, you get the structure for free, but you can override specific methods to intercept and redefine fundamental operations.

Consider socket creation. In a standard application, you ask asyncio to open a connection, and it uses the default Python socket implementation. But in a custom loop subclass, you can override the network creation methods. This means when the application requests a network connection, your custom loop intercepts that call. You can then route the request through highly optimized C code, or tie it directly to advanced kernel features that standard Python does not expose. The application code does not change, but the underlying machinery is entirely yours.

This granular control also applies to task management. Here is the key insight. The event loop is responsible for tracking every single asynchronous task. Under the hood, asyncio's extending API provides lifecycle hooks, _register_task and _unregister_task, that keep tasks visible to introspection functions like asyncio.all_tasks. By tapping into task creation, for example by overriding create_task on your loop subclass, you intercept a task the instant it is created. Why does this matter? If you are building a custom runtime, you might need to track deep diagnostics, implement specialized memory pooling for tasks, or stream task state to a custom monitoring service. Hooking registration gives you a guaranteed entry point into the lifecycle of every coroutine before it begins executing, and the corresponding unregistration hook lets you handle cleanup exactly how your framework requires.

Once your custom loop class is built, you have to tell Python to actually use it. Historically, you did this by installing a custom event loop policy, a factory that dictates which loop implementation gets created when a thread asks for one. The policy system is deprecated as of Python 3.14, so the modern route is to pass a loop_factory to asyncio.run or asyncio.Runner; from that point forward, your entry point hands out your optimized, custom version.

The true power of asyncio is not just the async and await syntax. It is the fact that the entire execution engine is a pluggable interface, ready to be swapped out the moment standard performance limits your architecture. Since this wraps up our series, I encourage you to read the official documentation, try extending these components hands-on, or visit devstories dot eu to suggest topics for future series. That is all for this one. Thanks for listening, and keep building!