Season 37 · 7 Episodes · 24 min · 2026

SQLAlchemy

v2.0 — 2026 Edition. A comprehensive audio course on SQLAlchemy, covering both the Core and the ORM, built around the 2.0 API. Learn how to map your domain, structure your application, manage transactions with the Session, and execute queries effectively.

Databases · ORM · Python · Core

Episodes

1

The Foundation: What is SQLAlchemy and the ORM?

3m 24s

Welcome to SQLAlchemy. We introduce the core architecture, explaining the difference between the schema-centric Core and the domain-centric ORM. You will learn the foundational lingo and why you need an ORM.

Hi, this is Alex from DEV STORIES DOT EU. SQLAlchemy, episode 1 of 7. Have you ever accidentally dropped a table or mangled a query because of a typo in a raw SQL string? String-concatenated queries are fragile, hard to maintain, and a security risk. This episode covers the solution: The Foundation: What is SQLAlchemy and the ORM?

The core problem developers face when writing database-backed applications is that Python and relational databases think differently. Python uses objects. Objects have state, behavior, and complex relationships. Relational databases use tables, columns, and rows. They rely on primary and foreign keys. Getting a Python object into a database table requires translating its state into a SQL statement. Getting it back requires parsing rows and repopulating a new object. This friction is called the object-relational impedance mismatch. Writing raw SQL strings to handle this translation is tedious and error-prone.

SQLAlchemy is a Python SQL toolkit designed to solve this exact problem. Unlike some tools that try to hide the database behind a wall of abstraction, SQLAlchemy embraces SQL. It provides a robust, Pythonic workflow that generates efficient SQL while keeping you in control. The architecture is split into two distinct parts: the Core and the ORM.

The SQLAlchemy Core is the foundation. It is command-oriented and schema-oriented. This means you work directly with database tables, columns, and rows, but you do it using Python objects instead of raw text strings. If you want to select data, you call a select method. If you want to filter, you chain a where method. This removes the risk of syntax errors from string concatenation and protects against SQL injection. The Core is essentially a SQL expression language.

The ORM, or Object Relational Mapper, sits on top of the Core. While the Core is schema-oriented, the ORM is state and domain-oriented. Here is the key insight. With the ORM, you stop thinking about tables and rows, and start thinking about your application domain model. You define a standard Python class, for example, a user class. You configure the ORM to map this class to a user table in the database, and map the class attributes to the table columns.

When you query the database using the ORM, you do not get back raw rows of data. You get back fully populated instances of your user class. The ORM handles the translation seamlessly. More importantly, it tracks the state of those objects. If you change a user name in Python, the ORM notices that the object has been modified. When you tell the database session to save your changes, the ORM automatically figures out the correct update statements and executes them through the Core. You focus on your Python objects, and the ORM handles the database synchronization.

The most important thing to remember is that the ORM is not a black box that replaces the Core. It is an optional layer built entirely out of Core components, allowing you to seamlessly drop down to explicit SQL generation whenever your domain logic requires maximum performance.

If you find these episodes helpful and want to support the show, you can search for DevStoriesEU on Patreon. Thanks for spending a few minutes with me. Until next time, take it easy.
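To make the Core side concrete, here is a minimal sketch of the expression language described above, using an in-memory SQLite database; the user_account table and the sample row are illustrative, not part of the episode itself.

```python
from sqlalchemy import (Column, Integer, MetaData, String, Table,
                        create_engine, insert, select)

# Core is schema-centric: the Table object itself is what you work with.
metadata = MetaData()
user_table = Table(
    "user_account",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String(30)),
)

engine = create_engine("sqlite+pysqlite:///:memory:")
metadata.create_all(engine)

with engine.connect() as conn:
    conn.execute(insert(user_table), [{"name": "Alice"}])
    # select() and where() build the statement as Python objects: no string
    # concatenation, and "Alice" is passed as a safely bound parameter.
    stmt = select(user_table).where(user_table.c.name == "Alice")
    print(conn.execute(stmt).all())
    conn.commit()
```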
2

The Engine: Your Gateway to the Database

3m 30s

Every SQLAlchemy application starts with the Engine. Learn how to establish connectivity, what connection pooling is, and how dialects and DBAPIs bridge the gap to your database.

Hi, this is Alex from DEV STORIES DOT EU. SQLAlchemy, episode 2 of 7. You want to talk to a database, but different databases speak different dialects, and Python drivers all have their own specific quirks. You need a single, unified component that handles connection pooling, dialect translation, and driver management without bleeding those details into your application logic. That component is the Engine.

The Engine is the starting point for any SQLAlchemy application. It acts as a central registry and a factory for database connections. When your application needs to talk to the database, it ultimately goes through the Engine. Behind the scenes, the Engine holds a connection pool. This means it maintains a cache of active database connections, keeping them open and ready to use. Reusing connections from a pool is much faster than negotiating a brand new network connection every single time you need to execute a query.

To instantiate an Engine, you use a function called create engine. This function requires a database URL. The URL is a string that provides the location of the database, the credentials, and two critical pieces of configuration: the dialect and the DBAPI.

The dialect is the database family you are targeting. SQL is a standard, but every database engine implements it slightly differently. The dialect tells SQLAlchemy whether it is formatting queries for PostgreSQL, MySQL, Oracle, or SQLite. It handles all the vendor-specific variations so your Python code can remain consistent.

Here is the key insight. SQLAlchemy does not connect to your database directly. It always delegates the actual network communication to a third-party Python driver. This driver is known as the DBAPI. If you are using PostgreSQL, your DBAPI might be psycopg2. If you are using SQLite, it is typically pysqlite. The database URL specifies both of these together. For example, the URL might start with the string sqlite plus pysqlite. This explicitly tells the Engine to format queries using the SQLite dialect and to transmit them using the pysqlite driver.

Let us look at setting up an in-memory SQLite database. This is a very common pattern for testing because the database lives entirely in RAM and disappears when the program exits. You call the create engine function and pass it your URL string. For an in-memory database, the URL is sqlite plus pysqlite colon slash slash slash colon memory colon. When you run that line of code, the Engine object is created, but it does not immediately connect to the database. The Engine is lazy. It prepares the configuration and sets up the connection pool, but it waits until the very first time you explicitly ask it to execute a task before it actually reaches out to the DBAPI to establish a connection.

While developing, you often need to verify exactly what SQLAlchemy is sending to the database. The create engine function accepts an optional parameter called echo. If you set echo to true, the Engine will log all the raw SQL statements it generates directly to your standard output. It acts as a built-in debugging tool, letting you see the exact translation between your Python operations and the resulting SQL.

The Engine exists to abstract away the messy reality of network connections and database drivers. It gives your application a clean, stable interface, ensuring that the rest of your code never has to care about how the database is physically reached. Thanks for hanging out. Hope you picked up something new.
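For reference, here is a minimal sketch of the engine setup this episode describes; the throwaway SELECT at the end is only there to trigger the lazy connection and show the echo output.

```python
from sqlalchemy import create_engine, text

# Dialect "sqlite" plus DBAPI "pysqlite" (SQLAlchemy's driver name for the
# standard-library sqlite3 module), targeting an in-memory database.
# echo=True logs every generated SQL statement to standard output.
engine = create_engine("sqlite+pysqlite:///:memory:", echo=True)

# The Engine is lazy: the first real DBAPI connection happens here,
# not inside create_engine().
with engine.connect() as conn:
    print(conn.execute(text("SELECT 'hello world'")).all())
```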
3

Mapping the Domain: Declarative Base and Models

3m 52s

Translate your Python classes into database tables automatically. We cover the DeclarativeBase, Mapped types, and how mapped_column constructs your database metadata.

Hi, this is Alex from DEV STORIES DOT EU. SQLAlchemy, episode 3 of 7. You write your Python domain models, and then you have to write a separate pile of SQL just to create the tables to hold them. Over time, those two separate definitions almost always drift out of sync. What if your Python classes could automatically design your database schema? That is exactly what we solve today with Mapping the Domain: Declarative Base and Models.

Before we look at the classes themselves, we have to talk about database metadata. In SQLAlchemy, metadata is effectively a structural catalog. It is a central Python registry that stores the exact blueprint of your database. It tracks every table, every column, and every constraint you define. Whenever you map a class in SQLAlchemy, the details of that class feed directly into this single catalog.

To connect your Python classes to this metadata catalog, you use a construct called DeclarativeBase. You do not use DeclarativeBase directly. Instead, you create your own custom base class by inheriting from it. From that point forward, every single model you build in your application will inherit from your custom base class. The moment a class inherits from it, SQLAlchemy quietly registers that new model into the underlying metadata catalog.

Consider how you create a User model. You define a Python class named User and inherit from your custom base class. The first thing you define inside this class is an attribute named tablename, spelled with double underscores on each side. You set this to the string user_account. This explicit assignment tells SQLAlchemy exactly which underlying database table will store these Python objects.

Next, you establish the columns. SQLAlchemy 2.0 relies on standard Python type hints to do this. You define your attributes using a special annotation called Mapped. For an identifier, you annotate your ID attribute as Mapped containing an integer. For a username, you annotate it as Mapped containing a string. This type hint is strictly for Python. It tells your IDE and your type checker what data to expect.

But the database engine needs more specific instructions than a simple Python type. This is where a function called mapped_column comes into play. You assign mapped_column to your class attribute, and inside its parentheses, you configure the database-specific rules. For your user ID, you call mapped_column and pass a flag that explicitly marks it as the primary key. For a string column, you might pass a maximum character limit or a flag requiring the field to be unique.

If your Python type hint provides all the context SQLAlchemy needs, you can actually omit mapped_column entirely. SQLAlchemy will infer a basic, standard database column directly from the Mapped annotation. But for primary keys, or any specific database constraints, mapped_column is strictly required.

Here is the key insight. Once your Python classes are written, your schema design is already finished. The metadata catalog now contains a perfect map of your domain. You can call a method named create_all on your base class metadata, passing in your database engine. SQLAlchemy looks at your User class, reads the mapped columns, instantly translates them into perfectly formatted CREATE TABLE statements, and applies them. The exact same Python code you write to run your application becomes the single source of truth that builds your database structure.

Thanks for hanging out. Hope you picked up something new.
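Here is a minimal sketch of the User model as the episode walks through it; the String length, the unique flag, and the fullname column are illustrative choices.

```python
from sqlalchemy import String, create_engine
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column

class Base(DeclarativeBase):
    """Custom base class; every model in the application inherits from this."""

class User(Base):
    __tablename__ = "user_account"

    # mapped_column carries the database-specific rules...
    id: Mapped[int] = mapped_column(primary_key=True)
    name: Mapped[str] = mapped_column(String(30), unique=True)
    # ...while a bare Mapped annotation is enough for a plain column.
    fullname: Mapped[str]

engine = create_engine("sqlite+pysqlite:///:memory:")
Base.metadata.create_all(engine)  # emits CREATE TABLE from the metadata catalog
```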
4

Project Layout: Structuring Your Application

3m 19s

Code organization matters. Learn the best practices for structuring an SQLAlchemy project repository so your engine, models, and sessions stay clean and maintainable.

Hi, this is Alex from DEV STORIES DOT EU. SQLAlchemy, episode 4 of 7. You have written your models, your engine, and your queries. But dropping them all into a single file is a fast track to a maintenance nightmare and circular imports. The solution is Project Layout: Structuring Your Application.

When you look at tutorials, the code is usually contained in one script. You see the engine setup, the base class, the models, and the queries stacked top to bottom. That design is great for learning the syntax, but if you try to build a production system that way, it falls apart. As soon as you add web routes or background workers, you run into trouble. If a web route needs to import a user model, but the user model is in the same file that starts the application server or initializes the database connection, Python gets confused. You end up with circular imports, and your code becomes impossible to test. You need strict separation of concerns. The standard pattern is to break that single script into distinct modules based on their job.

First, you create a dedicated file, typically called database dot py. This file has exactly one responsibility, which is managing the database connection. This is where you put your create engine call and set up your session maker. You do not define tables here, and you do not run queries here. By isolating the engine, you ensure that your application only creates a single connection pool. Any other module in your project can safely import the session maker from this file without accidentally triggering application startup logic.

Next, you move your table definitions into a file called models dot py. This file contains your Declarative Base and all your mapped classes, like a User or an Address object. It imports nothing from your database file. This is the part that matters. Your models define the shape of your data, but they do not know how to connect to the database. Because models dot py has no dependencies on the engine or the active session, you can import your mapped classes anywhere in your application without worrying about side effects. If your project grows large, you might even split models into a directory with separate files for different domains. As long as they all import the exact same Declarative Base instance, SQLAlchemy knows they belong to the same metadata collection.

Finally, you need a place to actually execute your queries. You keep query execution entirely separate from your models and connection files. Usually, this goes into your route handlers or dedicated service files. When an application needs data, the service file imports the session maker from your database module, and the required mapped classes from your models module. It opens a session using a context manager, runs a select statement, retrieves the objects, and lets the context manager safely close the connection when the block finishes. The service file acts as the coordinator.

Structuring your project this way keeps your connection logic isolated, your data schemas portable, and your business operations clean. The single most important rule to remember is that your data models should never import your database engine. That is all for this one. Thanks for listening, and keep building!
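The layout is easier to see in code. Below is a condensed sketch with the file boundaries marked as comments; the module and function names are illustrative, and the whole thing is collapsed into one runnable script only for demonstration.

```python
# database.py — connection management only
from sqlalchemy import create_engine, select
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column, sessionmaker

engine = create_engine("sqlite+pysqlite:///:memory:")
SessionLocal = sessionmaker(bind=engine)

# models.py — table definitions only; imports nothing from database.py
class Base(DeclarativeBase):
    pass

class User(Base):
    __tablename__ = "user_account"
    id: Mapped[int] = mapped_column(primary_key=True)
    name: Mapped[str]

# services.py — the coordinator: session maker from database, classes from models
def list_users() -> list[User]:
    with SessionLocal() as session:
        return session.scalars(select(User)).all()

# startup code (e.g. main.py) wires everything together once
Base.metadata.create_all(engine)
with SessionLocal() as session:
    session.add(User(name="Alice"))
    session.commit()
print([user.name for user in list_users()])
```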
5

The Session: Mastering the Unit of Work

3m 39s

Discover the Unit of Work pattern through the ORM Session. Learn how to add objects, when flushes occur, and how to commit transactions safely.

Hi, this is Alex from DEV STORIES DOT EU. SQLAlchemy, episode 5 of 7. Imagine a staging area where you can queue up dozens of database changes and push them all at once, or cancel them instantly. That is the exact problem solved by the concept in this episode: The Session: Mastering the Unit of Work.

The SQLAlchemy Session is the primary way your Python objects communicate with the database. It operates on a pattern called the Unit of Work. When you use the Session, you are not firing individual commands directly at the database. You are interacting with a smart workspace. The Session watches your operations, gathers up the additions, modifications, and deletions, and calculates the most efficient way to translate them into SQL when the time is right.

Inside this workspace is a mechanism called the Identity Map. This is an internal dictionary that links the primary key of a database row to the specific Python object instance in memory. If you request a user with the ID of five, the Session checks the Identity Map first. If the object is already loaded, you get the exact same Python instance back. This guarantees you never have two distinct Python objects representing the same database row at the same time.

Let us look at inserting new data. You start by creating an instance of your mapped User class, giving it a name like Alice. At this stage, your object is in a state called transient. It exists solely in Python memory. The database has no idea it exists, and the Session is unaware of it.

To link the object to your workspace, you pass it into the add method of the Session. The object moves into a state called pending. The Session has recorded your intent to insert this new user, but it has not sent any SQL over the network yet. The object is simply waiting in the queue.

This introduces the strict difference between a flush and a commit. When you call flush on the Session, it takes your pending queue and pushes the changes into the current database transaction. It emits the SQL INSERT statement. The database executes it and assigns a primary key. Back in your Python code, your user object is instantly populated with that new database ID. The object has transitioned to a persistent state.

Here is the key insight. Even though the object is persistent and has an ID, the change is still not permanent. The database is isolating this transaction. Suppose your code suddenly catches a mistake, like an invalid email address for the user you just flushed. Because you have not committed, you can issue a rollback. The database discards the transaction entirely, and nothing is saved to the actual tables.

When you are absolutely certain the data is correct, you call commit. A commit finalizes the transaction and saves the data to disk. Calling commit automatically triggers a flush first, so you generally do not need to call flush manually unless you specifically need to read a generated primary key before completing the rest of your logic.

The Session acts as a buffer between your application memory and permanent storage, allowing you to orchestrate complex data changes safely before locking them into reality. Thanks for listening. Take care, everyone.
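Here is a minimal sketch of the object states this episode walks through, reusing the User model shape from episode three; the rollback is contrived purely to show the mechanics.

```python
from sqlalchemy import create_engine
from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column

class Base(DeclarativeBase):
    pass

class User(Base):
    __tablename__ = "user_account"
    id: Mapped[int] = mapped_column(primary_key=True)
    name: Mapped[str]

engine = create_engine("sqlite+pysqlite:///:memory:")
Base.metadata.create_all(engine)

with Session(engine) as session:
    alice = User(name="Alice")  # transient: exists only in Python memory
    session.add(alice)          # pending: queued, no SQL emitted yet
    session.flush()             # INSERT sent inside the open transaction
    print(alice.id)             # persistent: primary key is now populated
    session.rollback()          # nothing was committed, so the INSERT is undone

    session.add(User(name="Alice"))
    session.commit()            # flushes automatically, then finalizes
```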
6

Querying Data: The Modern Select Construct

3m 03s

Fetch your data exactly how you need it. We explore SQLAlchemy 2.0's unified select() construct, filtering with where(), and executing queries with the session.

Hi, this is Alex from DEV STORIES DOT EU. SQLAlchemy, episode 6 of 7. In older versions of SQLAlchemy, querying felt like working with a split brain. You had to memorize one syntax for Core operations and a completely different set of rules for the ORM. Every time you wrote a query, you had to stop and ask yourself if you were fetching raw rows or mapped objects. Version 2.0 erased that line with the modern Select construct.

The concept is straightforward. There is now a single, unified function called select. You pass the entity you want to query directly into it. If you have an ORM class named User, you pass the User class to the select function. This creates a select object representing a query against the underlying user table.

This select object is generative. Calling the select function does not execute anything against the database. Instead, it creates an expression object in memory. When you call a method on this object to add conditions, it returns a fresh select object with those new conditions applied. This allows you to build complex queries dynamically, step by step, passing the object around your application before ever talking to the database.

That covers the base query. How do you filter it? You append the where method to your select object. Inside the where method, you use standard Python operators directly on your mapped class attributes. For example, to find a specific user, you type the User class dot name, followed by two equals signs, and the target string. SQLAlchemy intercepts this. It overloads standard Python operators like the double equals or the greater-than sign. Instead of evaluating the expression to true or false in Python, it translates it into the correct SQL syntax for the database WHERE clause.

Now you have a fully constructed query object. To actually retrieve your data, you hand this object off to a session. You can pass it to the standard execute method on the session, but that returns a result object where every row is essentially a tuple. Even if you queried a single ORM class, your mapped object is trapped inside a one-item tuple, and you have to extract it manually.

Here is the key insight. When you want ORM objects back, use the scalars method on the session instead. You pass your select object to session dot scalars. The session runs the query, automatically unwraps those tuples from the underlying result set, and yields an iterable of your fully populated ORM objects. You immediately get a clean collection of User instances, ready to modify or read.

The unified select construct means the exact same query building blocks you use for ORM objects can be executed directly against low-level Core connections, leaving you with exactly one mental model to maintain across your entire database layer. That is all for this one. Thanks for listening, and keep building!
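A minimal sketch of the select construct and the execute versus scalars distinction described above; the sample rows are illustrative.

```python
from sqlalchemy import create_engine, select
from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column

class Base(DeclarativeBase):
    pass

class User(Base):
    __tablename__ = "user_account"
    id: Mapped[int] = mapped_column(primary_key=True)
    name: Mapped[str]

engine = create_engine("sqlite+pysqlite:///:memory:")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add_all([User(name="Alice"), User(name="Bob")])
    session.commit()

    # Generative: each call returns a new select object; nothing runs yet.
    stmt = select(User).where(User.name == "Alice")

    # execute() returns Row tuples: the User object sits in a one-item tuple.
    row = session.execute(stmt).first()
    print(row[0].name)

    # scalars() unwraps the tuples and yields ORM objects directly.
    for user in session.scalars(stmt):
        print(user.name)
```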
7

Connecting the Dots: Relationships and Joins

3m 41s

Link your tables seamlessly. Learn how to configure relationships, use back_populates, and automatically manage SQL JOINs across related models.

Hi, this is Alex from DEV STORIES DOT EU. SQLAlchemy, episode 7 of 7. Relational databases exist because data connects, but writing complex joins manually every time you need a related record gets old fast. You end up with repetitive query logic scattered across your application. Connecting the Dots: Relationships and Joins solves this by letting the object-relational mapper handle the connections for you.

At the database level, tables connect using foreign keys. If a user has multiple addresses, the address table has a column storing the user ID. That is a standard one-to-many relationship. But when you write Python code, you do not want to extract an ID and write a second query just to find those addresses. You want to access a property on your user object and instantly see a list of address objects. This is exactly what the relationship construct provides. It bridges the gap between database foreign keys and Python object attributes.

To set this up, you need two things. First, you declare the foreign key on the database side. In your Address model, you define a column called user ID and explicitly mark it as a foreign key pointing to the User table. Second, you define the relationship on the Python side. In your User model, you define a property called addresses, using the relationship function, pointing to the Address model. You also define a relationship property on the Address model pointing back to the User.

Here is the key insight. SQLAlchemy needs to know that these two properties represent the two sides of the exact same connection. You tell it this by using the back populates parameter on both relationship definitions. On the User model, the addresses relationship sets back populates to the string name of the user property. On the Address model, the user relationship sets back populates to the string name of the addresses property.

Because of back populates, the ORM keeps your Python objects synchronized in memory. If you take an address object and assign it to a user, that address instantly appears in the user's list of addresses. The framework handles the bookkeeping before you ever commit anything to the database.

Once your objects are linked, you query them. Suppose you execute a select statement to find a user named Alice. The database returns the user row, and you get an object. At this exact moment, Alice's addresses are not loaded. The ORM does not fetch them because you have not asked for them yet. When you finally access the addresses property in your code, perhaps to iterate over the street names, the ORM notices the data is missing. It automatically pauses your program, emits a second select statement to the database to find all addresses with Alice's ID, and populates the list. This is called lazy loading. It is the default behavior because it prevents the application from pulling thousands of related rows into memory unless they are strictly required.

You query a user, you access their properties, and the system seamlessly navigates the foreign keys and issues the necessary queries in the background. The real power of the relationship construct is that it hides the mechanical complexity of joins, letting you navigate an entire database just by interacting with connected Python objects. To master these concepts, explore the official documentation and try setting up your own models hands-on. Feel free to visit devstories dot eu to suggest topics you want covered in future series. That is all for this one. Thanks for listening, and keep building!
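To close the series, here is a minimal sketch of the one-to-many setup from this episode, with back_populates on both sides and the lazy load visible on access; the email column is an illustrative stand-in for the address fields.

```python
from sqlalchemy import ForeignKey, create_engine, select
from sqlalchemy.orm import (DeclarativeBase, Mapped, Session, mapped_column,
                            relationship)

class Base(DeclarativeBase):
    pass

class User(Base):
    __tablename__ = "user_account"
    id: Mapped[int] = mapped_column(primary_key=True)
    name: Mapped[str]
    addresses: Mapped[list["Address"]] = relationship(back_populates="user")

class Address(Base):
    __tablename__ = "address"
    id: Mapped[int] = mapped_column(primary_key=True)
    email: Mapped[str]
    user_id: Mapped[int] = mapped_column(ForeignKey("user_account.id"))
    user: Mapped["User"] = relationship(back_populates="addresses")

engine = create_engine("sqlite+pysqlite:///:memory:")
Base.metadata.create_all(engine)

with Session(engine) as session:
    alice = User(name="Alice")
    # back_populates keeps both sides in sync in memory, before any commit.
    alice.addresses.append(Address(email="alice@example.com"))
    session.add(alice)
    session.commit()

    user = session.scalars(select(User).where(User.name == "Alice")).one()
    # Lazy loading: touching .addresses emits the second SELECT right here.
    for address in user.addresses:
        print(address.email)
```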