Prompt Flow: The Complete Guide
v1.13 — 2026 Edition. A comprehensive guide to Prompt Flow v1.13, a suite of development tools designed to streamline the end-to-end development cycle of LLM-based AI applications. Learn how to design, test, trace, evaluate, and deploy your AI apps.
Episodes
The Philosophy of Prompt Flow
3m 45s
This episode covers the core design principles behind Prompt Flow and why it prioritizes prompt visibility. Listeners will learn the difference between hiding prompts inside frameworks and exposing them for continuous experimentation and tuning.
Flows and the DAG Architecture
4m 00s
This episode covers the high-level mental model of treating LLM applications as Directed Acyclic Graphs (DAGs). Listeners will learn the difference between Flex flows and DAG flows, and how Standard, Chat, and Evaluation flows serve different purposes.
The Building Blocks: Tools
4m 40s
This episode covers Tools, the fundamental executable units in Prompt Flow. Listeners will learn how to leverage the three core built-in tools: LLM, Python, and Prompt.
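A Python tool is just an annotated function. The sketch below is hypothetical (the function name and logic are illustrative, not from the guide), and the import fallback lets it run even where the promptflow package is not installed:

```python
# Minimal sketch of a custom Python tool. The fallback decorator is a
# no-op stand-in so the example stays runnable without promptflow.
try:
    from promptflow.core import tool
except ImportError:
    def tool(func):
        return func

@tool
def build_search_query(topic: str, max_terms: int = 3) -> str:
    """Turn a free-text topic into a compact search query."""
    terms = topic.lower().split()[:max_terms]
    return " ".join(terms)
```

In a DAG flow such a function would be wired in as a `python` node; in a Flex flow it can simply be called from the entry function.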
Managing Secrets with Connections
4m 49s
This episode covers how Connections securely manage credentials for external services across local and cloud environments. Listeners will learn why hardcoding API keys is dangerous and how Prompt Flow isolates secrets.
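As a sketch, a local Azure OpenAI connection can be declared in a YAML file and registered with `pf connection create -f azure_openai.yaml`; the connection name, endpoint, and API version below are placeholders:

```yaml
# azure_openai.yaml -- hypothetical connection definition
name: open_ai_connection
type: azure_open_ai
api_key: "<to-be-replaced>"   # stored in Prompt Flow's secret store, not in source control
api_base: "https://<your-resource>.openai.azure.com/"
api_version: "2024-02-01"
```

Flows then reference the connection by name, so the key itself never appears in flow code or YAML.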
The Prompty Specification
5m 04s
This episode covers the anatomy of a .prompty file, including its YAML front matter and Jinja template. Listeners will learn how to standardize prompt management into a single, version-controllable markdown asset.
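As a hedged sketch (the file name, model deployment, and input are placeholders), a minimal .prompty file pairs YAML front matter with a Jinja template:

```yaml
---
name: basic_chat
description: Answer a user question in one paragraph.
model:
  api: chat
  configuration:
    type: azure_openai
    azure_deployment: gpt-4o   # placeholder deployment name
  parameters:
    temperature: 0.2
inputs:
  question:
    type: string
---
system:
You are a concise assistant.

user:
{{question}}
```

Because the whole asset is one file, it can be diffed, reviewed, and versioned like any other source file.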
Dynamic Prompty Execution
3m 16s
This episode covers how to execute Prompty files dynamically in Python. Listeners will learn how to override model configurations at runtime and test Prompty files via the CLI.
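A sketch of the runtime-override pattern, assuming the promptflow-core package is installed and a chat.prompty file exists (both hypothetical here); the import is deferred so the function can be defined anywhere:

```python
def run_prompty(source: str, question: str) -> str:
    """Load a .prompty file and execute it with a runtime model override."""
    from promptflow.core import Prompty  # requires the promptflow-core package

    prompty = Prompty.load(
        source=source,
        # Override model parameters declared in the front matter:
        model={"parameters": {"temperature": 0.0}},
    )
    return prompty(question=question)
```

The same file can be smoke-tested from the CLI with `pf flow test --flow chat.prompty --inputs question='...'`.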
Flex Flows: Function-Based Development
3m 41s
This episode covers how to encapsulate LLM application logic using pure Python functions. Listeners will learn how to leverage the @trace decorator for minimal-friction entry points into Flex flows.
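A function-based Flex flow is just a Python function; decorating helpers with @trace makes each call visible as a span. The function names below are hypothetical, and the import fallback keeps the sketch runnable without promptflow-tracing:

```python
try:
    from promptflow.tracing import trace
except ImportError:
    def trace(func):  # no-op stand-in when promptflow-tracing is absent
        return func

@trace
def build_prompt(question: str) -> str:
    """A traced helper; its inputs and output appear as a span."""
    return f"Answer briefly: {question}"

@trace
def my_flow(question: str) -> str:
    """Entry point of a hypothetical function-based Flex flow."""
    prompt = build_prompt(question)
    # An LLM call would go here; the prompt is returned for illustration.
    return prompt
```

A flow.flex.yaml would then point at the function, e.g. `entry: my_flow_file:my_flow` (path hypothetical).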
Flex Flows: Class-Based Development
3m 53s
This episode covers managing state and lifecycle using Python classes in Flex Flows. Listeners will learn how to build complex conversational agents that maintain connections and history.
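A class-based Flex flow is a plain Python class whose `__call__` is the entry point; this hypothetical sketch keeps conversation history as state across turns (no promptflow import is required for the class itself):

```python
class ChatFlow:
    """Hypothetical class-based Flex flow that accumulates conversation state."""

    def __init__(self, system_prompt: str = "You are a helpful assistant."):
        self.system_prompt = system_prompt
        self.history: list[dict] = []  # survives across calls

    def __call__(self, question: str) -> str:
        self.history.append({"role": "user", "content": question})
        turn = (len(self.history) + 1) // 2
        # An LLM call would normally produce this; echoed for illustration.
        answer = f"(turn {turn}) You asked: {question}"
        self.history.append({"role": "assistant", "content": answer})
        return answer
```

In flow.flex.yaml the entry would name the class, e.g. `entry: chat_flow:ChatFlow` (path hypothetical); `__init__` gives a natural place to open connections once per instance.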
DAG Flows: Building from YAML
3m 52s
This episode covers defining logic explicitly using flow.dag.yaml files. Listeners will learn how to connect functions and tools via input/output dependencies and utilize visual editors.
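As a hedged sketch (node names, file paths, deployment, and connection name are all placeholders), a minimal flow.dag.yaml wires a Python node into an LLM node through `${...}` references:

```yaml
# flow.dag.yaml -- hypothetical two-node flow
inputs:
  question:
    type: string
outputs:
  answer:
    type: string
    reference: ${answer_question.output}
nodes:
- name: build_query
  type: python
  source:
    type: code
    path: build_query.py
  inputs:
    topic: ${inputs.question}
- name: answer_question
  type: llm
  source:
    type: code
    path: answer.jinja2
  inputs:
    deployment_name: gpt-4o
    question: ${build_query.output}
  connection: open_ai_connection
  api: chat
```

The `${node.output}` references are what make the graph a DAG: each node declares exactly which upstream outputs it consumes.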
Tracing LLM Interactions
3m 29s
This episode covers tracking and debugging LLM calls using the promptflow-tracing package. Listeners will learn how to implement tracing that follows the OpenTelemetry specification to get deep visibility into execution latency and inputs.
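A sketch of local trace collection, assuming promptflow-tracing is installed (the script contents are hypothetical); the import is deferred so the function defines cleanly anywhere:

```python
def main() -> None:
    """Trace a script locally (requires the promptflow-tracing package)."""
    from promptflow.tracing import start_trace, trace

    start_trace()  # begins OpenTelemetry-based collection for this session

    @trace
    def summarize(text: str) -> str:
        # Latency, inputs, and output of each call are captured as a span.
        return text[:80]

    summarize("Tracing gives deep visibility into execution latency and inputs.")
```

Each traced call then shows up with its timing and payloads in the local trace view.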
Advanced Tracing: LangChain and AutoGen
3m 25s
This episode covers how Prompt Flow tracing integrates with third-party orchestration libraries. Listeners will learn how to gain execution visibility into LangChain and AutoGen scripts without a massive rewrite.
Scaling Up: Batch Runs with Data
4m 16s
This episode covers running flows against large datasets using JSONL files. Listeners will learn how to map inputs to data columns and execute batch processes to validate their prompts against edge cases.
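A sketch of a batch run via the SDK, assuming the promptflow-devkit package is installed; the flow directory, data file, and column names are placeholders, and the import is deferred so the function defines anywhere:

```python
def run_batch(flow_dir: str = ".", data_file: str = "data.jsonl"):
    """Run a flow once per JSONL line (requires promptflow-devkit)."""
    from promptflow.client import PFClient

    pf = PFClient()
    # Each line of data.jsonl might look like: {"question": "What is a DAG?"}
    return pf.run(
        flow=flow_dir,
        data=data_file,
        # Map the flow input "question" to the "question" data column:
        column_mapping={"question": "${data.question}"},
    )
```

The equivalent CLI form is `pf run create --flow . --data data.jsonl --column-mapping question='${data.question}'`.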
The Evaluation Paradigm
3m 39s
This episode covers using evaluation flows to compute metrics on the outputs of a batch run. Listeners will learn how to transition from traditional unit testing to statistical grading of stochastic LLM responses.
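The shift from unit testing to statistical grading can be sketched with a plain-Python metric: instead of asserting one exact output, aggregate a score across a batch of outputs (the sample data below is hypothetical):

```python
def exact_match_accuracy(predictions: list[str], ground_truths: list[str]) -> float:
    """Fraction of predictions that match the reference answer exactly."""
    matches = sum(p.strip().lower() == g.strip().lower()
                  for p, g in zip(predictions, ground_truths))
    return matches / len(predictions) if predictions else 0.0

# A statistic over the batch, not a pass/fail on a single case:
score = exact_match_accuracy(["Paris", "4", "blue"], ["paris", "5", "Blue"])
```

In Prompt Flow, an evaluation flow computes metrics like this over a previous batch run's outputs, referenced through column mappings such as `${run.outputs.answer}` (mapping shown as an illustration).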
Taking Flows to Production
3m 51s
This final episode covers the myriad deployment options available for a completed flow. Listeners will learn how a flow serves as a production-ready artifact that can be deployed to Docker, Kubernetes, or App Services.