Data Pipelines with Apache Airflow, Second Edition - by Julian de Ruiter & Ismael Cabral & Kris Geusebroek & Daniel van der Ende & Bas Harenslak
About this item
Highlights
- Simplify, streamline, and scale your data operations with data pipelines built on Apache Airflow. Data Pipelines with Apache Airflow has empowered thousands of data engineers to build more successful data platforms.
- About the Author: Julian de Ruiter is a Data + AI engineering lead at Xebia Data, with a background in computer and life sciences and a PhD in computational cancer biology.
- 512 Pages
- Computers + Internet, Databases
Description
Book Synopsis
Simplify, streamline, and scale your data operations with data pipelines built on Apache Airflow
Data Pipelines with Apache Airflow has empowered thousands of data engineers to build more successful data platforms. This new second edition has been fully revised for Airflow 3 and covers all the latest features of Apache Airflow, including the TaskFlow API, deferrable operators, and Large Language Model integration. Filled with real-world scenarios and examples, the book carefully guides you from Airflow novice to expert.
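To give a flavor of the TaskFlow API mentioned above, here is a minimal sketch of a three-step pipeline in that style. The DAG name, dates, and data are hypothetical placeholders, not examples from the book:

```python
import pendulum
from airflow.decorators import dag, task

@dag(schedule=None, start_date=pendulum.datetime(2025, 1, 1), catchup=False)
def example_etl():
    @task
    def extract() -> dict:
        # Stand-in for pulling records from a source system.
        return {"records": [1, 2, 3]}

    @task
    def transform(data: dict) -> list:
        # TaskFlow passes return values between tasks via XCom.
        return [r * 2 for r in data["records"]]

    @task
    def load(rows: list) -> None:
        print(f"loading {len(rows)} rows")

    # Calling the tasks wires up the dependency graph: extract -> transform -> load.
    load(transform(extract()))

example_etl()
```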
In Data Pipelines with Apache Airflow, Second Edition you'll learn how to:
- Master the core concepts of Airflow architecture and workflow design
- Schedule data pipelines using the Dataset API and timetables, including complex irregular schedules (see the sketch after this list)
- Develop custom Airflow components for your specific needs
- Implement comprehensive testing strategies for your pipelines
- Apply industry best practices for building and maintaining Airflow workflows
- Deploy and operate Airflow in production environments
- Orchestrate workflows in container-native environments
- Build and deploy Machine Learning and Generative AI models using Airflow
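As an illustration of the Dataset API scheduling referenced in the list above, here is a minimal sketch assuming the Airflow 2.4+ Dataset API (renamed to Assets in Airflow 3); the URI and DAG names are hypothetical:

```python
import pendulum
from airflow.datasets import Dataset
from airflow.decorators import dag, task

# Hypothetical dataset URI; any URI-like string identifies the data.
orders = Dataset("s3://example-bucket/orders.parquet")

@dag(schedule="@daily", start_date=pendulum.datetime(2025, 1, 1), catchup=False)
def produce_orders():
    @task(outlets=[orders])
    def write_orders():
        ...  # completing this task marks the dataset as updated

    write_orders()

# Scheduled on the dataset, not on a clock: runs whenever produce_orders updates it.
@dag(schedule=[orders], start_date=pendulum.datetime(2025, 1, 1), catchup=False)
def consume_orders():
    @task
    def process_orders():
        ...  # react to fresh data

    process_orders()

produce_orders()
consume_orders()
```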
Using real-world scenarios and examples, Data Pipelines with Apache Airflow, Second Edition teaches you how to simplify and automate data pipelines, reduce operational overhead, and smoothly integrate all the technologies in your stack. Part reference and part tutorial, each technique is illustrated with engaging hands-on examples, from training machine learning models for generative AI to optimizing delivery routes.
About the Technology
Apache Airflow provides a unified platform for collecting, consolidating, cleaning, and analyzing data. With its easy-to-use UI, powerful scheduling and monitoring features, plug-and-play options, and flexible Python scripting, Airflow makes it easy to implement secure, consistent pipelines for any data or AI task.
About the book
Data Pipelines with Apache Airflow, Second Edition teaches you how to build, monitor, and maintain effective data workflows. This new edition adds comprehensive coverage of Airflow 3 features, such as event-driven scheduling, dynamic task mapping, DAG versioning, and Airflow's entirely new UI. The numerous examples address common use cases such as ingesting and transforming data and connecting to multiple data sources, along with AI-aware techniques such as building RAG systems.
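Dynamic task mapping, one of the features mentioned above, fans a task out over inputs that are only known at run time. A minimal sketch, with hypothetical file names standing in for a real listing:

```python
import pendulum
from airflow.decorators import dag, task

@dag(schedule=None, start_date=pendulum.datetime(2025, 1, 1), catchup=False)
def mapped_example():
    @task
    def list_files() -> list[str]:
        # Hypothetical listing; in practice this might query a bucket or database.
        return ["a.csv", "b.csv", "c.csv"]

    @task
    def process(path: str) -> None:
        print(f"processing {path}")

    # expand() creates one task instance per file at run time.
    process.expand(path=list_files())

mapped_example()
```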
What's inside
- Deploying data pipelines as Airflow DAGs
- Time- and event-based scheduling strategies
- Integrating with databases, LLMs, and AI models
- Deploying Airflow using Kubernetes
About the reader
For data engineers, machine learning engineers, DevOps engineers, and sysadmins with intermediate Python skills.
About the author
Julian de Ruiter, Ismael Cabral, Kris Geusebroek, Daniel van der Ende, and Bas Harenslak are seasoned data engineers and Airflow experts.
Get a free eBook (PDF or ePub) from Manning as well as access to the online liveBook format (and its AI assistant that will answer your questions in any language) when you purchase the print book.
About the Author
Julian de Ruiter is a Data + AI engineering lead at Xebia Data, with a background in computer and life sciences and a PhD in computational cancer biology. As a consultant at Xebia Data, he enjoys helping clients design and build AI solutions and platforms, as well as the teams that drive them. Through this work, he has gained extensive experience deploying and applying Apache Airflow in production across diverse environments.
Ismael Cabral is a Machine Learning Engineer and Airflow trainer with experience spanning Europe, the US, Mexico, and South America, where he has worked with market-leading companies. He has extensive experience implementing data pipelines and deploying machine learning models in production.
Kris Geusebroek is a data engineering consultant with extensive hands-on Apache Airflow experience at several clients. He is the maintainer of Whirl, an open source repository for local testing with Airflow, where he actively adds new examples covering new functionality and new technologies that integrate with Airflow.
Daniel van der Ende is a Data Engineer who started using Apache Airflow in 2016. Since then, he has worked in many different Airflow environments, both on-premises and in the cloud. He has actively contributed to the Airflow project itself, as well as to related projects such as Astronomer-Cosmos.
Bas Harenslak is a Staff Architect at Astronomer, where he helps customers develop mission-critical data pipelines at large scale using Apache Airflow and the Astro platform. With a background in software engineering and computer science, he enjoys working on software and data as if they were challenging puzzles. He favors working on open source software, is a committer on the Apache Airflow project, and is a co-author of the first edition of Data Pipelines with Apache Airflow.