Vision Language Models - by Merve Noyan & Andrés Marafioti & Miquel Farré & Orr Zohar (Paperback)
About this item
Highlights
- Vision language models (VLMs) combine computer vision and natural language processing to create powerful systems that can interpret, generate, and respond in multimodal contexts.
- Author(s): Merve Noyan & Andrés Marafioti & Miquel Farré & Orr Zohar
- 300 Pages
- Computers + Internet, Computer Vision & Pattern Recognition
Description
Book Synopsis
Vision language models (VLMs) combine computer vision and natural language processing to create powerful systems that can interpret, generate, and respond in multimodal contexts. Vision Language Models is a hands-on guide to building real-world VLMs using the most up-to-date stack of machine learning tools from Hugging Face, Meta (PyTorch), NVIDIA (CUDA), OpenAI (CLIP), and others, written by leading researchers and practitioners Merve Noyan, Andrés Marafioti, Miquel Farré, and Orr Zohar. From image captioning and document understanding to advanced zero-shot inference and retrieval-augmented generation (RAG), this book covers the full VLM application and development lifecycle.
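As a taste of the zero-shot inference the synopsis mentions, an image can be classified by scoring it against candidate text prompts with CLIP. The sketch below is illustrative rather than taken from the book; the openai/clip-vit-base-patch32 checkpoint, the image file, and the candidate labels are all assumed examples.

```python
# Minimal sketch: zero-shot image classification with CLIP.
# Checkpoint, image path, and labels are illustrative assumptions.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("photo.jpg").convert("RGB")  # hypothetical local image
labels = ["a photo of a cat", "a photo of a dog", "a photo of a car"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds the image's similarity to each text prompt.
probs = outputs.logits_per_image.softmax(dim=-1)
for label, p in zip(labels, probs[0].tolist()):
    print(f"{label}: {p:.3f}")
```

No task-specific training is needed here: the model ranks whatever labels you supply, which is what makes the approach zero-shot.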
Designed for ML engineers, data scientists, and developers, this guide distills cutting-edge VLM research into practical techniques. Readers will learn how to prepare datasets, select the right architectures, fine-tune and deploy models, and apply them to real-world tasks across a range of industries.
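The fine-tuning workflow described above can be previewed with a minimal training loop. This is a hedged sketch, not the book's recipe: the Salesforce/blip-image-captioning-base checkpoint, the tiny in-memory dataset, and the hyperparameters are placeholder assumptions.

```python
# Minimal sketch: fine-tuning a captioning VLM with PyTorch and
# Hugging Face transformers. Checkpoint, image paths, captions, and
# learning rate are illustrative assumptions, not from the book.
import torch
from PIL import Image
from transformers import BlipForConditionalGeneration, BlipProcessor

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")
model.train()

# Hypothetical (image, caption) pairs standing in for a real dataset.
pairs = [("cat.jpg", "a cat sleeping on a couch"),
         ("dog.jpg", "a dog catching a frisbee")]

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
for path, caption in pairs:
    image = Image.open(path).convert("RGB")
    # The processor builds pixel values and the tokenized caption.
    inputs = processor(images=image, text=caption, return_tensors="pt")
    # Using the caption tokens as labels yields a language-modeling loss.
    outputs = model(**inputs, labels=inputs["input_ids"])
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```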
- Explore core model architectures and alignment techniques
- Train and fine-tune VLMs with Hugging Face, PyTorch, and others
- Deploy models for applications like image search and captioning (see the image-search sketch after this list)
- Implement advanced inference strategies, from zero-shot to agentic systems
- Build scalable VLM systems ready for production use
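As referenced in the deployment bullet above, here is a minimal image-search sketch: gallery images and a text query are embedded into CLIP's shared space and ranked by cosine similarity. The checkpoint, file names, and query string are made-up examples, not material from the book.

```python
# Minimal sketch: text-to-image search over CLIP embeddings.
# Checkpoint, gallery paths, and query are illustrative assumptions.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

paths = ["beach.jpg", "city.jpg", "forest.jpg"]  # hypothetical gallery
images = [Image.open(p).convert("RGB") for p in paths]

with torch.no_grad():
    # Embed every gallery image once; the index is reused for all queries.
    image_inputs = processor(images=images, return_tensors="pt")
    image_embeds = model.get_image_features(**image_inputs)
    image_embeds = image_embeds / image_embeds.norm(dim=-1, keepdim=True)

    # Embed the text query into the same space.
    text_inputs = processor(text=["a sunny beach"], return_tensors="pt", padding=True)
    text_embeds = model.get_text_features(**text_inputs)
    text_embeds = text_embeds / text_embeds.norm(dim=-1, keepdim=True)

# Cosine similarity ranks the gallery against the query.
scores = (text_embeds @ image_embeds.T).squeeze(0)
best = scores.argmax().item()
print(f"best match: {paths[best]} (score {scores[best]:.3f})")
```

In a production system the precomputed image embeddings would live in a vector index rather than in memory, but the retrieval logic is the same.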