Domain Adaptation in Computer Vision Applications, Softcover reprint of the original 1st ed. 2017
Advances in Computer Vision and Pattern Recognition Series

Editor: Gabriela Csurka

Language: English

158.24 €

In Print (Delivery period: 15 days).

Format: Print on demand

344 p. · 15.5x23.5 cm · Hardback
This comprehensive text/reference presents a broad review of diverse domain adaptation (DA) methods for machine learning, with a focus on solutions for visual applications. The book collects together solutions and perspectives proposed by an international selection of pre-eminent experts in the field, addressing not only classical image categorization, but also other computer vision tasks such as detection, segmentation and visual attributes.

Topics and features:
- Surveys the complete field of visual DA, including shallow methods designed for homogeneous and heterogeneous data as well as deep architectures
- Positions the dataset-bias problem within the CNN-based feature arena
- Provides detailed analyses of popular shallow methods that address landmark data selection, kernel embedding, feature alignment (see the sketch after this list), joint feature transformation and classifier adaptation, as well as the case of limited access to the source data
- Discusses more recent deep DA methods, including discrepancy-based adaptation networks and adversarial discriminative DA models
- Addresses domain adaptation problems beyond image categorization, such as Fisher encoding adaptation for vehicle re-identification, semantic segmentation and detection trained on synthetic images, and domain generalization for semantic part detection
- Describes a multi-source domain generalization technique for visual attributes and a unifying framework for multi-domain and multi-task learning

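As a concrete illustration of the feature-alignment idea mentioned above, the sketch below performs a CORAL-style correlation alignment: it re-colors source features so that their second-order statistics match those of the unlabeled target domain. This is a minimal NumPy sketch written for this listing, not code from the book; the `coral_align` helper, the array shapes and the final mean-shift step are assumptions made for the demonstration.

```python
# Illustrative sketch of correlation alignment (CORAL-style) for unsupervised DA.
# Not code from the book: it only assumes source/target features are given as
# NumPy arrays of shape (n_samples, n_features).
import numpy as np


def coral_align(Xs, Xt, eps=1e-6):
    """Re-color source features so their covariance matches the target's."""
    # Center both domains.
    Xs_c = Xs - Xs.mean(axis=0)
    Xt_c = Xt - Xt.mean(axis=0)

    # Regularized covariance matrices of source and target features.
    d = Xs.shape[1]
    Cs = np.cov(Xs_c, rowvar=False) + eps * np.eye(d)
    Ct = np.cov(Xt_c, rowvar=False) + eps * np.eye(d)

    # Matrix square roots via eigendecomposition of the symmetric covariances.
    def sqrtm(C, inverse=False):
        vals, vecs = np.linalg.eigh(C)
        vals = np.clip(vals, eps, None)
        power = -0.5 if inverse else 0.5
        return (vecs * vals**power) @ vecs.T

    # Whiten the source with Cs^{-1/2}, then re-color with Ct^{1/2}.
    Xs_aligned = Xs_c @ sqrtm(Cs, inverse=True) @ sqrtm(Ct)

    # Optional extra step (a variant): shift the aligned source to the target mean.
    return Xs_aligned + Xt.mean(axis=0)


# Usage sketch: align source features, then train any classifier on the aligned
# (labeled) source data and apply it to the unlabeled target data.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    Xs = rng.normal(size=(200, 16))                              # source features
    Xt = rng.normal(size=(150, 16)) @ rng.normal(size=(16, 16))  # shifted target
    Xs_aligned = coral_align(Xs, Xt)
    print(Xs_aligned.shape)  # (200, 16)
```
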
This authoritative volume will be of great interest to a broad audience, from researchers and practitioners to students involved in computer vision, pattern recognition and machine learning.

A Comprehensive Survey on Domain Adaptation for Visual Applications
A Deeper Look at Dataset Bias
Part I: Shallow Domain Adaptation Methods
Geodesic Flow Kernel and Landmarks: Kernel Methods for Unsupervised Domain Adaptation
Unsupervised Domain Adaptation based on Subspace Alignment
Learning Domain Invariant Embeddings by Matching Distributions
Adaptive Transductive Transfer Machines: A Pipeline for Unsupervised Domain Adaptation
What To Do When the Access to the Source Data is Constrained?
Part II: Deep Domain Adaptation Methods
Correlation Alignment for Unsupervised Domain Adaptation
Simultaneous Deep Transfer Across Domains and Tasks
Domain-Adversarial Training of Neural Networks
Part III: Beyond Image Classification
Unsupervised Fisher Vector Adaptation for Re-Identification
Semantic Segmentation of Urban Scenes via Domain Adaptation of SYNTHIA
From Virtual to Real World Visual Perception using Domain Adaptation – The DPM as Example
Generalizing Semantic Part Detectors Across Domains
Part IV: Beyond Domain Adaptation: Unifying Perspectives
A Multi-Source Domain Generalization Approach to Visual Attribute Detection
Unifying Multi-Domain Multi-Task Learning: Tensor and Neural Network Perspectives

Dr. Gabriela Csurka is a Senior Scientist in the Computer Vision Team at Naver Labs Europe, Meylan, France.
- The first book focused on domain adaptation for visual applications
- Provides a comprehensive experimental study, highlighting the strengths and weaknesses of popular methods, and introducing new and more challenging datasets
- Presents a historical overview of research in this area
- Covers tasks such as object detection, image segmentation and video applications, where the need for domain adaptation has rarely been addressed by the community
- Considers real-world, industrial applications, and solutions for cases where existing methods might not be applicable
- Includes supplementary material: sn.pub/extras