Web of Science: 3 citations, Scopus: 2 citations, Google Scholar: citations,
Co-Training for Deep Object Detection: Comparing Single-Modal and Multi-Modal Approaches
Gomez Zurita, Jose Luis (Universitat Autònoma de Barcelona. Departament de Ciències de la Computació)
Villalonga, Gabriel (Centre de Visió per Computador (Bellaterra, Catalunya))
López Peña, Antonio M. (Universitat Autònoma de Barcelona. Departament de Ciències de la Computació)

Date: 2021
Abstract: Top-performing computer vision models are powered by convolutional neural networks (CNNs). Training an accurate CNN highly depends on both the raw sensor data and their associated ground truth (GT). Collecting such GT is usually done through human labeling, which is time-consuming and does not scale as we wish. This data-labeling bottleneck may be intensified due to domain shifts among image sensors, which could force per-sensor data labeling. In this paper, we focus on the use of co-training, a semi-supervised learning (SSL) method, for obtaining self-labeled object bounding boxes (BBs), i.e., the GT to train deep object detectors. In particular, we assess the goodness of multi-modal co-training by relying on two different views of an image, namely, appearance (RGB) and estimated depth (D). Moreover, we compare appearance-based single-modal co-training with multi-modal co-training. Our results suggest that in a standard SSL setting (no domain shift, a few human-labeled data) and under virtual-to-real domain shift (many virtual-world labeled data, no human-labeled data), multi-modal co-training outperforms single-modal. In the latter case, by performing GAN-based domain translation both co-training modalities are on par, at least when using an off-the-shelf depth estimation model not specifically trained on the translated images.
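The abstract's core mechanism, two models trained on different views of the same samples (appearance and depth) that pseudo-label unlabeled data for each other, can be sketched with a toy example. Everything below is illustrative: the 1-D nearest-centroid classifier stands in for the paper's deep object detectors, and all names are hypothetical, not the authors' implementation.

```python
from statistics import mean

class CentroidClassifier:
    """Toy 1-D nearest-centroid binary classifier with a confidence score."""
    def fit(self, xs, ys):
        self.c0 = mean(x for x, y in zip(xs, ys) if y == 0)
        self.c1 = mean(x for x, y in zip(xs, ys) if y == 1)
        return self

    def predict(self, x):
        d0, d1 = abs(x - self.c0), abs(x - self.c1)
        label = 0 if d0 < d1 else 1
        conf = abs(d0 - d1) / (d0 + d1 + 1e-9)  # 0 = ambiguous, 1 = certain
        return label, conf

def co_train(view_a, view_b, labels, rounds=3, top_k=1):
    """Per round, each view's model pseudo-labels its most confident
    unlabeled samples and hands those labels to the OTHER view's model."""
    train_a = {i: y for i, y in enumerate(labels) if y is not None}
    train_b = dict(train_a)
    pool = {i for i, y in enumerate(labels) if y is None}
    for _ in range(rounds):
        if not pool:
            break
        model_a = CentroidClassifier().fit([view_a[i] for i in train_a],
                                           list(train_a.values()))
        model_b = CentroidClassifier().fit([view_b[i] for i in train_b],
                                           list(train_b.values()))
        # model A teaches B's training set, and vice versa
        for model, view, target in ((model_a, view_a, train_b),
                                    (model_b, view_b, train_a)):
            ranked = sorted(pool, key=lambda i: (-model.predict(view[i])[1], i))
            for i in ranked[:top_k]:
                target[i] = model.predict(view[i])[0]
                pool.discard(i)
    return train_a, train_b

# Toy data: two correlated "views" of six samples; the last two are unlabeled.
view_a = [0.00, 0.10, 1.00, 1.10, 0.05, 1.05]  # e.g., an appearance feature
view_b = [0.20, 0.00, 0.90, 1.20, 0.10, 1.00]  # e.g., a depth feature
labels = [0, 0, 1, 1, None, None]
train_a, train_b = co_train(view_a, view_b, labels)
```

After co-training, each model's training set has been extended with pseudo-labels supplied by the other view, which is the key difference from single-modal co-training, where both models see the same modality.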
Funding: Agencia Estatal de Investigación TIN2017-88709-R
Ministerio de Economía y Competitividad FPU16/04131
Rights: This document is subject to a Creative Commons use license. Full or partial reproduction, distribution, public communication of the work, and the creation of derivative works are permitted, provided the use is non-commercial and the authorship of the original work is acknowledged. Creative Commons
Language: English
Document: Article ; research ; Published version
Subject: Co-training ; Multi-modality ; Vision-based object detection ; ADAS ; Self-driving
Published in: Sensors (Basel, Switzerland), Vol. 21, No. 9 (May 2021), art. 3185, ISSN 1424-8220

DOI: 10.3390/s21093185
PMID: 34064323


21 p, 5.1 MB

This record appears in the following collections:
Articles > Research articles
Articles > Published articles

Record created 2022-02-20, last modified 2023-05-28


