To cite this document:
From pixels to gestures : learning visual representations for human analysis in color and depth data sequences
Hernandez-Vela, Antonio

Date: 2015
Abstract: The visual analysis of humans from images is an important topic of interest due to its relevance to many computer vision applications like pedestrian detection, monitoring and surveillance, human-computer interaction, e-health or content-based image retrieval, among others. In this dissertation we are interested in learning different visual representations of the human body that are helpful for the visual analysis of humans in images and video sequences. To that end, we analyze both RGB and depth image modalities and address the problem from three different research lines, at different levels of abstraction; from pixels to gestures: human segmentation, human pose estimation and gesture recognition. First, we show how binary segmentation (object vs. background) of the human body in image sequences is helpful to remove all the background clutter present in the scene. The presented method, based on Graph cuts optimization, enforces spatio-temporal consistency of the produced segmentation masks among consecutive frames. Secondly, we present a framework for multi-label segmentation for obtaining much more detailed segmentation masks: instead of just obtaining a binary representation separating the human body from the background, finer segmentation masks can be obtained separating the different body parts. At a higher level of abstraction, we aim for a simpler yet descriptive representation of the human body. Human pose estimation methods usually rely on skeletal models of the human body, formed by segments (or rectangles) that represent the body limbs, appropriately connected following the kinematic constraints of the human body. In practice, such skeletal models must fulfill some constraints in order to allow for efficient inference, while actually limiting the expressiveness of the model.
In order to cope with this, we introduce a top-down approach for predicting the position of the body parts in the model, using a mid-level part representation based on Poselets. Finally, we propose a framework for gesture recognition based on the bag-of-visual-words model. We leverage the benefits of RGB and depth image modalities by combining modality-specific visual vocabularies in a late fusion fashion. A new rotation-variant depth descriptor is presented, yielding better results than other state-of-the-art descriptors. Moreover, spatio-temporal pyramids are used to encode rough spatial and temporal structure. In addition, we present a probabilistic reformulation of Dynamic Time Warping for gesture segmentation in video sequences. A Gaussian-based probabilistic model of a gesture is learnt, implicitly encoding possible deformations in both the spatial and time domains.
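For readers unfamiliar with the alignment technique the abstract mentions, the classical Dynamic Time Warping distance that the thesis' Gaussian-based probabilistic reformulation builds on can be sketched as below. This is a minimal illustrative sketch over 1-D sequences with an absolute-difference cost; the sequences, cost function, and function name are assumptions for illustration, not the descriptor space or formulation used in the dissertation.

```python
# Minimal sketch of classical Dynamic Time Warping (DTW): a dynamic program
# that aligns two sequences of possibly different lengths, tolerating local
# temporal deformations. The thesis reformulates this probabilistically.
import math


def dtw_distance(seq_a, seq_b):
    """Return the minimal DTW alignment cost between two 1-D sequences."""
    n, m = len(seq_a), len(seq_b)
    # dp[i][j] = minimal cost of aligning seq_a[:i] with seq_b[:j]
    dp = [[math.inf] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(seq_a[i - 1] - seq_b[j - 1])
            # Allowed warping steps: deletion, insertion, and match.
            dp[i][j] = cost + min(dp[i - 1][j], dp[i][j - 1], dp[i - 1][j - 1])
    return dp[n][m]


# Identical sequences align with zero cost; warping absorbs local time shifts.
print(dtw_distance([0, 1, 2, 3], [0, 1, 2, 3]))  # 0.0
print(dtw_distance([0, 0, 1], [0, 1, 1]))        # 0.0 (pure time warp)
print(dtw_distance([0, 1, 2], [0, 1, 3]))        # 1.0
```

In the probabilistic variant described in the abstract, the fixed per-frame cost is replaced by a learnt Gaussian model of the gesture, so the alignment score reflects how likely each frame is under the gesture model rather than a raw distance.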
Note: Advisor/s: Sergio Escalera and Stan Sclaroff (Boston University). Date and location of PhD thesis defense: 9 March 2015, Universitat de Barcelona
Rights: This document is subject to a Creative Commons license. Total or partial reproduction and public communication of the work are permitted, provided it is not for commercial purposes and the authorship of the original work is acknowledged. The creation of derivative works is not permitted.
Language: English.
Document: other ; abstract ; publishedVersion
Subject: Computer vision ; Pattern recognition ; Statistical pattern recognition ; Classification and clustering ; Separation and segmentation ; Face and gesture ; Human pose estimation ; Recognition of finger and other body parts
Published in: ELCVIA : Electronic Letters on Computer Vision and Image Analysis, Vol. 14 No. 3 (2015), p. 26-28 (Special Issue on Recent PhD Thesis Dissemination (2014)), ISSN 1577-5097

DOI: 10.5565/rev/elcvia.723

3 p, 422.9 KB

The record appears in the collections:
Articles > Published articles > ELCVIA

Record created on 2015-12-24, last modified on 2017-10-14
