Split screen exploration in sign language users: an eye-tracking study
Soler Vilageliu, Olga (Universitat Autònoma de Barcelona. Transmedia Catalonia Research Group)
Bosch Baliarda, Marta (Universitat Autònoma de Barcelona. Transmedia Catalonia Research Group)
Orero, Pilar (Universitat Autònoma de Barcelona. Transmedia Catalonia Research Group)

Date: 2018
Abstract: In this research we applied eye-tracking measures to examine how sign-language users explore split TV screens. We used a sign-language-translated documentary in which both visual and linguistic information is relevant. Four screen combinations resulted from crossing the position of the sign language interpreter (SLI) sub-screen (Left/Right) with its size (Small, 1/5 of the screen width; Medium, 1/4 of the screen width), yielding the compositions Small/Right, Small/Left, Medium/Right and Medium/Left (Figure 1). Participants were 28 deaf signers aged 17 to 74. The documentary "Joining the Dots" (Romero-Fresco, 2012) was translated into Catalan Sign Language and edited into four clips displaying the four combinations. All participants watched all contents in different combinations following a Latin Square design while their eye movements were recorded with a Tobii eye tracker. We defined two areas of interest: the SLI sub-screen and the documentary sub-screen. After watching each clip, participants completed two questionnaires assessing their recall of linguistic content (the SL interpretation) and visual content (the documentary's visual information). We analysed the effects of the factors Size, Position and Area on the measures Fixation Count, Fixation Duration and Total Visit Duration using a GLM with repeated measures. Area was the only factor with significant effects: the SLI sub-screen received longer visits, longer fixations and more fixations. Position and size were not relevant for sign language users, whose exploration pattern consists mainly of focusing on the SLI sub-screen with shorter gazes to the rest of the screen. We ran paired-samples t-tests to check whether linguistic and visual recall differed for each screen configuration. Linguistic recall was best for the Small/Left configuration. Visual recall did not differ significantly from linguistic recall, even though users made longer visits with longer fixations on the SLI sub-screen; deaf sign-language users probably collect visual information parafoveally. This interpretation is supported by perceptual studies indicating that parafoveal vision is enhanced in sign language users (Dye, Seymour & Hauser, 2016; Siple, 1978). A tentative conclusion from our results is that sign-language users adapt swiftly to different screen configurations. Further studies could test other screen designs to improve usability and provide guidance to content producers.
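As an illustration of the analysis pipeline described in the abstract, the sketch below approximates the SPSS-style "GLM with Repeated Measures" with a repeated-measures ANOVA (statsmodels' AnovaRM) and the recall comparison with a paired-samples t-test (scipy). This is a minimal sketch, not the authors' actual analysis script: the data frame, column names and synthetic values are hypothetical and only mirror the 28-participant, Size x Position x Area within-subjects design reported above.

# Hypothetical sketch of the two analyses described in the abstract.
# All variable names and the synthetic data are illustrative only.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
participants = [f"P{i:02d}" for i in range(1, 29)]  # 28 deaf signers

# One row per participant x Size x Position x Area cell, with a synthetic
# dependent variable (e.g. Total Visit Duration, in seconds).
rows = []
for p in participants:
    for size in ("small", "medium"):
        for position in ("left", "right"):
            for area in ("SLI", "documentary"):
                base = 40.0 if area == "SLI" else 20.0  # SLI attracts more gaze
                rows.append({
                    "participant": p, "size": size,
                    "position": position, "area": area,
                    "total_visit_duration": base + rng.normal(0, 5),
                })
df = pd.DataFrame(rows)

# Repeated-measures ANOVA with the three within-subject factors
# (the repeated-measures counterpart of an SPSS "GLM Repeated Measures" run).
anova = AnovaRM(df, depvar="total_visit_duration", subject="participant",
                within=["size", "position", "area"]).fit()
print(anova)

# Paired-samples t-test: linguistic vs. visual recall for one configuration
# (synthetic questionnaire scores, one pair per participant).
linguistic = rng.normal(7.5, 1.0, size=len(participants))
visual = rng.normal(7.0, 1.2, size=len(participants))
t_stat, p_val = ttest_rel(linguistic, visual)
print(f"t = {t_stat:.2f}, p = {p_val:.3f}")

In a design like this one, the same repeated-measures model would be fitted separately for each eye-tracking measure (Fixation Count, Fixation Duration, Total Visit Duration), and one paired t-test would be run per screen configuration on the recall scores.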
Grants: Agència de Gestió d'Ajuts Universitaris i de Recerca 2017/SGR113
Ministerio de Economía y Competitividad FFI2015-64038-P
Rights: This document is subject to a Creative Commons license. Total or partial reproduction, public communication of the work and the creation of derivative works are permitted, provided it is not for commercial purposes and the results are distributed under the same license as the original work. The authorship of the original work must be acknowledged. Creative Commons
Language: English
Document: Conference poster
Published in: Scandinavian Workshop on Eye Tracking (SWAET2018). Copenhagen, Denmark, 2018


