Traditionally, the quality of machine translation (MT) output was at best sufficient to serve as an informative translation for users without any knowledge of the source language, but not for the purposes of professional translation. Although still restricted to scenarios that depend on the language pair and text type, MT quality has improved to the point that it has found its way into professional translation workflows, especially where software localization and technical documentation are concerned. With this development in mind, the research questions of our study focus on the empirical investigation of the efficiency of post-editing and on typical revision strategies and processes. We present an empirical comparison of three translation tasks using Translog-II and Tobii eyetracking, in which 24 translators translated six English texts into German: two texts were translated from scratch; two others were pre-translated with Google MT and then post-edited by the translator; and, in a third task, two Google pre-translated texts were post-edited without the translator being able to consult the source text. We use keylogging, eyetracking, and retrospective interviews to trace the different (un)conscious cognitive processes and problems involved in each task. On the basis of this multi-method approach, we compare post-editing strategies to translation strategies. Furthermore, processing time as well as cognitive effort during translation are contrasted and discussed.