Incorporation of visual aids into sign language interpretation in a remote educational setting
Alapuranen, Marjo-Leea (2023)
All rights reserved. This publication is copyrighted. You may download, display and print it for Your own personal use. Commercial use is prohibited.
The permanent address of the publication is
https://urn.fi/URN:NBN:fi:amk-2023110628714
Abstract
This study examines how visual aids are utilised in remote educational interpreting into Finnish Sign Language. As a result of the COVID-19 pandemic, sign language interpreting in Finland had to take a sudden leap into remote environments. However, the practices employed in these settings have not yet been documented. The issue is approached with two research questions: 1) How do interpreters utilise visual aids in an online educational interpreting setting, and 2) What are the differences between interpreters' decisions, and how might those differences be explained?
To answer these questions, two data sets were collected. The primary data set consists of video recordings of seven participants interpreting the same source text; it was analysed using multimodal (inter)action analysis (Norris 2004, 2019). The analysis focused on sequences in which the participants incorporated the visual aid into their interpretation and on sequences in which they pointed to the slide or its contents. The secondary data set consists of retrospective, self-initiated task reviews from the same seven participants, which were analysed through content analysis to identify preliminary themes.
This study shows that, during interpreting, visual aids can be incorporated into the interpretation through different modes, primarily body shift, classifier constructions, buoy constructions, and pointing. These modes create chaining sequences (Bagga-Gupta 2000, 2004) through which parts of the visual source text are included in the interpretation. The findings show that the participants make use of the remote environment's affordances, especially by manipulating the mode of pointing. There were both similarities within the group and variation between the participants, and individual preferences could also be inferred from the interpreting task data.
The differences between the participants are explained by features of the spoken and visual source texts, decisions made during preparation, the participants' conceptualisations of the discussed topics, their familiarity with the remote environment, and their previous experiences and historical bodies (Scollon & Scollon 2004).
The results show that, even though meaning is constructed and communicated multimodally in online and offline environments, the remote environment has distinctive features. Practitioners and trainers need to be aware of these features to be able to adapt their working practices accordingly.