Chapter One of Sign Language Interpreting continued...
Examinations of simultaneous interpretation have focused on how interpreters process simultaneous input and output. Welford (1968) described the interpreter's ability to perform these dual tasks by positing that interpreters actually learn to ignore their own speech in order to focus on the listening task. However, the fact that interpreters initiate repairs, or corrections, within their own utterances indicates that there is attention to their own vocal feedback (Paneth 1957; Gerver 1974a). With regard to the processing of simultaneous input, Pinter (1969) found that subjects with interpreting experience were better able than subjects with no interpreting experience to repeat sentences and answer yes-no questions and Wh-questions that overlapped (or occurred simultaneously) with their responses. Wh-questions are those that in English contain interrogative words beginning with "Wh," such as who, what, when, and where. A study of interpreting students showed that they were better able to recall and comprehend material that had been interpreted than material that had been shadowed (repeated in the same language), indicating that it is possible for interpreters to cognitively handle more than one task at a time (Gerver 1974b).

Split attention or split memory is an information-processing approach to understanding interpreters' ability to engage in multiple tasks (Van Hoof 1962). Three-track memory, a notion proposed by Hromosová (1972), is an attempt to account for the interaction between short-term and long-term memory as an interpreter stores the incoming source message, retrieves linguistic knowledge of both languages, and articulates the translation. Numerous models of the interpreting process focus on such issues as input and memory and follow theories of information processing (Richards 1953; Nida 1964; Kade and Cartellieri 1971; Chernov 1973; Gerver 1976; Moser 1978).

Early research regarding simultaneous interpretation (Paneth 1957) addresses the issue of how interpreters manage information. Paneth discusses interpreters' use of lag time, segmentation of the message, and the use of pauses as a time to catch up to the original speaker's point in the presentation. Lag time refers to the time difference between the interpreter hearing the input and producing the translation and, for this reason, has also been referred to as "ear-voice span" (Treisman 1965; Oléron and Nanpon 1965). Treisman examines both shadowing and simultaneous interpreting among noninterpreters, finding that interpreting requires a greater lag time than shadowing. The length of lag time is determined, in part, by the relative difficulty of the input (Oléron and Nanpon 1965). Interestingly, a study of lag time in English–British Sign Language interpretation found that interpreters used a very short lag time (Llewellyn-Jones 1981). In examining ASL-English interpreters, Cokely (1992) showed that the length of lag time does influence the quality of the output: shorter lag times result in a higher number of miscues. Similarly, in a study of spoken-to-spoken language interpretation, Barik (1975) finds that too short a lag yields errors and false starts, while too long a lag increases omissions. Because the segmentation of information is critical to the accuracy of output, some researchers have focused on how interpreters segment, or organize, information into manageable units.

The manner in which interpreters segment incoming information is inherently linked to the rate at which that information arrives. A study of the effects of input rate on simultaneous interpretation showed that the faster the incoming message, the longer the lag time exhibited by interpreters (Gerver 1969). This study confirmed an earlier estimate of the ideal input rate (Seleskovitch 1965) of approximately 95 to
