Prosodic Markers and Utterance Boundaries in American Sign Language Interpretation



One of the challenges in analyzing the phonetic structure of signed languages has been that production is highly variable across signers. To date, this variation in expression has received fairly little attention in the literature (for exceptions, see Crasborn, 2001; Wilbur, 1990; Wilbur & Nolen, 1986; Wilcox, 1992), but the scant phonetic description of signed languages is due largely to the lack of tools for accurately measuring the articulation of signs. Video recordings have typically been used to create transcriptions of signed languages rather than to analyze their phonetic structure.

To resolve this issue, researchers devised transcription systems that employed detailed notations of physical elements, pictures, diagrams, and glosses in the local written language (for a complete description, see Hoiting & Slobin, 2001). Many of these systems are still in use, but they either lack a standard approach to transcription or cannot be represented on a standard keyboard. Additionally, while certain lexical items can be readily transcribed, the non-segmental articulations of signed languages are more problematic. Thus, the transcription issues encountered in the study of prosody in spoken languages parallel those encountered in transcribing signed languages.

New technologies are providing the means to study signed language phonetic systems. ELAN is a linguistic annotation tool designed for the creation of text annotations for audio and video files (Crasborn, van der Kooij, Broeder, & Brugman, 2004). ELAN’s first application was in the measurement of gesture that co-occurred with speech; however, in recent years it has increasingly been used in the study of signed languages (Broeder, Brugman, Oostdijk, & Wittenburg, 2004; Brugman, Crasborn, & Russell, 2004). Another tool, SignStream, was developed for the analysis of signed language captured on video (Neidle, 2002). SignStream provides an environment for manipulating digital video and linking specific frame sequences to simultaneously occurring linguistic events encoded in a multi-level transcription tier. Programs such as ELAN and SignStream greatly simplify the transcription process for signed languages and increase the accuracy of the resulting transcriptions. In this way, software is beginning to provide a means of conducting phonetic analysis on signed languages.

The study of language across two distinct modalities provides a rich opportunity to investigate modality effects on grammar as well as to identify linguistic characteristics that are universal. There have been a variety of approaches to the examination of signed language prosody; this literature review focuses on two aspects of research that are particularly relevant to the study reported here: (1) studies of the specific morphosyntactic functions of individual prosodic markers, and (2) models of signed language structure based on the theory of prosodic phonology.

Signed Language Prosody and Morphosyntactic Structure

As discussed in the earlier section on spoken languages, prosodic structure is distinct from, but associated with, syntactic structure. The same is true for signed languages. Both spoken and signed languages use prosodic structure to emphasize selected constituents and to communicate the discourse function of the sentence (Sandler & Lillo-Martin, 2001). This section provides an overview of research on the association between prosody and morphosyntax in signed languages and assembles evidence that prosodic structure is an integral part of their linguistic systems.

Signed languages are frequently portrayed as manual languages, that is, produced solely by the signer’s hands. The facial expressions used during the production of signed languages were initially thought to convey the signer’s emotions, and little more. In the past several decades, however, linguistic research has demonstrated that nonmanual components, produced by the signer’s eyes, face, head, and torso, contribute to marking syntactic structure across a variety of signed languages (Baker-Shenk, 1985; Bergman, 1983; Lawson, 1983; Sorenson, 1979; Vogt-Svendsen, 1981; Woll, 1981).

It has been well established that particular facial expressions in ASL span syntactic constituents, such as yes-no questions, wh-questions, topicalized elements, and relative clauses (e.g., Aarons, Bahan, Kegl, & Neidle, 1992; Baker-Shenk, 1983; Coulter, 1979; Liddell, 1978, 1980; Petronio & Lillo-Martin, 1997). Further, Israeli Sign Language (ISL) has been shown to use facial expressions that correspond in many ways to the tonal melodies of spoken language (Nespor & Sandler, 1999).

The distinction between facial behaviors that convey affect and those that mark grammatical structures has been supported by brain studies indicating that affective expressions appear to be primarily mediated by the right hemisphere, while linguistic expressions involve left-hemisphere mediation (Corina, Bellugi, & Reilly, 1999). Affective facial expressions are random and optional, but linguistic facial expressions are grammaticized, fixed, and systematic (Sandler & Lillo-Martin, 2001).

In a study of linguistic structure, Liddell (1978) pointed out that relative clauses are grammatically marked in ASL, not by function words such as that, but by nonmanual grammatical markers consisting of raised brows, a backward head tilt, and a tensed upper lip. Differences in head movement were found to distinguish the signals for yes-no questions and topics (Liddell, 1980). It was later found that the signals for yes-no questions and wh-rhetorical questions differ in head movement and movement of the upper eyelid (Baker-Shenk, 1983).

