Sign Languages in Contact


The Utilization of Gestural Resources

Signers take advantage of commonly used nonlinguistic gestures from the ambient hearing — and perhaps even Deaf — communities. Some of those gestures may become part of the lexicon or grammar of the sign languages as evidenced, in part, by changes in their articulation vis-à-vis the manner in which hearing people use those gestures. However, deaf signers also articulate gestures that, at least on the surface, do not appear to differ from some of those that hearing people use in conjunction with speech. As with iconic devices, such gestural resources — some of which become lexicalized or grammaticized over time and others that remain as gestures — present challenges for the researcher of signed language contact. One challenge for some analyses (e.g., a syntactic account of code switching) is to determine whether a meaningful form is, in some cases, a sign or a gesture.

Various authors have suggested ways in which the gestures — both manual and nonmanual — of hearing people can now be considered part of a sign language. For example, Janzen and Shaffer (2002) maintain that some hand gestures have been grammaticalized as modals in ASL and that some facial gestures (specifically brow raise) have been incorporated as nonmanual signals that provide syntactic information (e.g., topic markers). McClave (2001) has also proposed that nonmanual signals in ASL (e.g., head shifts for direct quotes) have been influenced by the gestures of hearing people. Casey (2003a, 2003b) has shown that the directional gestures and torso movements of nonsigners resemble verb directionality and the torso movement of role shift in signed language. She suggests that directionality in ASL (and other sign languages) originates in nonlinguistic gestures but that the first- versus non-first-person distinction has been grammaticalized; thus, not all directional forms can be considered purely gestural.

Another way in which signers use the common gestures of the hearing communities in which they are situated is through their use of emblems, or emblematic gestures. These meaningful gestures have been discussed by various authors (e.g., Efron 1941; Ekman and Friesen 1969; Kendon 1981, 1988; de Jorio 2000; McNeill 2002), who have described them as culture-specific displays that normally follow standards of form. In some instances they actually substitute for spoken words, but they can accompany speech as well. Pietrosemoli (2001) writes about the emblems (or “cultural signs,” in her terminology) that hearing Venezuelans commonly use and that signers of Venezuelan Sign Language (LSV) also produce. She reports that these emblematic signs appear to reflect either a code switching of emblems with linguistic items or a borrowing of the emblems into LSV. Pietrosemoli suggests that such code switching and lexical borrowing are related to deaf Venezuelans’ interaction with hearing Venezuelans and to the concept of politeness. She employs the Brown and Levinson (1987) model of politeness as a framework to show that some emblematic signs are used intentionally by LSV signers (but are not face threatening), whereas others serve as face-threatening acts. Additionally, she describes how the use of emblematic signs can lead to cultural misunderstandings, owing to the mutual inaccessibility of the languages in question.

In Quinto-Pozos (2002, 2004), I note that emblematic gestures alternate with lexical signs of LSM and ASL in the discourse of some deaf signers who live along the U.S.-Mexico border. For instance, the emblem that I have glossed as “well” (see Figure 1; consisting of palms turned upward and an optional shrug of the shoulders and/or tilt of the head to one side) occurs with high frequency in the contact data, and the emblem was produced by signers of both LSM and ASL.

Specifically, that emblem appeared 236 times within a data set of 6,477 lexical items, which translates into a frequency of approximately 36 per 1,000 signs. By way of comparison, the most frequent nonpronominal lexical item in the ASL corpus described in Morford and MacFarlane (2003) was “name,” with a frequency of 13.4 per 1,000 signs. In the Morford and MacFarlane data set, “well” occurred with a frequency of 7.5 per 1,000 signs, although those authors seem to have considered that item an ASL sign rather than a commonly used gesture. One of my points (Quinto-Pozos 2004) is that emblems such as “well” should be categorized separately from the lexical signs of a sign language because it is not clear whether they are signs of the language (i.e., borrowings) or emblems that have been code-switched. This could be particularly important if linguistic studies were to use emblems for data-elicitation tasks. The interaction of emblems with linguistic structures has received minimal study at best, and at this point it is unclear whether emblems are processed differently from signs.
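The frequency comparison above rests on a simple normalization: raw token counts are scaled to occurrences per 1,000 signs so that corpora of different sizes can be compared directly. A minimal sketch of that calculation (the function name `per_thousand` is mine, not from the source):

```python
def per_thousand(token_count: int, corpus_size: int) -> float:
    """Normalize a raw token count to occurrences per 1,000 signs."""
    return token_count / corpus_size * 1000

# "well" in the contact data: 236 tokens among 6,477 lexical items
freq_well = per_thousand(236, 6477)
print(round(freq_well, 1))  # approximately 36 per 1,000 signs
```

This normalization is what makes the 36-per-1,000 figure for “well” comparable to the 13.4 and 7.5 figures reported for the Morford and MacFarlane (2003) corpus, despite the corpora differing in size.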

Ways in which signers direct or “point” signs — whether to present or hypothetical entities — should also be considered in analyses of signed language contact. According to some accounts (e.g., Liddell 2002, 2003), some signs are directed at physically or conceptually present entities and can be described along both linguistic and gestural parameters. The gestural parameters are presumably understood, at least to some degree, cross-linguistically, and this could affect cross-linguistic communication. Liddell (2002, 75) suggests that “Signers know where things are and direct signs toward them through the ability to point. The handshapes, orientation of the hand(s), the type of movement (straight, arc), are linguistically defined. The directionality of the signs is not linguistically defined. It is variable, depending completely on the actual or conceptualized location of the entity the sign is directed toward.”
