Language and the Law in Deaf Communities


The human apparatus available for producing and receiving a visual-based language is quite different. For producing visual signals, ASL makes use of at least eight articulators: (1) the dominant hand for signing; (2) the nondominant hand for signing; (3) eye gaze; (4) eyebrow posture; (5) cheek posture; (6) mouth posture; (7) head movement and posture; and (8) shoulder posture. The posture of each of these articulators is quite specific to the intended syntactic function or lexical meaning. At any given moment in the production of an ASL sentence, one or more of these articulators may produce nonmanual signals at the same time the hands are producing manual signals. Thus, unlike spoken languages, which can produce only a linear chain of meaningful units, ASL chains together clusters of meaningful and syntactic units.

For receiving visual signals, ASL uses the eyes. Unlike the ear, which is limited to processing one meaningful sound “bit” at a time, the eyes are powerful receptors capable of processing multiple visual bits and their interrelationship simultaneously.

Because ASL uses multiple articulators to simultaneously produce multiple units of meaning, one might assume that it can express thoughts (or "propositional" content) more quickly than spoken languages. But there is one more piece to this engineering puzzle. Signs produced on the hands (the manual signs) require gross motor movement. Spoken words, on the other hand, are articulated using the fine motor movements of the tongue, jaw, and vocal cords. This fine motor movement for articulating a spoken word is much more rapid than the gross motor movement used for producing signs. The cumulative effect is that speech, although limited to a single channel for communication, uses a channel that is quite rapid. ASL uses slower gross motor movement but simultaneously combines multiple units of meaning. The net result is that spoken language and ASL express propositional content at more or less the same rate.

Syntactic Nonmanual Signals

Nonmanual signals are of two types: lexical and syntactic. See both Liddell (1980) and Bridges and Metzger (1996) for detailed descriptions of lexical and syntactic nonmanual signals in ASL. For the purpose of this study, I have chosen to focus on syntactic nonmanuals, since such signals are a crucial syntactic component of nearly every ASL utterance. In particular, I have focused on the following syntactic signals (for which there is widespread agreement among linguists, native Deaf signers, and other researchers as to form and function):

  1. Affirmation
  2. Negation
  3. Yes/No Question
  4. Wh- Question
  5. Conditionals
  6. Listing
  7. Topicalization
  8. Comparative Structure
  9. Role Shift

(Liddell 1980, 10–63; Valli and Lucas 1992, 277–84; Bridges and Metzger 1996, 13–20)

