Another work on the topic of lexical comparisons between sign languages, this time with the focus on two Asian sign languages, is authored by Daisuke Sasaki, who addresses lexical contact between JSL (also referred to as NS, or Nihon Syuwa, in some works but referred to as JSL in this volume) and Taiwan Sign Language (TSL). Historical accounts of the development of TSL cite JSL as one of the sign languages that influenced the development of TSL. Thus, Sasaki compares lexical items with an emphasis on the handshape parameter of articulation, and he further focuses the analytical lens on similarly articulated signs — those that differ only in one phonological parameter (i.e., handshape for Sasaki’s analysis) but share the same meaning. Sasaki finds that, in a number of similarly articulated TSL-JSL sign pairs, the TSL signs contain handshapes that may be more difficult to articulate than those found in the corresponding JSL signs. The author suggests that this is due to conservatism on the part of TSL, which has allowed that language to retain older forms that may have also been a part of JSL but no longer exist in that language because of language-internal changes that tend toward efficiency and ease of articulation.
Contact between Two or More Signed Languages
Lucas and Valli (1992) have briefly discussed several possible outcomes of contact between two signed languages: lexical borrowing; foreigner talk; interference; and the creation of pidgins, creoles, and mixed systems. Although these areas of inquiry have not yielded many published writings, there likely exist unpublished works that provide descriptions of signed language contact. A list of characteristics of signed language that seem to influence such contact is found in the next section, but first I present a review of contact between sign languages in terms of lexical borrowing, code switching, interference, IS as a pidgin, and language attrition and death.
Lucas and Valli (1992) caution that it would be difficult to determine the difference between an instance of lexical borrowing and code switching (or code mixing) in signed language. The issue is that, in spoken language work, borrowings have often been characterized by the integration of the borrowed word into the phonology of the other language, but this integration may not be evident in signed language. The authors maintain that this is because sign language phonologies share many basic components. Thus, in an environment in which two sign languages are frequently used, it might be difficult to definitively determine which phonology (e.g., that of Language A or Language B) the signer is using in some instances. Because of this, the authors claim that using terms like borrowing and code switching may be problematic when looking at signed language contact situations.
Keeping in mind these points about code switching versus borrowing, my dissertation work (Quinto-Pozos 2002) provides evidence that U.S.-Mexico border signers of LSM and ASL engage in code switching. That work and another (Quinto-Pozos in press) describe the sequential use of synonymous signs from ASL and LSM for the purposes of reiteration — much like certain switches described in spoken languages (e.g., see Auer 1998; Eldridge 1996; Pakir 1989). In some cases, the reiterative switches seem to emphasize a particular sign, and at other times, they appear to be used to ensure that an interlocutor comprehends the message. However, there also seem to be examples of reiterative switches that do not place a focus on the switched item.
In addition, I present examples of nonreiterative switches and the complexity of dealing with items that may be articulated similarly in both sign languages and, as a result, are relatively transparent to the interlocutor (Quinto-Pozos in press). Examples are various types of points, so-called classifier constructions, commonly used gestures, and the more mimetic-looking examples often referred to as constructed action. When such meaningful devices exist within the sign stream, it is not clear how to label a particular utterance (e.g., a so-called classifier construction from Language A or Language B, an emblem from the ambient hearing community versus a sign, or the use of constructed action versus a language-specific lexical item).
As a result, investigators of code switching in signed language have faced the task of examining the way in which signers switch not only between the two languages but also between meaningful linguistic versus nonlinguistic elements. The latter (e.g., points, other gestural material) may even co-occur with the linguistic devices of the sign languages, and this must be addressed as well. The data presentation and analysis in my work (Quinto-Pozos 2002, in press) focus primarily on lexical phenomena, but ultimately an in-depth syntactic analysis of code switching between two sign languages is needed to compare this phenomenon across languages in the two modalities. For such phrase-level analyses, a framework for treating pointing, the use of gestures, and the use of mimetic devices in signed language must be employed.
Interference is another possible outcome of contact between two sign languages that Lucas and Valli (1992) have discussed. Interference can be described as the surfacing of the articulatory norms of one sign language in the production of another. Some instances of this phenomenon may be evident in the phonological parameters of sign formation. Lucas and Valli (ibid., 35) refer to this type of interference as follows: “It might be precisely the lack of phonological integration that might signal interference — for example, the involuntary use of a handshape, location, palm orientation, movement, or facial expression from one sign language in the discourse of the other.” Interference may also be evident at other levels of language structure, such as the morphology or syntax of one or both of the signed languages.