Project Detail
Words in sign languages are rich in visual meaning: their shapes, movements, and spatial relations depict objects and actions as symbolic metaphors. In one language, for example, the action of pulling words out of the head means 'to ponder'. Yet signs are also encoded into units of form that are articulated in simultaneous constructions, unlike the sequences of consonants and vowels in spoken words. How, then, do signers store richly symbolic words that occur in highly simultaneous forms in their mental lexicons? At present, insight into these mental mappings remains limited, not only at the level of behavioral and neural phenomena but also in terms of linguistic analysis. What, indeed, is morphology in sign languages when even the smallest units of form, like hooked fingers or a location at the throat, can carry meaning? What is the nature of these units? Do they vary across sign languages, or are the iconic roots of form-meaning mappings so powerful that the same ones recur across unrelated sign languages? Answers to these questions are urgently needed to create better sign language resources for teaching and learning, and to advance language technologies.
The SemaSign project proposes a ground-breaking approach to these questions by locating form-meaning correspondences in sign languages through computational means while creating new, empirically robust datasets to reveal how signs are organized in the mental lexicon. Semantic networks are created for sign languages from Germany, Kenya, and Guinea-Bissau on the basis of word-association responses, in which a signer sees a sign from their language and responds with the first three signs that come to mind. This will establish an objective measure of semantic relatedness, enabling computational methods to locate clusters of signs that are unusually close in both form and meaning. Because the language in Guinea-Bissau formed only 15 years ago, we will also discover how lexicons emerge and grow at a very early stage.
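To make the analysis concrete, the sketch below shows in Python how such a pipeline could look: word-association triples are turned into a weighted semantic network, and sign pairs are flagged when they are close both in that network and in a simple form-feature overlap. All data, feature labels (handshape, location, movement), and thresholds here are hypothetical illustrations under assumed encodings, not the project's actual coding scheme or analysis.

```python
# Minimal sketch: semantic network from association responses, then
# flagging sign pairs close in both meaning and form. All values below
# are invented for illustration.
from collections import Counter
from itertools import combinations

# Hypothetical association data: stimulus sign -> one triple of response
# signs per participant (the "first three signs that come to mind").
responses = {
    "PONDER": [["THINK", "HEAD", "IDEA"], ["THINK", "MIND", "DREAM"]],
    "THINK":  [["PONDER", "KNOW", "MIND"], ["IDEA", "PONDER", "HEAD"]],
    "THROAT": [["NECK", "SWALLOW", "VOICE"]],
}

# Hypothetical phonological coding: sign -> set of form features.
form_features = {
    "PONDER": {"handshape:flat-O", "location:head", "movement:outward"},
    "THINK":  {"handshape:index", "location:head", "movement:contact"},
    "THROAT": {"handshape:index", "location:throat", "movement:contact"},
}

def semantic_edges(responses):
    """Weight each undirected sign pair by how often the two signs occur
    together, either as stimulus and response or within one triple."""
    edges = Counter()
    for stimulus, triples in responses.items():
        for triple in triples:
            for response in triple:
                edges[frozenset((stimulus, response))] += 1
            for a, b in combinations(triple, 2):
                edges[frozenset((a, b))] += 1
    return edges

def form_similarity(a, b):
    """Jaccard overlap of form features -- a crude stand-in for a proper
    phonological distance metric."""
    fa, fb = form_features[a], form_features[b]
    return len(fa & fb) / len(fa | fb)

# Report pairs that are unusually close in BOTH meaning and form.
for pair, weight in semantic_edges(responses).most_common():
    if len(pair) != 2:
        continue  # skip self-associations
    a, b = sorted(pair)
    if a in form_features and b in form_features:
        sim = form_similarity(a, b)
        if weight >= 2 and sim >= 0.2:  # placeholder thresholds
            print(f"{a} ~ {b}: co-occurrence={weight}, form similarity={sim:.2f}")
```

On this toy data the script reports PONDER ~ THINK, which co-occur repeatedly in the association network and share a head location; in practice the thresholds would be replaced by a statistical test of whether form similarity within a semantic cluster exceeds chance.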