In my research, I seek to understand the relationship between physical language production (what linguists call phonetics) and the meanings we attach to different characteristics of that production. As we use language, humans produce and perceive physical signals: auditory, in the case of spoken languages; visual, in the case of signed languages; and tactile, in the case of the protactile language of DeafBlind communities. These signals allow us to communicate because we map them to meanings, which can be linguistic (the ASL sign TABLE refers to a familiar object), social (American English users associate the word 'y'all' with particular -- especially Southern -- varieties of English), or even emotional or physical (I can tell by someone's voice or signing whether they are tired, upset, or sick). Sometimes we consciously access this process of mapping between signal and meaning. For example, if my friend's hands are shaking as they sign, I could point to that as evidence that they are upset, scared, or angry. But often, we subconsciously map characteristics of the language signal to meaning without even knowing we are doing so. And sometimes we can't describe exactly what it is about someone's language use that tells us a particular piece of information. For example, I might be able to tell from someone's ASL use that they are hearing, or that they are from a particular city, even if I can't explain how I know that.
In my research, I ask which aspects of the language signal humans map to different meanings. For example:
Do iconic mappings between form and meaning affect grammatical processes in ASL?
How do physiological and cognitive factors impact the structure of ASL phonology?