As we go through the day, listening, reading, and instant messaging with co-workers, we are constantly decoding sentences and constructing meaning from them. As we build meaning from syntax and semantics, we are also, it seems, calculating how likely it is that what we heard is what the other person meant.
A recent study by Edward Gibson, Leon Bergen, and Steven Piantadosi argues that we mentally compensate for noisy environments and for producer or perceiver errors when we hear or read ambiguous sentences. After reading, “The mother gave the candle the daughter,” we might decide that something more sensible, like “The mother gave the candle to the daughter,” was intended.
In fact, we seem “well designed” for “recovering intended meaning from noisy utterances,” say the authors. To date, most sentence-processing theories have assumed that sentence transmission is error free. In reality, though, if we are young, non-native speakers, stressed, confused, or tired, our language can be rife with errors. We mishear people and misread sentences. People speak to us in noisy, crowded bars. People write us notes in sloppy, near-illegible handwriting.
“Given the prevalence of these noise sources,” the authors write, “it is plausible that language processing mechanisms are well adapted to handling noisy input, and so a complete model of language comprehension must allow for the existence of noise.”
Communication is counted a success, they say, when the meaning gleaned is the same as the meaning intended. We seem mentally optimized for decoding a person’s true meaning, acting, the authors write, as rational Bayesian decoders, able to assimilate new information into our prior knowledge and expectations.
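The Bayesian-decoder idea can be pictured as a toy noisy-channel computation: weigh each candidate intended sentence by its prior plausibility times the chance the channel would turn it into what was actually heard. The probabilities below are invented for illustration and are not from the study.

```python
# Toy noisy-channel decoder: score each candidate intended sentence by
# P(intended | perceived) ∝ P(intended) * P(perceived | intended).
# All numbers are illustrative, not taken from the study.

def decode(candidates):
    # candidates maps each intended sentence to (prior, likelihood of the
    # perceived sentence given that intention).
    scores = {s: prior * lik for s, (prior, lik) in candidates.items()}
    total = sum(scores.values())
    return {s: p / total for s, p in scores.items()}  # normalized posterior

# Perceived: "The mother gave the candle the daughter"
candidates = {
    # Literal reading: no noise needed, but an implausible meaning (low prior).
    "The mother gave the candle the daughter": (0.01, 0.9),
    # Plausible reading: high prior, but requires assuming "to" was dropped.
    "The mother gave the candle to the daughter": (0.5, 0.1),
}

posterior = decode(candidates)
best = max(posterior, key=posterior.get)
print(best)  # here the plausible "to the daughter" reading wins
```

With these toy numbers the plausible interpretation dominates even though it requires positing an error, because its prior advantage outweighs the cost of assuming noise.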
The authors report evidence for each of four predictions they make based on their model. In the first, semantic cues pull us toward plausible interpretations, especially if few structural changes to the sentence are needed. For example, take “The mother gave the candle the daughter” and “The mother gave the candle to the daughter.” We expect mothers to give things to their daughters. To candles? Not so much.
In the second prediction, deletions should be counted as more likely than insertions. While “a deletion only requires a particular word to be randomly selected from a sentence,” write the authors, “an insertion requires its selection from (a subset of) the producer’s vocabulary.” So if recovering a plausible reading means assuming a word was added to the intended sentence, we’re more likely to stick with the literal meaning.
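The asymmetry the authors describe can be made concrete with uniform random choices: a deletion picks one word out of the sentence itself, while an insertion picks one word out of the producer’s whole vocabulary. The vocabulary size here is an arbitrary assumption for illustration.

```python
# Uniform-choice sketch of the deletion/insertion asymmetry.
# Deletion: randomly select which word of the sentence was dropped.
# Insertion: randomly select which vocabulary word was inserted.

sentence = "The mother gave the candle to the daughter".split()
VOCAB_SIZE = 20_000  # arbitrary assumed vocabulary size, for illustration

p_deletion = 1 / len(sentence)   # one choice among the 8 words present
p_insertion = 1 / VOCAB_SIZE     # one choice among the whole vocabulary

print(p_deletion / p_insertion)  # deletions come out thousands of times likelier
```

Under these assumptions a single deletion is on the order of a thousand times more probable than a single insertion, which is why comprehenders should be readier to "restore" a missing word than to discount an extra one.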
As a third line of evidence, the authors predicted comprehenders would be more willing to infer non-literal, more plausible meanings in noisy environments. So, if you hear, “The ball kicked the girl” in a crowded room, you are more likely to assume your companion really meant, “The ball was kicked by the girl,” than if you are listening in a quiet place.
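The noise prediction can be sketched the same way: raise the assumed error rate of the channel and the posterior shifts toward the plausible, non-literal reading. The priors and error rates below are invented for illustration.

```python
# Sketch: a higher assumed noise rate makes the non-literal reading
# more credible. Priors and error rates are invented for illustration.

def posterior_nonliteral(noise_rate):
    # Literal "The ball kicked the girl": implausible prior, no errors needed.
    score_literal = 0.01 * (1 - noise_rate)
    # Non-literal "The ball was kicked by the girl": plausible prior,
    # but requires assuming the channel corrupted the sentence.
    score_nonliteral = 0.5 * noise_rate
    return score_nonliteral / (score_literal + score_nonliteral)

quiet = posterior_nonliteral(0.01)  # quiet room: little reason to posit noise
noisy = posterior_nonliteral(0.3)   # crowded bar: errors are expected
print(quiet, noisy)  # the non-literal reading gains ground as noise rises
```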
Lastly, Gibson and his colleagues find that if we listen to someone who speaks in many nonsensical sentences, we start assuming they are not making errors but intend to speak or write that way. (We may also start assuming they’re crazy.)
This model, in which we rationally integrate noise and expectations about sentence structure and meaning into our interpretations, helps explain some of the neural firing patterns observed in event-related potential research. It may also, Gibson says, help explain how people with agrammatic aphasia can understand language so well even though their own speech is grammatically incorrect.
“More generally,” he says, “we think that noisy-channel models of language can help explain word-order origins and variation in human language.”
Languages like Japanese, Turkish, and Hindi, he believes, retain a more general, basic word order: subject – object – verb. Languages like English and Chinese may have evolved in word order (to subject – verb – object) to minimize confusion when both the subject and object are animate. In future work, the researchers plan to look for similar patterns of rational Bayesian inference across other languages.