On Sunday, National Geographic reported that researchers at the Massachusetts Institute of Technology had successfully tested a computer system that, by itself, deciphered a substantial amount of Ugaritic “in a matter of hours” (MIT press release). The system is based on a statistical model developed by Benjamin Snyder and Regina Barzilay of MIT and Kevin Knight of the University of Southern California. According to MIT:
Snyder and Barzilay don’t suppose that a system like the one they designed with Knight would ever replace human decipherers. “But it is a powerful tool that can aid the human decipherment process,” Barzilay says. Moreover, a variation of it could also help expand the versatility of translation software. Many online translators rely on the analysis of parallel texts to determine word correspondences: They might, for instance, go through the collected works of Voltaire, Balzac, Proust and a host of other writers, in both English and French, looking for consistent mappings between words. “That’s the way statistical translation systems have worked for the last 25 years,” Knight says.
But not all languages have such exhaustively translated literatures: At present, Snyder points out, Google Translate works for only 57 languages. The techniques used in the decipherment system could be adapted to help build lexicons for thousands of other languages. “The technology is very similar,” says Knight, who works on machine translation. “They feed off each other.”
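The parallel-text approach Knight describes can be illustrated with a toy sketch. The snippet below is not the MIT/USC model; it is a minimal, hypothetical example of the underlying idea: given sentence-aligned text in two languages, score word pairs by how often they co-occur (here with the Dice coefficient) and take the highest-scoring pairing as a candidate translation.

```python
from collections import Counter

# Toy sentence-aligned parallel corpus (real systems use millions of pairs).
parallel = [
    ("the house", "la maison"),
    ("the blue house", "la maison bleue"),
    ("the flower", "la fleur"),
]

# Count word frequencies in each language and co-occurrences across
# aligned sentence pairs.
co = Counter()
en_count = Counter()
fr_count = Counter()
for en, fr in parallel:
    en_words, fr_words = en.split(), fr.split()
    for e in en_words:
        en_count[e] += 1
    for f in fr_words:
        fr_count[f] += 1
    for e in en_words:
        for f in fr_words:
            co[(e, f)] += 1

def best_translation(e):
    """Return the French word with the highest Dice score for English word e."""
    def dice(f):
        return 2 * co[(e, f)] / (en_count[e] + fr_count[f])
    candidates = {f for (ew, f) in co if ew == e}
    return max(candidates, key=dice)

print(best_translation("house"))   # -> maison
print(best_translation("flower"))  # -> fleur
```

Real statistical translation systems replace this raw co-occurrence scoring with probabilistic alignment models trained by expectation-maximization, but the input is the same: consistent mappings mined from parallel text, which is exactly what low-resource languages lack.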
See also CNN; HT: Mark Catlin (personal blog).