Stanford University researchers have written a program that "learns to decode sounds from different languages in the same way that a baby does," which "helps to shed new light on how people learn to talk," according to Reuters. It supports the theory (though support is a long way from proof) that babies listen to sounds and sort out how the language is put together.
"In the past, people have tried to argue it wasn't possible for any machine to learn these things, and so it had to be hard-wired (in humans)," [Stanford psychology professor James McClelland] said. "Those arguments, in my view, were not particularly well grounded."
I want to know when they have the computer start talking, based on what it learned. Will the first word be "programmama"?
Labels: baby, computers, learning, programs, research, speech, Stanford, talk