3.12.11

Neural bases of language

PARIS--(BUSINESS WIRE)--Dec 2, 2011 - On November 29th, 2011, the international jury under the presidency of Prof. Albert Galaburda (Harvard Medical School, Boston, USA) awarded the 20th Jean-Louis Signoret Neuropsychology Prize of the Fondation Ipsen (Paris:IPN) (€20,000) to Prof. Patricia K. Kuhl (University of Washington, Seattle, USA) for work that has played a major role in the understanding of language acquisition and the neural bases of language.

Infants are born with an innate ability to identify every sound of every language. By the end of the first year of life, however, infants show a perceptual narrowing of their language skills: their ability to discern differences in the sounds that make up words in the world's languages shrinks, and nonnative sounds are no longer differentiated.

This developmental transition is driven by two interacting factors: the child's computational skills and the child's social brain. Computational skills allow rapid statistical learning, and social interaction is necessary for this computational learning process to occur.
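The statistical learning mentioned here is often illustrated with transitional probabilities between syllables, as in classic infant segmentation studies: within-"word" syllable pairs recur reliably, while pairs that straddle a word boundary do not. A minimal sketch in Python, with made-up syllable "words" (bi-da-ku, pa-do-ti):

```python
from collections import Counter

def transition_probs(syllables):
    """Estimate P(next syllable | current syllable) from adjacent pairs."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# A toy continuous stream: the invented "words" bi-da-ku and pa-do-ti,
# concatenated without pauses, as in statistical-learning experiments.
stream = ["bi", "da", "ku", "pa", "do", "ti", "bi", "da", "ku",
          "bi", "da", "ku", "pa", "do", "ti", "pa", "do", "ti"]

probs = transition_probs(stream)
print(probs[("bi", "da")])  # within-word transition: 1.0
print(probs[("ku", "pa")])  # across a word boundary: lower
```

The drop in transitional probability at "ku"→"pa" is the statistical cue to a word boundary; the syllables and stream here are invented for illustration, not taken from Kuhl's studies.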

Neuroimaging using magnetoencephalography (MEG) may help explain the neuroplasticity of the child's mind versus the more expert (but less open) mind of the adult, and account for the "critical period" for language.

Kuhl also studied the early brain development of bilingual children. By 10 to 12 months, bilingual infants do not show the perceptual narrowing seen in monolingual children, a further piece of evidence that experience shapes the brain.

http://www.pharmalive.com/News/Index.cfm?articleid=816351

Sam: Research in Le Beaumont reveals that language acquisition involves three stages:
1. voice recognition (birth to 9th month) and production (13th to 14th month);
2. vocabulary spurt (15th to 20th month);
3. pattern recognition (24th to 36th month).

The first stage (birth to 9th month) involves the development of a software program for voice recognition in the brain. The person can only recognize voices found in the voice recognition system. A foreign voice not found in the system is diverted to the nearest sound in the system. This distortion of sounds, arising from voice recognition, causes accents. [Share your views with sam@beaumont.hk]
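Sam's "diverted to the nearest sound" idea can be sketched as a nearest-neighbour lookup in a phonetic feature space. Everything below is a toy assumption for illustration: the three-sound native inventory, the three-number feature coordinates, and the foreign sound's coordinates are all made up.

```python
import math

# Hypothetical feature coordinates (e.g. voicing, place, manner),
# purely illustrative; real phonetic feature spaces are far richer.
native_inventory = {
    "l": (1.0, 0.40, 0.60),
    "s": (0.0, 0.50, 0.20),
    "b": (1.0, 0.10, 0.90),
}

def nearest_native(foreign_features):
    """Map an unfamiliar sound to the closest sound in the native system."""
    return min(native_inventory,
               key=lambda p: math.dist(native_inventory[p], foreign_features))

# An "r"-like foreign sound (made-up coordinates) gets assimilated to
# whichever native phoneme sits nearest in this toy feature space.
print(nearest_native((1.0, 0.45, 0.55)))  # "l" for these numbers
```

With these coordinates the foreign sound lands on "l", mimicking the accent-producing substitution the comment describes: the listener hears not the foreign sound itself but its nearest match in the native system.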
