Thursday, July 10, 2008

The Neurosciences and Music III conference - Part 3

Friday, June 27th was a much better day. The day started off with a two-and-a-half-hour poster session. Steven Mithen's keynote lecture was refreshing. It was nice to hear an established archaeologist's perspective on how humans could have evolved as a musical species. The gist of his talk can be summarized by a few lines from his abstract: "New research in the study of hominin fossils, archaeological remains as well as in the fields of neuroscience, developmental psychology and musicology are allowing" insights into music's evolutionary history, suggesting that "communication by variations in pitch and rhythm most likely preceded that of speech, and may indeed have provided the foundation from which spoken language evolved."

Symposium 5 dealt with the topic of emotions and music. Maria Saccuman gave an interesting talk about musicality in infants. Saccuman et al. subjected 18 two-day-old infants to an fMRI study with three kinds of stimuli: excerpts of Western tonal music, altered versions of these excerpts, and excerpts containing violations of tonal syntax. Their results indicated the existence of dedicated systems for processing music-like stimuli and a sensitivity to syntactic violations of these stimuli. Daniel Levitin presented results from his lab suggesting that adolescents with autistic spectrum disorders (ASD) are more sensitive to the structural features of music than to its expressive (emotive) features. It was interesting to compare this with Levitin's previous work on music perception in individuals with Williams syndrome, who, unlike ASD individuals, display a strong sensitivity to music's expressive features.

While there were several more talks on Friday and Saturday that I am sure were of interest to people in various domains of music cognition, I would like to focus on the one talk that impacted me the most on Saturday, June 28th: Gottfried Schlaug's work, presented as part of Symposium 6, on how singing helps aphasic patients. This was by far the BEST talk of the entire conference for me. His presentation and results were evidence of music's multimodal nature and its ability to recruit pathways in the homologous language regions of the right hemisphere in aphasic patients with left-hemispheric lesions (left frontal and superior temporal lobes). His talk illustrated how a form of treatment based on melodic intonation, Melodic Intonation Therapy (MIT), can considerably improve speech production in aphasic patients. Each patient underwent roughly 75 MIT treatment sessions, which enabled them to produce and articulate speech by actively engaging the language regions of the right hemisphere through musical elements such as intonation, rhythmic tapping, and continuous voicing.
