Friday, July 25, 2008

Amusia

One highly exciting topic in music cognition is that of amusia and the perception of prosody in amusics. In simplistic terms, amusia refers to tone deafness. "Tone deafness" here does not refer to our subjective use of that phrase in everyday life, such as when applying it to a poor singer (with normal music perception abilities). Its application is much more limited, and is determined objectively by means of tests such as the MBEA (Montreal Battery of Evaluation of Amusia). These tests involve same-versus-different judgment tasks with respect to melody and rhythm. A great deal of significant work in this area has already been done by Isabelle Peretz and her colleagues. Peretz, Ayotte, Hyde, Zatorre, Penhune, and Patel were among the initial researchers who successfully narrowed down the problem of amusia to a pitch-perception-related disorder.

In many cases amusia is congenital (suggesting a genetic basis), and it is estimated to affect about 4% of the population (a surprisingly high figure!). Research seems to indicate that it is not an all-or-nothing disorder. Amusia can impair music perception abilities to a low or a high degree, and may exist along a continuum. Amusia can also be acquired as a result of brain damage. What makes amusia an interesting area of study is the evidence from amusics showing partial impairment of linguistic prosody judgments in some cases, and no such impairment in others. This inconsistency in whether a linguistic prosody disorder accompanies impaired musical abilities has prompted researchers to explore this area in further detail. Current research findings were presented in a talk given by Aniruddh Patel at the Neurosciences and Music conference in Montreal. A poster related to judgment of prosody in tonal languages was presented by Sebastien Nguyen et al. I will talk about some of these research findings in greater detail in my next blog post.

Thursday, July 10, 2008

The Neurosciences and Music III conference - Part 3

Friday, June 27th was a much better day. The day started off with a two and a half hour poster session. Steven Mithen's keynote lecture was refreshing. It was nice to hear from an established archaeologist's perspective about how humans could have evolved as a musical species. The gist of his talk can be summarized by a few lines from his abstract. "New research in the study of hominin fossils, archaeological remains as well as in the fields of neuroscience, developmental psychology and musicology are allowing" insights into music's evolutionary history, suggesting that "communication by variations in pitch and rhythm most likely preceded that of speech, and may indeed have provided the foundation from which spoken language evolved."

Symposium 5 dealt with the topic of emotions and music. Maria Saccuman gave an interesting talk about musicality in infants. Saccuman et al. subjected 18 two-day-old infants to an fMRI study with three kinds of stimuli: Western tonal music excerpts, altered versions of these excerpts, and excerpts containing violations of tonal syntax. Their results indicated the existence of dedicated systems for processing music-like stimuli and sensitivity to syntactic violations of these stimuli. Daniel Levitin presented results from his lab suggesting that adolescents with autistic spectrum disorders (ASD) are more sensitive to the structural features of music than to its expressive (emotive) features. It was interesting to compare this with Levitin's previous work on music perception in individuals with Williams syndrome who, unlike ASD individuals, display a strong sensitivity to music's expressive features.

While there were several more talks on Friday as well as on Saturday, which I am sure were of interest to people in various domains of music cognition, I would like to focus on the one talk that impacted me the most on Saturday, June 28th: Gottfried Schlaug's work, presented as part of Symposium 6, on how singing helps aphasic patients. This was by far the BEST talk of the entire conference for me. His presentation and results were evidence of music's multimodal nature, and of its ability to recover pathways in the homologous language regions of the right hemisphere in aphasic patients with left-hemispheric lesions (left frontal and superior temporal lobes). His talk illustrated how a form of treatment through melodic intonation, Melodic Intonation Therapy (MIT), can considerably improve speech production in aphasic patients. Each patient underwent roughly 75 MIT treatment sessions, which enabled the individual to produce and articulate speech by actively engaging the language regions of the right hemisphere through musical elements such as intonation, rhythmic tapping, and continuous voicing.

Thursday, July 3, 2008

The Neurosciences and Music III conference - Part 2

Thursday, June 26th was a packed day with sixteen talks in total! I made the mistake of attending all sixteen talks, and should have left out a few. So, as one can imagine, my brain was completely saturated by the end of the tenth talk.

The first symposium, which consisted of five talks, was on "Rhythms in the brain: Basic science and clinical perspectives." I found Chen et al.'s work on the importance of the premotor cortex in music production to be the most interesting of the five. Chen et al. subjected participants to various musical rhythm-related tasks that included passive listening, anticipating prior to a motor act, and committing a motor act. Their fMRI results suggested that in addition to using motor areas and the cerebellum for sequencing rhythmic actions, musicians use the prefrontal cortex to a greater extent (their hypothesis: prefrontal activity in musicians is related to their superior ability to organize musical rhythms with respect to working memory). Their data also indicated that the posterior STG (superior temporal gyrus) and the premotor cortex are important mediating nodes for transforming auditory information into motor activity. In addition, their data suggested a direct mapping of auditory information to the motor system through auditory links to the ventral premotor cortex. The dorsal premotor cortex, however, seemed to have indirect links, for processing higher-order information pertaining to musical rhythm.

The second symposium was short, consisting of only two talks on normal and impaired singing. I liked the second talk, by Steven Brown. The first talk, on poor pitch singing, didn't really offer any new insights. By the end of the talk I had specific questions, but unfortunately did not receive satisfactory answers. The questions I had were along the lines of: 1) Where do you draw the line between a poor singer and a tone-deaf (amusic) person? (I believe the Montreal Battery of Evaluation of Amusia could provide answers to this question.) 2) Assuming that a person with normal music perception/recognition abilities is a good musician (plays an instrument at a fairly accomplished level) but a poor singer, what differentiates this person from a poor singer with music cognition deficiencies? Steven Brown's talk was compelling because it answered my second question by narrowing down the reasons for poor pitch singing (in people with normal music cognition abilities) to deficient or anomalous activation in the larynx motor cortex (in addition to other areas).

The third symposium was on musical training and induced cortical plasticity. All the speakers presented research findings generally in support of the notion that musical training and performance induce changes in the brain, within limits. The fourth symposium was on music and memory. Emmanuel Bigand's talk was interesting. His research attempted to find the minimum time necessary for activating musical and linguistic memories. Minimum time could also be interpreted as the minimum amount of information necessary in the time course of music recognition. His research suggested that even a slice of music as short as 50 ms was enough to "bootstrap" memory for music.

The most interesting talk in this session was that of Isabelle Peretz. Peretz et al. attempted to identify the neural correlates of the musical lexicon (the storage areas for familiar melodies, or melodies stored in long-term memory). Participants listened to familiar melodies, unfamiliar melodies, and scrambled melodies. Subtracting the fMRI data for unfamiliar-melody listening from that for familiar-melody listening suggested two things: 1) the supplementary motor area in the left hemisphere might be involved in "inner singing," or emulation; 2) the right superior temporal sulcus may be involved in the retrieval of information from the musical lexicon.

Tuesday, July 1, 2008

The Neurosciences and Music III conference - Part 1

I just got back from the 3-day Neurosciences and Music-III conference in Montreal, June 26th-28th, 2008. Held once every three years, this is THE conference to go to for researchers in music cognition interested in the neuroscience aspect of the domain. The conference was packed with seven symposia and two poster sessions. The most current research in various domains of music cognition and neuroscience was presented in these sessions by some of the field's leading scientists.

The good part about attending a conference like this is the fact that you get to witness quality research directly from its source, rather than merely relying on published papers. In addition, every conference provides us with an opportunity to network with peers and established researchers, thereby sowing the seeds for possible collaborative efforts. Now for the bad part...and this probably applies to most conferences. Too much information is presented in too short a time, hardly leaving the audience with enough time to process it all. On the one hand I hated skipping some of the talks, especially after having traveled all the way to Montreal (and traveling to these conferences isn't cheap for a graduate student), but on the other hand I saw no other solution after having crossed the threshold of information overload. So a conference like this always puts you in the position of having to choose wisely which talks you want to listen to.

I will attempt to summarize my version of the events on all 3 days of the conference in my next few blog posts.