Sunday, September 14, 2008
The functional similarities and differences between music and language have been topics of discussion among researchers for quite some time. The similarities between the two seem obvious. Both have a hierarchical and temporal structure. Both have a syntax (although the perception of syntax is less explicit in music than in language, and varies from listener to listener depending on training and exposure). Time is critical to the perception and recognition of both spoken words and melodies.
Given these similarities, cognitive researchers have been tempted to believe that the perception and recognition of melodies and of spoken words involve the same functional processes: working memory to temporarily store sequential information and consolidate it into a higher-order percept, long-term memory to influence perception through top-down contextual feedback, and a mechanism that integrates these two processes.
In addition, music and language share the same neural correlates for subcortical processing (from the outer ear up to the thalamus and, in some cases, up to the primary auditory cortex). They also share the same neural correlates for much of cortical processing, including areas such as the primary and secondary auditory cortices and the perisylvian language areas (Broca's area and Wernicke's area). Moreover, violations of musical syntax (in both melodic and harmonic contexts) activate areas associated with syntax violations in language.
Tuesday, August 5, 2008
Amusia...continued
Previous studies of British and French-Canadian amusics showed that about 30% of amusics have difficulty discriminating statements from questions when the only cue is the intonation of the final pitch glide, which rises for a question and falls for a statement.
Patel et al. re-examined prosodic judgments in British and French-Canadian amusics by asking participants to make same vs. different judgments for two kinds of stimuli. The first set consisted of statement-question pairs, in which the two sentences were identical except for the final word, where the intonation changed depending on whether the sentence was a statement or a question: a question had a rising pitch glide on the last word (e.g. He likes to drive fast cars?), whereas a statement had a slight dip (e.g. He likes to drive fast cars.). The second set consisted of focus-shift pairs, in which the two sentences contained the same words but differed in the location of the salient pitch accent (e.g. the sentence "Go in front of the bank, I said" spoken with the accent on different words).
Their results indicated that some amusics had difficulty discriminating statement-question pairs. However, these same amusics had no difficulty discriminating the two sentences in a focus-shift pair. This suggested that while amusics can detect pitch movements in speech, they have difficulty detecting the direction of those movements (given their difficulty in judging whether the pitch glide on the final word of a statement-question pair rises or falls).
These results are also consistent with a previous study by Foxton et al., in which British amusics were asked to judge the direction of a pure-tone pitch glide. Foxton et al. manipulated the glide rate by keeping the glide duration constant and increasing or decreasing the size of the glide. They found that amusics had difficulty when the pitch glides were smaller (i.e. when the glide rate was lower). On average, the threshold for accurate direction judgment in amusics was about 2.2 semitones, compared to 0.1 semitones in controls. Expressed as glide rates, these numbers correspond to 22 semitones/second vs. 1 semitone/second (implying a fixed glide duration of about 100 ms). These results offer various avenues for experimental and computational exploration.
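As a quick sanity check on those numbers, here is a minimal Python sketch of the size-to-rate conversion; the 100 ms glide duration is my own inference from the figures above, not a value reported in the study:

# Convert glide size (semitones) to glide rate (semitones/second),
# assuming the fixed glide duration implied by the numbers above.
GLIDE_DURATION_S = 0.1  # assumed: 2.2 st / 22 st per s = 0.1 s (100 ms)

def glide_rate(glide_size_semitones, duration_s=GLIDE_DURATION_S):
    return glide_size_semitones / duration_s

amusic_threshold_st = 2.2    # approximate direction-judgment threshold, amusics
control_threshold_st = 0.1   # approximate threshold, controls

print(glide_rate(amusic_threshold_st))   # ~22 semitones/second
print(glide_rate(control_threshold_st))  # ~1 semitone/second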
Currently, work is being done on amusic prosody judgments in tonal languages such as Mandarin Chinese. It will be extremely interesting to see how work in this area progresses.
Friday, July 25, 2008
Amusia
One highly exciting topic in music cognition is amusia and the perception of prosody in amusics. In simple terms, amusia refers to tone deafness. "Tone deafness" here does not refer to the subjective, everyday use of that phrase, such as when applying it to a poor singer (with normal music perception abilities). Its application is much more limited and is determined objectively by means of tests such as the MBEA (Montreal Battery of Evaluation of Amusia), which involve same vs. different judgment tasks on melody and rhythm. A lot of significant work in this area has been done by Isabelle Peretz and her colleagues. Peretz, Ayotte, Hyde, Zatorre, Penhune, and Patel were among the first researchers to narrow the problem of amusia down to a pitch-perception-related disorder.
In many cases amusia is congenital (suggesting a genetic basis), and it is estimated to affect about 4% of the population (a surprisingly high figure!). Research indicates that it is not an all-or-nothing disorder: amusia can impair music perception to a greater or lesser degree and may exist along a continuum. Amusia can also be acquired through brain damage. What makes amusia an interesting area of study is that some amusics show partial impairment of linguistic prosody judgments while others show none. This inconsistency in whether a linguistic prosody deficit accompanies impaired musical abilities has prompted researchers to explore the area in further detail. Current findings were presented in a talk by Aniruddh Patel at the Neurosciences and Music conference in Montreal, and a poster on prosody judgments in tonal languages was presented by Sebastien Nguyen et al. I will discuss some of these findings in greater detail in my next post.
Thursday, July 10, 2008
The Neurosciences and Music III conference - Part 3
Friday, June 27th was a much better day. The day started off with a two-and-a-half-hour poster session. Steven Mithen's keynote lecture was refreshing; it was nice to hear an established archaeologist's perspective on how humans could have evolved as a musical species. The gist of his talk can be summarized by a few lines from his abstract: "New research in the study of hominin fossils, archaeological remains as well as in the fields of neuroscience, developmental psychology and musicology are allowing" insights into music's evolutionary history, suggesting that "communication by variations in pitch and rhythm most likely preceded that of speech, and may indeed have provided the foundation from which spoken language evolved."
Symposium 5 dealt with the topic of emotions and music. Maria Saccuman gave an interesting talk about musicality in infants. Saccuman et al. ran an fMRI study on 18 two-day-old infants using three kinds of stimuli: excerpts of Western tonal music, altered versions of these excerpts, and excerpts containing violations of tonal syntax. Their results indicated the existence of dedicated systems for processing music-like stimuli and sensitivity to syntactic violations of these stimuli. Daniel Levitin presented results from his lab suggesting that adolescents with autism spectrum disorders (ASD) are more sensitive to the structural features of music than to its expressive (emotive) features. It was interesting to compare this with Levitin's previous work on music perception in individuals with Williams syndrome, who, unlike individuals with ASD, display a strong sensitivity to music's expressive features.
While there were several more talks on Friday and Saturday that I am sure were of interest to people in various domains of music cognition, I would like to focus on the one talk that impacted me the most on Saturday, June 28th: Gottfried Schlaug's work, in Symposium 6, on how singing helps aphasic patients. This was by far the BEST talk of the entire conference for me. His presentation and results were evidence of music's multimodal nature and of its ability to help recover function through the homologous language regions of the right hemisphere in aphasic patients with left-hemisphere lesions (left frontal and superior temporal lobes). His talk illustrated how a treatment based on melodic intonation, Melodic Intonation Therapy (MIT), can considerably improve speech production in aphasic patients. Each patient underwent roughly 75 MIT sessions, which enabled them to produce and articulate speech by actively engaging the right-hemisphere homologues of the language regions through musical elements such as intonation, rhythmic tapping, and continuous voicing.
Thursday, July 3, 2008
The Neurosciences and Music III conference - Part 2
Thursday, June 26th was a packed day with sixteen talks in total! I made the mistake of attending all sixteen, when I should have skipped a few. As one can imagine, my brain was completely saturated by the end of the tenth talk.
The first symposium, which consisted of five talks, was on "Rhythms in the brain: Basic science and clinical perspectives." I found Chen et al.'s work on the importance of the premotor cortex in music production to be the most interesting of the five. Chen et al. had participants perform various musical rhythm-related tasks that included passive listening, anticipating a motor act, and performing a motor act. Their fMRI results suggested that, in addition to using motor areas and the cerebellum for sequencing rhythmic actions, musicians use the prefrontal cortex to a greater extent (their hypothesis: prefrontal activity in musicians is related to their superior ability to organize musical rhythms with respect to working memory). Their data also indicated that the posterior STG (superior temporal gyrus) and the premotor cortex are important mediating nodes for transforming auditory information into motor activity. In addition, their data suggested a direct mapping of auditory information onto the motor system through auditory links to the ventral premotor cortex, whereas the dorsal premotor cortex appeared to have indirect links and to process higher-order information pertaining to musical rhythm.
The second symposium was short, consisting of only two talks on normal and impaired singing. I liked the second talk, by Steven Brown. The first talk, on poor pitch singing, didn't really offer any new insights. By the end of that talk I had specific questions but, unfortunately, did not receive satisfactory answers. The questions were along these lines: 1) Where do you draw the line between a poor singer and a tone-deaf (amusic) person? (I believe the Montreal Battery of Evaluation of Amusia could provide answers to this question.) 2) If a person with normal music perception and recognition abilities is a good musician (plays an instrument at a fairly accomplished level) but a poor singer, what differentiates that person from a poor singer with music cognition deficits? Steven Brown's talk was compelling because it answered my second question by narrowing the reasons for poor pitch singing (in people with normal music cognition abilities) down to deficient or anomalous activation in the larynx motor cortex (in addition to other areas).
The third symposium was on musical training and induced cortical plasticity. The speakers presented findings generally in support of the notion that musical training and performance induce changes in the brain, within limits. The fourth symposium was on music and memory. Emmanuel Bigand's talk was interesting: his research attempts to find the minimum time necessary to activate musical and linguistic memories, where minimum time can also be interpreted as the minimum amount of information necessary in the time course of music recognition. His results suggested that a slice of sound as short as 50 ms is enough to "bootstrap" memory for music.
The most interesting talk in this session was Isabelle Peretz's. Peretz et al. attempted to identify the neural correlates of the musical lexicon (the storage areas for familiar melodies, i.e. melodies stored in long-term memory). Participants listened to familiar melodies, unfamiliar melodies, and scrambled melodies. Contrasting the fMRI activation during familiar-melody listening with the activation during unfamiliar-melody listening suggested two things: 1) the supplementary motor area in the left hemisphere might be involved in "inner singing" or emulation, and 2) the right superior temporal sulcus may be involved in retrieving information from the musical lexicon.
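To make the logic of that contrast concrete, here is a toy Python/NumPy sketch of a voxelwise subtraction between two listening conditions; the data, dimensions, and threshold are invented for illustration, and a real analysis would of course use fitted GLM contrasts and proper statistics rather than raw averages:

import numpy as np

# Invented example data: trial-by-voxel activation for two conditions.
rng = np.random.default_rng(0)
n_trials, n_voxels = 20, 1000
familiar = rng.normal(loc=1.0, scale=0.5, size=(n_trials, n_voxels))    # familiar-melody trials
unfamiliar = rng.normal(loc=0.8, scale=0.5, size=(n_trials, n_voxels))  # unfamiliar-melody trials

# Voxelwise contrast: mean activation for familiar minus mean for unfamiliar.
contrast = familiar.mean(axis=0) - unfamiliar.mean(axis=0)

# Voxels exceeding an arbitrary threshold would be candidate "musical lexicon"
# regions (e.g. left SMA or right superior temporal sulcus in the study above).
candidate_voxels = np.where(contrast > 0.3)[0]
print(len(candidate_voxels))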
Tuesday, July 1, 2008
The Neurosciences and Music III conference - Part 1
I just got back from the three-day Neurosciences and Music III conference in Montreal, held June 26th-28th, 2008. Held once every three years, this is THE conference for researchers in music cognition interested in the neuroscience side of the field. The conference was packed with seven symposia and two poster sessions, in which some of the field's biggest scientists presented the most current research across various domains of music cognition and neuroscience.
The good part about attending a conference like this is that you get to witness quality research directly from its source, rather than relying solely on published papers. In addition, every conference provides an opportunity to network with peers and established researchers, sowing the seeds for possible collaborations. Now for the bad part, which probably applies to most conferences: too much information is presented in too short a time, hardly leaving the audience enough time to process it all. On the one hand I hated skipping some of the talks, especially after having traveled all the way to Montreal (and traveling to these conferences isn't cheap for a graduate student); on the other hand, I saw no other option once I had crossed the threshold of information overload. So a conference like this always puts you in the position of having to choose wisely which talks to attend.
I will attempt to summarize my version of the events on all three days of the conference in my next few posts.
Monday, June 16, 2008
Review of "This is your brain on music" by Daniel Levitin - Part 4
I found chapters 4, 5, and 6 to be the most interesting parts of the book. Without giving much away, I will try to provide a concise review of these chapters. The topics covered in chapter 4 include the role of functional units (called schemata) in our long-term memory stores, which aid perception and enable anticipation of incoming information, and how composers violate them to create a sense of novelty in the listener; the role of neurotransmitters and receptors in generating the emotions tied to expectation and to its satisfaction or violation; hemispheric specialization and its functional role in the context of music and language; and the effect of musical training on hemispheric specialization. At the end of the chapter, Levitin provides a summarized, high-level, hypothetical picture of the brain's neural organization for music and speech.
Chapter 5 deals almost entirely with functional processes pertaining to memory and categorization; cognitive science students and researchers should find it appealing. Frequency effects for melodies and melodic invariance are also briefly mentioned in this chapter.
I found chapter 6 to be the most enjoyable chapter of the book, and I hope other readers, unlike me, find the remaining chapters equally interesting, because herein lies my problem. After reading chapter 6, I felt like a skydiver who completes a jump in the middle of the day, experiences an intense adrenaline rush, and then has nothing to look forward to, spending the rest of the day in lethargy. To do justice to the book, I will end my review with chapter 6 and hope that someone more deserving will inform readers about the remaining chapters; hopefully other readers will hit their high notes at later points in the book. Music, to most people, is an emotional activity, and this chapter highlights that beautifully while informing us about the neural correlates that give rise to those emotions. Cognitive psychologists have studied a wide range of cognitive processes, but most have shied away from studying emotion. I am glad that emotion was treated on par with other cognitive processes in this book, especially in the context of a routine yet wonderful activity such as listening to music.