Abstract Details

Modularity Allows Classification of Human Brain Networks During Music and Speech Perception
Melia Bonomo, Christof Karmonik, Houston Methodist Research Institute / Weill Cornell Medical College; Anthony Brandt, Rice University; J Todd Frazier, Houston Methodist Hospital (Department of Physics and Astronomy, Rice University, Houston, TX)
C4

Introduction: Therapeutic music engagement is effective for improving cognitive health in patients suffering from neurological disease or trauma; however, little is known about its mechanism of action. Here, we investigated a means of quantifying individual differences in functional brain activity while subjects listened to a variety of auditory pieces. Modularity was used to measure the degree to which functional activity within a group of brain regions was more highly correlated than activity between groups.

Methods: Functional magnetic resonance imaging (fMRI) data from 25 healthy subjects were acquired at the MRI core of Houston Methodist Hospital (Philips Ingenia 3.0T scanner, 2.4 s repetition time, 130 volumes) using a block design of six alternating 24 s intervals of silence and auditory stimulus. Subjects listened to a favorite song, unemotional speech (a newscast by Walter Cronkite), emotional speech (Charlie Chaplin in The Great Dictator), culturally familiar music (instrumental piano by J.S. Bach), culturally unfamiliar music (Gagaku, classical Japanese court music), and unfamiliar foreign speech (Xhosa, a South African language with click consonants). The fMRI data were preprocessed and transformed into Talairach space, and voxel time series were averaged within Brodmann areas (BAs). Pearson correlations between pairs of BA time series were computed to determine functional connectivity. Newman's algorithm (Newman, PNAS 103(23):8577-82, 2006) was used to assign BAs to modules and to measure whole-brain network modularity. Adaptability of the functional network was quantified as the change in modularity from a subject's favorite song to each of the other pieces they listened to. (Illustrative code sketches of the connectivity, modularity, and adaptability computations follow the abstract.)

Results: For 17 of the 25 subjects, modularity during the favorite song was either the highest or the lowest among the auditory pieces that subject heard. We divided subjects into 'high' and 'low' modularity groups based on whether their modularity during the favorite song was above or below the cohort average, and we found significant group-wise differences in network adaptability. Comparing module composition between the two groups, we found consistent network organization of BAs associated with hearing, vision, and bodily sensation. However, the module allegiance of BAs involved in emotional processing and the default mode network differed between groups across the auditory pieces.

Conclusion: The differences between the high and low modularity groups may provide insight into why the effects of auditory-based therapeutic interventions vary across individual patients. Furthermore, our analysis of the modular organization of BAs begins to address why listening to music is distinct from other auditory-based therapy enrichments, such as audiobooks, and helps clarify the mechanism of therapeutic action. The use of modularity as a classifier of functional brain activity during auditory processing paves the way for personalized music therapy interventions and for understanding how music benefits the brain.
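
The functional connectivity step described in Methods reduces to a pairwise Pearson correlation over BA-averaged time series. Below is a minimal Python sketch, assuming a NumPy array of preprocessed BA signals; the parcellation size of 84 areas is an illustrative assumption, not a detail from the abstract.

```python
import numpy as np

def connectivity_matrix(ba_timeseries: np.ndarray) -> np.ndarray:
    """Pairwise Pearson correlation between Brodmann-area time series.

    ba_timeseries has shape (n_areas, n_volumes): each row is the mean
    BOLD signal of one Brodmann area across the fMRI volumes.
    """
    fc = np.corrcoef(ba_timeseries)  # rows are variables -> (n_areas, n_areas)
    np.fill_diagonal(fc, 0.0)        # drop trivial self-correlations
    return fc

# Random data standing in for preprocessed fMRI: 84 BAs (assumed
# parcellation size) x 130 volumes, matching the acquisition length.
rng = np.random.default_rng(0)
fc = connectivity_matrix(rng.standard_normal((84, 130)))
```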
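For the module assignment, the abstract cites Newman's spectral algorithm (Newman, 2006). The sketch below substitutes networkx's greedy modularity maximization (Clauset-Newman-Moore), which optimizes the same quantity Q; the abstract does not say how negative correlations were handled, so zeroing them here is an assumption.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

def network_modularity(fc: np.ndarray):
    """Assign BAs to modules and return whole-brain modularity Q."""
    w = np.where(fc > 0, fc, 0.0)  # assumption: keep positive weights only
    G = nx.from_numpy_array(w)     # zero entries produce no edge
    # Greedy Q maximization (Clauset-Newman-Moore); stands in for the
    # spectral method of Newman (2006) cited in the abstract.
    modules = greedy_modularity_communities(G, weight="weight")
    q = modularity(G, modules, weight="weight")
    return q, [sorted(m) for m in modules]

q_favorite, modules = network_modularity(fc)  # fc from the previous sketch
```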
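Adaptability, as defined in Methods, is the change in modularity from the favorite song to each other piece, and the high/low split compares each subject's favorite-song modularity with the cohort average. A sketch with hypothetical values (not study data):

```python
import numpy as np

def adaptability(q_by_piece: dict) -> dict:
    """Change in modularity from the favorite song to each other piece."""
    q_fav = q_by_piece["favorite_song"]
    return {piece: q - q_fav for piece, q in q_by_piece.items()
            if piece != "favorite_song"}

# Hypothetical per-piece modularity values for one subject.
q_by_piece = {"favorite_song": 0.41, "newscast": 0.35,
              "emotional_speech": 0.38, "bach": 0.37,
              "gagaku": 0.33, "xhosa": 0.36}
delta_q = adaptability(q_by_piece)

# High/low split: compare each subject's favorite-song modularity with
# the cohort mean (illustrative values, one per subject).
cohort_q_fav = np.array([0.41, 0.28, 0.37, 0.45, 0.30])
groups = np.where(cohort_q_fav > cohort_q_fav.mean(), "high", "low")
```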