Bharath Chandrasekaran

  • Professor and Vice Chair for Research, Department of Communication Science and Disorders

Current Research

Dr. Chandrasekaran's research examines the neurobiological computations that underlie speech perception and learning, using an interdisciplinary, computational, and lifespan approach.

From a clinical perspective, Dr. Chandrasekaran's laboratory seeks a richer understanding of the neurocognitive sources of individual differences in speech processing. The laboratory also aims to develop optimized, neurobiologically informed auditory training approaches for second-language learning, learning impairments, and auditory processing deficits.

Using cognitive neuroscience approaches (fMRI, DTI, cortical and subcortical EEG, and computational modeling), the laboratory focuses on:
1. Understanding brain dynamics during speech perception in challenging listening environments
2. Understanding the functional dynamics of the neural circuitry underlying speech categorization


Selected Current Publications

Lau JCY, Wong PCM, Chandrasekaran B. Interactive effects of linguistic abstraction and stimulus statistics in the online modulation of neural speech encoding. Atten Percept Psychophys. 2018 Dec 18. doi: 10.3758/s13414-018-1621-9.

Feng G, Yi HG, Chandrasekaran B. The Role of the Human Auditory Corticostriatal Network in Speech Learning. Cereb Cortex. 2018 Dec 7. doi: 10.1093/cercor/bhy289.

Xie Z, Reetzke R, Chandrasekaran B. Taking Attention Away from the Auditory Modality: Context-dependent Effects on Early Sensory Encoding of Speech. Neuroscience. 2018 Aug 1;384:64-75. doi: 10.1016/j.neuroscience.2018.05.023.

Reetzke R, Xie Z, Llanos F, Chandrasekaran B. Tracing the Trajectory of Sensory Plasticity across Different Stages of Speech Learning in Adulthood. Curr Biol. 2018 May 7;28(9):1419-1427.e4. doi: 10.1016/j.cub.2018.03.026.

Deng Z, Chandrasekaran B, Wang S, Wong PCM. Training-induced brain activation and functional connectivity differentiate multi-talker and single-talker speech training. Neurobiol Learn Mem. 2018 May;151:1-9. doi: 10.1016/j.nlm.2018.03.009. Epub 2018 Mar 10.

Feng G, Gan Z, Wang S, Wong PCM, Chandrasekaran B. Task-General and Acoustic-Invariant Neural Representation of Speech Categories in the Human Brain. Cereb Cortex. 2018 Sep 1;28(9):3241-3254. doi: 10.1093/cercor/bhx195.

Llanos F, Xie Z, Chandrasekaran B. Hidden Markov modeling of frequency-following responses to Mandarin lexical tones. J Neurosci Methods. 2017 Nov 1;291:101-112. doi: 10.1016/j.jneumeth.2017.08.010. Epub 2017 Aug 12.

Yi HG, Xie Z, Reetzke R, Dimakis AG, Chandrasekaran B. Vowel decoding from single-trial speech-evoked electrophysiological responses: A feature-based machine learning approach. Brain Behav. 2017 Apr 26;7(6):e00665. doi: 10.1002/brb3.665. eCollection 2017 Jun.

Lam BPW, Xie Z, Tessmer R, Chandrasekaran B. The Downside of Greater Lexical Influences: Selectively Poorer Speech Perception in Noise. J Speech Lang Hear Res. 2017 Jun 10;60(6):1662-1673. doi: 10.1044/2017_JSLHR-H-16-0133.

Maddox WT, Koslov S, Yi HG, Chandrasekaran B. Performance Pressure Enhances Speech Learning. Appl Psycholinguist. 2016 Nov;37(6):1369-1396. doi: 10.1017/S0142716415000600. Epub 2015 Dec 23.