Research Focus

My research program examines speech as a whole-body, multimodal behavior, one that emerges from the coordinated interaction of vocal-tract articulation, facial expression, and postural control. Rather than treating speech as an isolated vocal-tract task, I investigate how the body’s interconnected systems collectively shape the acoustic and articulatory structure of spoken language. Through work spanning articulatory imaging, biomechanical modeling, and large-scale speech datasets, I study how speakers maintain communicative precision across languages, contexts, and modalities. My current research integrates ultrasound, real-time MRI, OpenFace, and EMG to reveal how bilinguals adapt their speech motor patterns, how postural and facial systems support segmental contrast, and how these embodied processes vary across linguistic communities.

Jahurul Islam, Ph.D.
Whole-Body Phonetics | Speech Sciences | Data Science
Teacher | Researcher

Cross-Linguistic Phonetic Production and Perception

This line of work examines how multilingual speakers manage sound systems across languages when speech is viewed as an embodied motor behavior. I extend insights from studies of cross-linguistic experience to investigate how language experience drives adaptations in whole-body articulatory settings, integrating tongue posture, lip configuration, facial movement, and postural support. Current projects explore how bilinguals maintain or reorganize articulatory contrast during code-switching, how language-specific posture influences vowel and rhotic production, and how perceptual cues interact with biomechanical constraints.

Articulatory Dynamics & Multimodal Speech Motor Control

This research investigates the multisystem biomechanics of speech using ultrasound, EMG, real-time MRI, and facial tracking. I explore how vocal-tract gestures coordinate with facial and postural adjustments to achieve stable acoustic outputs. Current investigations include velum movement velocity and nasal gradience, tongue–jaw–larynx coupling in vowel production, lip biomechanics across languages, and anticipatory postural adjustments during speech. This work advances an integrated picture of speech as a coordinated system shaped by physiological constraints and multimodal motor strategies.

Computational Phonetics & Multimodal Speech Technology

I combine experimental phonetics with computation to build tools that reflect speech’s embodied, multimodal nature. My work includes forced aligners for under-resourced languages, cross-linguistic articulatory–acoustic modeling, and multimodal corpora incorporating facial and articulatory data. Recent contributions include Bangla-Align, an alignment toolkit for Bangla, and ongoing development of multimodal pipelines using OpenFace and ultrasound. These tools support reproducible research and broaden access to speech technology for diverse linguistic communities.

Education

  • Ph.D. in Linguistics
    Georgetown University, Washington, DC, USA, 2019
  • M.Sc. in Linguistics
    Georgetown University, Washington, DC, USA, 2017
  • M.A. in English (Linguistics)
    North Carolina State University, Raleigh, USA, 2014
  • M.A. in English
    Jahangirnagar University, Savar, Dhaka, Bangladesh, 2007
  • B.A. (Honors) in English
    Jahangirnagar University, Savar, Dhaka, Bangladesh, 2006

Professional History

  • Lecturer, Department of Linguistics
    University of British Columbia, Vancouver, Canada. Dec. 2019–present.
  • Assistant Professor, Department of English
    Jahangirnagar University, Savar, Dhaka, Bangladesh. Sep. 2019–Dec. 2019.
  • Assistant Professor (on leave), Department of English
    Jahangirnagar University, Savar, Dhaka, Bangladesh. Jul. 2012–Sep. 2019.
  • Lecturer, Department of English
    Jahangirnagar University, Savar, Dhaka, Bangladesh. Dec. 2009–Jul. 2011.
  • Lecturer, Department of English
    Comilla University, Kotbari, Cumilla, Bangladesh. Apr. 2009–Dec. 2009.

Learn about my recent work: