https://read.qxmd.com/read/38518889/audiovisual-speech-asynchrony-asymmetrically-modulates-neural-binding
#1
JOURNAL ARTICLE
Marc Sato
Previous psychophysical and neurophysiological studies in young healthy adults have provided evidence that audiovisual speech integration occurs with a large degree of temporal tolerance around true simultaneity. To further determine whether audiovisual speech asynchrony modulates auditory cortical processing and neural binding in young healthy adults, N1/P2 auditory evoked responses were compared using an additive model during a syllable categorization task, with or without an audiovisual asynchrony ranging from 240 ms visual lead to 240 ms auditory lead...
March 20, 2024: Neuropsychologia
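The additive-model logic referenced in this abstract compares the evoked response to audiovisual speech against the sum of the two unimodal responses; deviation from additivity is taken as an index of neural binding. A minimal sketch of that comparison, using synthetic stand-in arrays rather than the study's actual ERP data:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1000  # Hz; epochs of 1 s, stimulus onset at t = 0

# Synthetic stand-in ERPs, shape (channels, timepoints); in practice
# these would be trial-averaged evoked responses per condition.
erp_av = rng.standard_normal((32, fs))  # audiovisual syllables
erp_a = rng.standard_normal((32, fs))   # auditory-only syllables
erp_v = rng.standard_normal((32, fs))   # visual-only syllables

# Additive model: if auditory and visual inputs were processed
# independently, the AV response should equal A + V; the residual
# interaction indexes neural binding.
interaction = erp_av - (erp_a + erp_v)

# Mean deviation from additivity in an N1 window (80-120 ms post-onset).
win = slice(int(0.080 * fs), int(0.120 * fs))
n1_binding = interaction[:, win].mean(axis=1)
print(n1_binding.shape)  # one value per channel
```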
https://read.qxmd.com/read/38488460/perceptual-uncertainty-explains-activation-differences-between-audiovisual-congruent-speech-and-mcgurk-stimuli
#2
JOURNAL ARTICLE
Chenjie Dong, Uta Noppeney, Suiping Wang
Face-to-face communication relies on the integration of acoustic speech signals with the corresponding facial articulations. In the McGurk illusion, an auditory /ba/ phoneme presented simultaneously with the facial articulation of /ga/ (i.e., a viseme) is typically fused into an illusory /da/ percept. Despite its widespread use as an index of audiovisual speech integration, critics argue that it arises from perceptual processes that differ categorically from natural speech recognition. Conversely, Bayesian theoretical frameworks suggest that both the illusory McGurk percept and the veridical audiovisual congruent speech percept result from probabilistic inference based on noisy sensory signals...
March 2024: Human Brain Mapping
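The Bayesian account mentioned above treats the McGurk percept as ordinary probabilistic fusion of noisy cues. A minimal sketch, with an illustrative /ba/-/da/-/ga/ continuum and noise levels that are assumptions rather than fitted model parameters, shows how an auditory /ba/ and a visual /ga/ can yield a posterior peaking at /da/:

```python
import numpy as np

# Phoneme continuum from /ba/ (0.0) through /da/ (0.5) to /ga/ (1.0);
# positions and noise levels are illustrative assumptions.
s = np.linspace(0.0, 1.0, 201)

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Noisy sensory evidence: audio says /ba/, video says /ga/.
lik_audio = gaussian(s, 0.0, 0.35)
lik_visual = gaussian(s, 1.0, 0.35)

# Under a common-cause assumption with a flat prior, the posterior over
# the phoneme is proportional to the product of the unimodal likelihoods.
posterior = lik_audio * lik_visual
posterior /= np.trapz(posterior, s)

print("posterior peaks at s =", s[np.argmax(posterior)])  # ~0.5, i.e. /da/
```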
https://read.qxmd.com/read/38334251/neural-correlates-of-audiovisual-narrative-speech-perception-in-children-and-adults-on-the-autism-spectrum-a-functional-magnetic-resonance-imaging-study
#3
JOURNAL ARTICLE
Lars A Ross, Sophie Molholm, John S Butler, Victor A Del Bene, Tufikameni Brima, John J Foxe
Autistic individuals show substantially reduced benefit from observing visual articulations during audiovisual speech perception, a multisensory integration deficit that is particularly relevant to social communication. This has mostly been studied using simple syllabic or word-level stimuli and it remains unclear how altered lower-level multisensory integration translates to the processing of more complex natural multisensory stimulus environments in autism. Here, functional neuroimaging was used to compare neural correlates of audiovisual gain (AV-gain) in 41 autistic individuals to those of 41 age-matched non-autistic controls when presented with a complex audiovisual narrative...
February 2024: Autism Research: Official Journal of the International Society for Autism Research
https://read.qxmd.com/read/38332159/audiovisual-integration-in-the-mcgurk-effect-is-impervious-to-music-training
#4
JOURNAL ARTICLE
Hsing-Hao Lee, Karleigh Groves, Pablo Ripollés, Marisa Carrasco
The McGurk effect refers to an audiovisual speech illusion in which discrepant auditory and visual syllables produce a percept that is a fusion of the visual and auditory components. However, little is known about how individual differences contribute to the McGurk effect. Here, we examined whether music training experience, which involves audiovisual integration, can modulate the McGurk effect. Seventy-three participants completed the Goldsmiths Musical Sophistication Index (Gold-MSI) questionnaire to evaluate their music expertise on a continuous scale...
February 8, 2024: Scientific Reports
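Because the Gold-MSI yields a continuous sophistication score, the natural test of "imperviousness" is a correlation between that score and each participant's McGurk fusion rate. A minimal sketch with synthetic stand-in data (score range and fusion rates are illustrative, not the study's data):

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Synthetic stand-in data for n = 73 participants: continuous Gold-MSI
# scores (range illustrative) and proportions of fused (/da/) responses.
gold_msi = rng.uniform(30, 120, size=73)
fusion_rate = rng.uniform(0.0, 1.0, size=73)

# If music training does not modulate the illusion, the correlation
# between sophistication and fusion rate should be near zero.
r, p = pearsonr(gold_msi, fusion_rate)
print(f"r = {r:.2f}, p = {p:.3f}")
```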
https://read.qxmd.com/read/38300594/audiovisual-speech-perception-in-noise-in-younger-and-older-bilinguals
#5
JOURNAL ARTICLE
Alexandre Chauvin, Sophie Pellerin, Anna-Francesca Boatswain-Jacques, Jean-Louis René, Natalie A Phillips
Speech perception in noise becomes increasingly difficult with age. Similarly, bilinguals often have difficulty with speech perception in noise in their second language (L2) due to less developed language knowledge in L2. Little is known about older bilinguals, who experience age-related sensory and cognitive changes but have extensive L2 experience. Furthermore, while audiovisual (AV) speech cues and supportive sentence context facilitate speech perception in noise in native listeners, much less is known for bilingual listeners, particularly older bilinguals...
February 1, 2024: Psychology and Aging
https://read.qxmd.com/read/38192017/auditory-visual-and-cognitive-abilities-in-normal-hearing-adults-hearing-aid-users-and-cochlear-implant-users
#6
JOURNAL ARTICLE
Dorien Ceuleers, Hannah Keppler, Sofie Degeest, Nele Baudonck, Freya Swinnen, Katrien Kestens, Ingeborg Dhooge
OBJECTIVES: Speech understanding is considered a bimodal and bidirectional process in which visual information (i.e., speechreading) and cognitive functions (i.e., top-down processes) are involved. Therefore, the purpose of the present study is twofold: (1) to investigate the auditory (A), visual (V), and cognitive (C) abilities of normal-hearing individuals, hearing aid (HA) users, and cochlear implant (CI) users, and (2) to determine an auditory, visual, cognitive (AVC) profile providing a comprehensive overview of a person's speech processing abilities, covering the broader variety of factors involved in speech understanding...
January 9, 2024: Ear and Hearing
https://read.qxmd.com/read/38175327/visual-fixations-during-processing-of-time-compressed-audiovisual-presentations
#7
JOURNAL ARTICLE
Nicole D Perez, Michael J Kleiman, Elan Barenholtz
Time-compression is a technique that allows users to adjust the playback speed of audio recordings, but comprehension declines at higher speeds. Previous research has shown that under challenging auditory conditions people have a greater tendency to fixate regions closer to a speaker's mouth. In the current study, we investigated whether there is a similar tendency to fixate the mouth region for time-compressed stimuli. Participants were presented with a brief audiovisual lecture at different speeds, while eye fixations were recorded, and comprehension was tested...
January 4, 2024: Attention, Perception & Psychophysics
https://read.qxmd.com/read/38078311/impact-of-room-acoustics-and-visual-cues-on-speech-perception-and-talker-localization-by-children-with-mild-bilateral-or-unilateral-hearing-loss
#8
JOURNAL ARTICLE
Dawna Lewis, Sarah Al-Salim, Tessa McDermott, Andrew Dergan, Ryan W McCreery
INTRODUCTION: This study evaluated the ability of children (8-12 years) with mild bilateral or unilateral hearing loss (MBHL/UHL), listening unaided, or with normal hearing (NH) to locate and understand talkers in varying auditory/visual acoustic environments. Potential differences across hearing status were examined. METHODS: Participants heard sentences presented by female talkers from five surrounding locations in varying acoustic environments. A localization-only task included two conditions (auditory only, visually guided auditory) in three acoustic environments (favorable, typical, poor)...
2023: Frontiers in Pediatrics
https://read.qxmd.com/read/38077093/evidence-for-a-causal-dissociation-of-the-mcgurk-effect-and-congruent-audiovisual-speech-perception-via-tms
#9
EunSeon Ahn, Areti Majumdar, Taraz Lee, David Brang
Congruent visual speech improves speech perception accuracy, particularly in noisy environments. Conversely, mismatched visual speech can alter what is heard, leading to an illusory percept known as the McGurk effect. This illusion has been widely used to study audiovisual speech integration, illustrating that auditory and visual cues are combined in the brain to generate a single coherent percept. While prior transcranial magnetic stimulation (TMS) and neuroimaging studies have identified the left posterior superior temporal sulcus (pSTS) as a causal region involved in the generation of the McGurk effect, it remains unclear whether this region is critical only for this illusion or also for the more general benefits of congruent visual speech (e...
November 27, 2023: bioRxiv
https://read.qxmd.com/read/37990611/abnormal-connectivity-and-activation-during-audiovisual-speech-perception-in-schizophrenia
#10
JOURNAL ARTICLE
Yoji Hirano, Itta Nakamura, Shunsuke Tamura
The unconscious integration of vocal and facial cues during speech perception facilitates face-to-face communication. Recent studies have provided substantial behavioural evidence concerning impairments in audiovisual (AV) speech perception in schizophrenia. However, the specific neurophysiological mechanism underlying these deficits remains unknown. Here, we investigated activity and connectivity centered on the auditory cortex during AV speech perception in schizophrenia. Using magnetoencephalography, we recorded and analysed event-related fields in response to auditory (A: voice), visual (V: face) and AV (voice-face) stimuli in 23 schizophrenia patients (13 males) and 22 healthy controls (13 males)...
November 22, 2023: European Journal of Neuroscience
https://read.qxmd.com/read/37972516/exploring-audiovisual-speech-perception-in-monolingual-and-bilingual-children-in-uzbekistan
#11
JOURNAL ARTICLE
Shakhlo Nematova, Benjamin Zinszer, Kaja K Jasinska
This study aimed to investigate the development of audiovisual speech perception in monolingual Uzbek-speaking and bilingual Uzbek-Russian-speaking children, focusing on the impact of language experience on audiovisual speech perception and the role of visual phonetic (i.e., mouth movements corresponding to phonetic/lexical information) and temporal (i.e., timing of speech signals) cues. A total of 321 children aged 4 to 10 years in Tashkent, Uzbekistan, discriminated /ba/ and /da/ syllables across three conditions: auditory-only, audiovisual phonetic (i...
November 14, 2023: Journal of Experimental Child Psychology
https://read.qxmd.com/read/37951157/competing-influence-of-visual-speech-on-auditory-neural-adaptation
#12
JOURNAL ARTICLE
Marc Sato
Visual information from a speaker's face enhances auditory neural processing and speech recognition. To determine whether auditory memory can be influenced by visual speech, the degree of auditory neural adaptation of an auditory syllable preceded by an auditory, visual, or audiovisual syllable was examined using EEG. Consistent with previous findings and additional adaptation of auditory neurons tuned to acoustic features, stronger adaptation of N1, P2 and N2 auditory evoked responses was observed when the auditory syllable was preceded by an auditory compared to a visual syllable...
November 9, 2023: Brain and Language
https://read.qxmd.com/read/37934877/perceptions-of-adults-with-hearing-loss-about-the-communication-difficulties-generated-by-the-covid-19-preventive-measures-a-qualitative-study
#13
JOURNAL ARTICLE
Loonan Chauvette, Alexis Pinsonnault-Skvarenina, Andréanne Sharp, Jean-Pierre Gagné, Adriana Bender Moreira Lacerda, Mathieu Hotton
PURPOSE: The COVID-19 pandemic led to the implementation of preventive measures that exacerbated communication difficulties for individuals with hearing loss. This study aims to explore the perception of adults with hearing loss about the communication difficulties caused by the preventive measures and about their experiences with communication 1 year after the adoption of these preventive measures. METHOD: Individual semistructured interviews were conducted via videoconference with six adults who have hearing loss from the province of Québec, Canada...
November 7, 2023: Journal of Speech, Language, and Hearing Research: JSLHR
https://read.qxmd.com/read/37860909/the-effect-of-face-orientation-on-audiovisual-speech-integration-in-infancy-an-electrophysiological-study
#14
JOURNAL ARTICLE
Magdalena Szmytke, Dianna Ilyka, Joanna Duda-Goławska, Zuzanna Laudańska, Anna Malinowska-Korczak, Przemysław Tomalski
Humans pay special attention to faces and speech from birth, but the interplay of developmental processes leading to specialization is poorly understood. We investigated the effects of face orientation on audiovisual (AV) speech perception in two age groups of infants (younger: 5- to 6.5-month-olds; older: 9- to 10.5-month-olds) and adults. We recorded event-related potentials (ERP) in response to videos of upright and inverted faces producing a /ba/ articulation, dubbed with auditory syllables that either matched (/ba/) or mismatched (/ga/) the mouth movement...
November 2023: Developmental Psychobiology
https://read.qxmd.com/read/37850878/multilevel-modeling-of-gaze-from-listeners-with-hearing-loss-following-a-realistic-conversation
#15
JOURNAL ARTICLE
Martha M Shiell, Jeppe Høy-Christensen, Martin A Skoglund, Gitte Keidser, Johannes Zaar, Sergi Rotger-Griful
PURPOSE: There is a need for tools to study real-world communication abilities in people with hearing loss. We outline a potential gaze-analysis method for this purpose and use it to answer the question of when and how much listeners with hearing loss look toward a new talker in a conversation. METHOD: Twenty-two older adults with hearing loss followed a prerecorded two-person audiovisual conversation in the presence of babble noise. We compared their eye-gaze direction to the conversation in two multilevel logistic regression (MLR) analyses...
November 9, 2023: Journal of Speech, Language, and Hearing Research: JSLHR
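A multilevel logistic regression of the kind described here models a binary gaze outcome (on the new talker or not) per time bin, with participant-level random effects. A minimal sketch under assumed variable names and synthetic data, using statsmodels' Bayesian mixed GLM (the published analyses may have used different software and predictors):

```python
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

rng = np.random.default_rng(0)

# Synthetic long-format gaze data: one row per time bin per listener;
# gaze_new_talker is 1 when gaze falls on the incoming talker.
n_subj, n_bins = 22, 50
df = pd.DataFrame({
    "participant": np.repeat(np.arange(n_subj), n_bins),
    "time_since_switch": np.tile(np.linspace(-1.0, 2.0, n_bins), n_subj),
})
p = 1.0 / (1.0 + np.exp(-df["time_since_switch"]))
df["gaze_new_talker"] = rng.binomial(1, p)

# Multilevel logistic regression: fixed effect of time relative to the
# turn switch, random intercept per participant (variance component).
model = BinomialBayesMixedGLM.from_formula(
    "gaze_new_talker ~ time_since_switch",
    {"participant": "0 + C(participant)"},
    df,
)
result = model.fit_vb()  # variational Bayes estimation
print(result.summary())
```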
https://read.qxmd.com/read/37843038/the-development-of-audiovisual-speech-perception-in-mandarin-speaking-children-evidence-from-the-mcgurk-paradigm
#16
JOURNAL ARTICLE
Yi Weng, Yicheng Rong, Gang Peng
The developmental trajectory of audiovisual speech perception in Mandarin-speaking children remains understudied. This cross-sectional study of Mandarin-speaking 3- to 4-year-old, 5- to 6-year-old, and 7- to 8-year-old children and adults from Xiamen, China (n = 87, 44 males) investigated this issue using the McGurk paradigm with three levels of auditory noise. For the identification of congruent stimuli, 3- to 4-year-olds underperformed the older groups, whose performances were comparable. For the perception of incongruent stimuli, a developmental shift was observed, as 3- to 4-year-olds made significantly more audio-dominant but fewer audiovisual-integrated responses to incongruent stimuli than the older groups...
October 16, 2023: Child Development
https://read.qxmd.com/read/37797728/a-systematic-review-on-speech-in-noise-perception-in-autism
#17
REVIEW
Diego Ruiz Callejo, Bart Boets
Individuals with autism spectrum disorder (ASD) exhibit atypical speech-in-noise (SiN) perception, but the scope of these impairments has not been clearly defined. We conducted a systematic review of the behavioural research on SiN perception in ASD, using a comprehensive search strategy across databases (Embase, Pubmed, Web of Science, APA PsycArticles, LLBA, clinicaltrials.gov and PsyArXiv). We retained 20 studies, which generally revealed intact speech perception in stationary noise, while impairments in speech discrimination were found in temporally modulated noise, concurrent speech, and audiovisual speech perception...
September 27, 2023: Neuroscience and Biobehavioral Reviews
https://read.qxmd.com/read/37759946/advances-in-understanding-the-phenomena-and-processing-in-audiovisual-speech-perception
#18
EDITORIAL
Kaisa Tiippana
The Special Issue entitled "Advances in Understanding the Phenomena and Processing in Audiovisual Speech Perception" attracted a variety of articles written by prominent authors in the field [...].
September 20, 2023: Brain Sciences
https://read.qxmd.com/read/37757989/a-representation-of-abstract-linguistic-categories-in-the-visual-system-underlies-successful-lipreading
#19
JOURNAL ARTICLE
Aaron R Nidiffer, Cody Zhewei Cao, Aisling O'Sullivan, Edmund C Lalor
There is considerable debate over how visual speech is processed in the absence of sound and whether neural activity supporting lipreading occurs in visual brain areas. Much of the ambiguity stems from a lack of behavioral grounding and from neurophysiological analyses that cannot disentangle the high-level linguistic and the phonetic/energetic contributions of visual speech. To address this, we recorded EEG from human observers as they watched silent videos, half of which were novel and half of which were previously rehearsed with the accompanying audio...
November 15, 2023: NeuroImage
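Studies in this line typically relate continuous EEG to speech features with linear (temporal response function / stimulus reconstruction) models. A minimal sketch of a backward model that reconstructs the absent soundtrack's envelope from time-lagged EEG, on synthetic stand-in data (sampling rate, lag window, and regularization are all assumptions, not the paper's settings):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
fs = 64  # Hz; a common rate for envelope-tracking analyses

# Synthetic stand-in data: one minute of EEG (samples x channels) from
# silent-video viewing, plus the envelope of the absent soundtrack.
eeg = rng.standard_normal((fs * 60, 32))
envelope = rng.standard_normal(fs * 60)

def lagged(x, n_lags):
    # Stack time-lagged copies of the EEG (circular shift; a real
    # analysis would trim the edges), giving a 0-250 ms lag window.
    return np.concatenate([np.roll(x, k, axis=0) for k in range(n_lags)], axis=1)

X = lagged(eeg, n_lags=int(0.250 * fs))

# Cross-validated reconstruction of the envelope from EEG ("backward"
# model); above-chance scores index neural tracking of absent speech.
scores = cross_val_score(Ridge(alpha=1.0), X, envelope, cv=5, scoring="r2")
print("mean cross-validated R^2:", scores.mean())
```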
https://read.qxmd.com/read/37742092/slps-perceptions-of-language-learning-myths-about-children-who-are-dhh
#20
JOURNAL ARTICLE
Jena McDaniel, Hannah Krimm, C Melanie Schuele
This article reports on speech-language pathologists' (SLPs') knowledge related to myths about spoken language learning of children who are deaf and hard of hearing (DHH). The broader study was designed as a step toward narrowing the research-practice gap and providing effective, evidence-based language services to children. In the broader study, SLPs (n = 106) reported their agreement/disagreement with myth statements and true statements (n = 52) about 7 clinical topics related to speech and language development...
September 23, 2023: Journal of Deaf Studies and Deaf Education