
Audiovisual speech perception

https://read.qxmd.com/read/30897109/perception-of-incongruent-audiovisual-english-consonants
#1
Kaylah Lalonde, Lynne A Werner
Causal inference-the process of deciding whether two incoming signals come from the same source-is an important step in audiovisual (AV) speech perception. This research explored causal inference and perception of incongruent AV English consonants. Nine adults were presented auditory, visual, congruent AV, and incongruent AV consonant-vowel syllables. Incongruent AV stimuli included auditory and visual syllables with matched vowels, but mismatched consonants. Open-set responses were collected. For most incongruent syllables, participants were aware of the mismatch between auditory and visual signals (59...
2019: PloS One
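
The causal-inference step described in the abstract above is commonly formalized as Bayesian model comparison between a common-cause and a separate-causes hypothesis (in the style of Körding et al., 2007; not necessarily the analysis used in this particular paper). A minimal sketch, with all noise and prior parameters chosen purely for illustration:

```python
import numpy as np

def posterior_common_cause(x_a, x_v, sigma_a=1.0, sigma_v=2.0,
                           sigma_p=10.0, p_common=0.5):
    """Posterior probability that auditory cue x_a and visual cue x_v
    share a single cause, under Gaussian likelihoods and a zero-mean
    Gaussian prior on the source (all parameter values illustrative)."""
    # Likelihood of both cues under one shared source, with the source
    # location integrated out analytically.
    var_sum = (sigma_a**2 * sigma_v**2
               + sigma_a**2 * sigma_p**2
               + sigma_v**2 * sigma_p**2)
    like_one = np.exp(-0.5 * ((x_a - x_v)**2 * sigma_p**2
                              + x_a**2 * sigma_v**2
                              + x_v**2 * sigma_a**2) / var_sum) \
        / (2 * np.pi * np.sqrt(var_sum))

    # Likelihood under two independent sources.
    def marginal(x, sigma):
        var = sigma**2 + sigma_p**2
        return np.exp(-0.5 * x**2 / var) / np.sqrt(2 * np.pi * var)
    like_two = marginal(x_a, sigma_a) * marginal(x_v, sigma_v)

    # Bayes' rule over the two causal structures.
    return (like_one * p_common
            / (like_one * p_common + like_two * (1 - p_common)))

print(posterior_common_cause(0.5, 0.8))  # nearly matching cues -> high
print(posterior_common_cause(0.5, 8.0))  # discrepant cues -> low
```

When the cues nearly agree the common-cause posterior is high and integration is favored; when they conflict (as with incongruent AV consonants) the posterior drops and observers become aware of the mismatch.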
https://read.qxmd.com/read/30809665/cued-speech-enhances-speech-in-noise-perception
#2
Clémence Bayard, Laura Machart, Antje Strauß, Silvain Gerber, Vincent Aubanel, Jean-Luc Schwartz
Speech perception in noise remains challenging for Deaf/Hard of Hearing people (D/HH), even fitted with hearing aids or cochlear implants. The perception of sentences in noise by 20 implanted or aided D/HH subjects mastering Cued Speech (CS), a system of hand gestures complementing lip movements, was compared with the perception of 15 typically hearing (TH) controls in three conditions: audio only, audiovisual, and audiovisual + CS. Similar audiovisual scores were obtained for signal-to-noise ratios (SNRs) 11 dB higher in D/HH participants compared with TH ones...
February 27, 2019: Journal of Deaf Studies and Deaf Education
https://read.qxmd.com/read/30655099/multisensory-perception-and-attention-in-school-age-children
#3
Ayla Barutchu, Sarah Toohey, Mohit N Shivdasani, Joanne M Fifer, Sheila G Crewther, David B Grayden, Antonio G Paolini
Although it is well known that attention can modulate multisensory processes in adults and infants, this relationship has not been investigated in school-age children. Attention abilities of 53 children (ages 7-13 years) were assessed using three subscales of the Test of Everyday Attention for Children (TEA-Ch): visuospatial attention (Sky Search [SS]), auditory sustained attention (Score), and audiovisual dual task (SSDT, where the SS and Score tasks are performed simultaneously). Multisensory processes were assessed using the McGurk effect (a verbal illusion where speech perception is altered by vision) and the Stream-Bounce (SB) effect (a nonverbal illusion where visual perception is altered by sound)...
January 14, 2019: Journal of Experimental Child Psychology
https://read.qxmd.com/read/30635834/the-familiar-melody-advantage-in-auditory-perceptual-development-parallels-between-spoken-language-acquisition-and-general-auditory-perception
#4
Sarah C Creel
How do learners build up auditory pattern knowledge? Findings from children's spoken word learning suggest more robust auditory representations for highly familiar words than for newly learned words. This argues against spoken language learning as a process of simply acquiring a fixed set of speech sound categories, suggesting instead that specific words may be the relevant units. More generally, one might state this as the specific-learning hypothesis-that acquiring sound pattern knowledge involves learning specific patterns, rather than abstract pattern components...
January 11, 2019: Attention, Perception & Psychophysics
https://read.qxmd.com/read/30541680/audiovisual-speech-perception-and-language-acquisition-in-preterm-infants-a-longitudinal-study
#5
Masahiro Imafuku, Masahiko Kawai, Fusako Niwa, Yuta Shinya, Masako Myowa
BACKGROUND: Preterm infants have a higher risk of language delay throughout childhood. The ability to integrate audiovisual speech information is associated with language acquisition in term infants; however, the relation is still unclear in preterm infants. AIM AND METHODS: This study longitudinally investigated visual preference for audiovisual congruent and incongruent speech during a preferential looking task using eye-tracking in preterm and term infants at 6, 12, and 18 months of corrected age...
December 8, 2018: Early Human Development
https://read.qxmd.com/read/30458524/spontaneous-otoacoustic-emissions-reveal-an-efficient-auditory-efferent-network
#6
Viorica Marian, Tuan Q Lam, Sayuri Hayakawa, Sumitrajit Dhar
Purpose: Understanding speech often involves processing input from multiple modalities. The availability of visual information may make auditory input less critical for comprehension. This study examines whether the auditory system is sensitive to the presence of complementary sources of input when exerting top-down control over the amplification of speech stimuli. Method: Auditory gain in the cochlea was assessed by monitoring spontaneous otoacoustic emissions (SOAEs), which are by-products of the amplification process...
November 8, 2018: Journal of Speech, Language, and Hearing Research: JSLHR
https://read.qxmd.com/read/30418995/what-accounts-for-individual-differences-in-susceptibility-to-the-mcgurk-effect
#7
Violet A Brown, Maryam Hedayati, Annie Zanger, Sasha Mayn, Lucia Ray, Naseem Dillman-Hasso, Julia F Strand
The McGurk effect is a classic audiovisual speech illusion in which discrepant auditory and visual syllables can lead to a fused percept (e.g., an auditory /bɑ/ paired with a visual /gɑ/ often leads to the perception of /dɑ/). The McGurk effect is robust and easily replicated in pooled group data, but there is tremendous variability in the extent to which individual participants are susceptible to it. In some studies, the rate at which individuals report fusion responses ranges from 0% to 100%. Despite its widespread use in the audiovisual speech perception literature, the roots of the wide variability in McGurk susceptibility are largely unknown...
2018: PloS One
https://read.qxmd.com/read/30404498/leveraging-audiovisual-speech-perception-to-measure-anticipatory-coarticulation
#8
Melissa A Redford, Jeffrey E Kallay, Sergei V Bogdanov, Eric Vatikiotis-Bateson
A noninvasive method for accurately measuring anticipatory coarticulation at experimentally defined temporal locations is introduced. The method leverages work in audiovisual (AV) speech perception to provide a synthetic and robust measure that can be used to inform psycholinguistic theory. In this validation study, speakers were audio-video recorded while producing simple subject-verb-object sentences with contrasting object noun rhymes. Coarticulatory resistance of target noun onsets was manipulated as was metrical context for the determiner that modified the noun...
October 2018: Journal of the Acoustical Society of America
https://read.qxmd.com/read/30391756/modality-independent-recruitment-of-inferior-frontal-cortex-during-speech-processing-in-human-infants
#9
Nicole Altvater-Mackensen, Tobias Grossmann
Despite increasing interest in the development of audiovisual speech perception in infancy, the underlying mechanisms and neural processes are still only poorly understood. In addition to regions in temporal cortex associated with speech processing and multimodal integration, such as superior temporal sulcus, left inferior frontal cortex (IFC) has been suggested to be critically involved in mapping information from different modalities during speech perception. To further illuminate the role of IFC during infant language learning and speech perception, the current study examined the processing of auditory, visual and audiovisual speech in 6-month-old infants using functional near-infrared spectroscopy (fNIRS)...
November 2018: Developmental Cognitive Neuroscience
https://read.qxmd.com/read/30303762/cross-modal-phonetic-encoding-facilitates-the-mcgurk-illusion-and-phonemic-restoration
#10
Noelle M Abbott, Antoine J Shahin
In spoken language, audiovisual (AV) perception occurs when the visual modality influences encoding of acoustic features (e.g., phonetic representations) at the auditory cortex. We examined how visual speech (lip-movements) transforms phonetic representations, indexed by changes to the N1 auditory evoked potential (AEP). EEG was acquired while human subjects watched and listened to videos of a speaker uttering consonant vowel (CV) syllables, /ba/ and /wa/, presented in auditory-only, AV congruent or incongruent contexts, or in a context in which the consonants were replaced by white noise (noise-replaced)...
October 10, 2018: Journal of Neurophysiology
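
The N1 auditory evoked potential used as the index above is conventionally estimated by epoching the continuous EEG around stimulus onsets, baseline-correcting, and averaging across trials. A minimal sketch on synthetic data; the sampling rate, windows, and onset times are assumptions, not the study's settings:

```python
import numpy as np

fs = 1000                               # sampling rate (Hz), assumed
eeg = np.random.randn(60_000)           # 60 s of fake single-channel EEG
onsets = np.arange(1000, 59_000, 1500)  # fake stimulus onset samples

pre, post = 100, 400                    # epoch window: -100 ms to +400 ms
epochs = np.stack([eeg[t - pre : t + post] for t in onsets])
epochs -= epochs[:, :pre].mean(axis=1, keepdims=True)  # baseline-correct
erp = epochs.mean(axis=0)               # average across trials

# The N1 is the negative deflection near 100 ms post-onset; a change in
# its amplitude across AV contexts is what the study indexes.
n1_window = slice(pre + 80, pre + 140)  # ~80-140 ms after onset
print("N1 amplitude estimate:", erp[n1_window].min())
```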
https://read.qxmd.com/read/30262826/multisensory-perception-reflects-individual-differences-in-processing-temporal-correlations
#11
Aaron R Nidiffer, Adele Diederich, Ramnarayan Ramachandran, Mark T Wallace
Sensory signals originating from a single event, such as audiovisual speech, are temporally correlated. Correlated signals are known to facilitate multisensory integration and binding. We sought to further elucidate the nature of this relationship, hypothesizing that multisensory perception will vary with the strength of audiovisual correlation. Human participants detected near-threshold amplitude modulations in auditory and/or visual stimuli. During audiovisual trials, the frequency and phase of auditory modulations were varied, producing signals with a range of correlations...
September 27, 2018: Scientific Reports
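
To make the manipulation above concrete: shifting the phase (or frequency) of an auditory amplitude modulation relative to a visual one sweeps the envelope correlation from +1 through 0 to -1. A brief sketch with assumed parameter values, not the paper's actual stimulus settings:

```python
import numpy as np

fs, dur, f_mod = 1000, 2.0, 4.0       # sampling rate, seconds, mod rate (Hz)
t = np.arange(0, dur, 1 / fs)

# Sinusoidal amplitude-modulation envelopes for the two modalities.
visual_env = 0.5 * (1 + np.sin(2 * np.pi * f_mod * t))
for phase in (0, np.pi / 2, np.pi):
    auditory_env = 0.5 * (1 + np.sin(2 * np.pi * f_mod * t + phase))
    r = np.corrcoef(visual_env, auditory_env)[0, 1]
    print(f"phase offset {phase:.2f} rad -> correlation r = {r:+.2f}")
# 0 rad -> r = +1.00, pi/2 -> r = +0.00, pi -> r = -1.00
```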
https://read.qxmd.com/read/30249803/the-motor-network-reduces-multisensory-illusory-perception
#12
Takenobu Murakami, Mitsunari Abe, Winnugroho Wiratman, Juri Fujiwara, Masahiro Okamoto, Tomomi Mizuochi-Endo, Toshiki Iwabuchi, Michiru Makuuchi, Akira Yamashita, Amanda Tiksnadi, Fang-Yu Chang, Hitoshi Kubo, Nozomu Matsuda, Shunsuke Kobayashi, Satoshi Eifuku, Yoshikazu Ugawa
Observing mouth movements has striking effects on the perception of speech. Any mismatch between sound and mouth movements will result in listeners perceiving illusory consonants (McGurk effect), whereas matching mouth movements assist with the correct recognition of speech sounds. Recent neuroimaging studies have yielded evidence that the motor areas are involved in speech processing, yet their contributions to multisensory illusion remain unclear. Using functional magnetic resonance imaging (fMRI) and transcranial magnetic stimulation (TMS) in an event-related design, we aimed to identify the functional roles of the motor network in the occurrence of multisensory illusion in female and male brains...
September 24, 2018: Journal of Neuroscience: the Official Journal of the Society for Neuroscience
https://read.qxmd.com/read/30131578/single-trial-plasticity-in-evidence-accumulation-underlies-rapid-recalibration-to-asynchronous-audiovisual-speech
#13
David M Simon, Aaron R Nidiffer, Mark T Wallace
Asynchronous arrival of audiovisual information at the peripheral sensory organs is a ubiquitous property of signals in the natural environment due to differences in the propagation time of light and sound. As these cues are constantly changing their distance from the observer, rapid adaptation to asynchronies is crucial for their appropriate integration. We investigated the neural basis of rapid recalibration to asynchronous audiovisual speech in humans using a combination of psychophysics, drift diffusion modeling, and electroencephalography (EEG)...
August 21, 2018: Scientific Reports
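
Drift diffusion modeling, as used in the study above, treats a decision as noisy evidence accumulation toward one of two bounds; single-trial recalibration can then be expressed as trial-to-trial changes in parameters such as the drift rate or starting point. A minimal simulation sketch with illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(0)

def ddm_trial(drift=0.3, bound=1.0, start=0.0, noise=1.0, dt=0.001):
    """Simulate one diffusion trial; return (choice, reaction time in s).
    Evidence x drifts noisily until it hits +bound or -bound."""
    x, t = start, 0.0
    while abs(x) < bound:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x > 0 else 0), t

choices, rts = zip(*(ddm_trial() for _ in range(1000)))
print(f"P(upper bound) = {np.mean(choices):.2f}, "
      f"mean RT = {np.mean(rts):.3f} s")
```

In a recalibration analysis, the key question is how these parameters shift on trial n as a function of the audiovisual asynchrony experienced on trial n-1.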
https://read.qxmd.com/read/30043353/brief-report-differences-in-multisensory-integration-covary-with-sensory-responsiveness-in-children-with-and-without-autism-spectrum-disorder
#14
Jacob I Feldman, Wayne Kuang, Julie G Conrad, Alexander Tu, Pooja Santapuram, David M Simon, Jennifer H Foss-Feig, Leslie D Kwakye, Ryan A Stevenson, Mark T Wallace, Tiffany G Woynaroski
Research shows that children with autism spectrum disorder (ASD) differ in their behavioral patterns of responding to sensory stimuli (i.e., sensory responsiveness) and in various other aspects of sensory functioning relative to typical peers. This study explored relations between measures of sensory responsiveness and multisensory speech perception and integration in children with and without ASD. Participants were 8- to 17-year-old children, 18 with ASD and 18 matched typically developing controls. Participants completed a psychophysical speech perception task, and parents reported on children's sensory responsiveness...
July 24, 2018: Journal of Autism and Developmental Disorders
https://read.qxmd.com/read/29990508/adult-dyslexic-readers-benefit-less-from-visual-input-during-audiovisual-speech-processing-fmri-evidence
#15
Ana A Francisco, Atsuko Takashima, James M McQueen, Mark van den Bunt, Alexandra Jesse, Margriet A Groen
The aim of the present fMRI study was to investigate whether typical and dyslexic adult readers differed in the neural correlates of audiovisual speech processing. We tested for Blood Oxygen-Level Dependent (BOLD) activity differences between these two groups in a 1-back task, as they processed written (word, illegal consonant strings) and spoken (auditory, visual and audiovisual) stimuli. When processing written stimuli, dyslexic readers showed reduced activity in the supramarginal gyrus, a region suggested to play an important role in phonological processing, but only when they processed strings of consonants, not when they read words...
August 2018: Neuropsychologia
https://read.qxmd.com/read/29888819/electrocorticography-reveals-continuous-auditory-and-visual-speech-tracking-in-temporal-and-occipital-cortex
#16
Cristiano Micheli, Inga M Schepers, Müge Ozker, Daniel Yoshor, Michael S Beauchamp, Jochem W Rieger
During natural speech perception, humans must parse temporally continuous auditory and visual speech signals into sequences of words. However, most studies of speech perception present only single words or syllables. We used electrocorticography (subdural electrodes implanted on the brains of epileptic patients) to investigate the neural mechanisms for processing continuous audiovisual speech signals consisting of individual sentences. Using partial correlation analysis, we found that posterior superior temporal gyrus (pSTG) and medial occipital cortex tracked both the auditory and the visual speech envelopes...
June 11, 2018: European Journal of Neuroscience
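
Partial correlation, as applied above, asks whether a neural signal tracks one speech envelope after the variance it shares with the other envelope is removed. A minimal sketch using residualization on synthetic data; this illustrates the statistic only, not the paper's electrocorticography pipeline:

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation between x and y after regressing z out of both."""
    def residual(a, b):
        # Least-squares residual of a after removing b (with intercept).
        design = np.column_stack([b, np.ones_like(b)])
        coef, *_ = np.linalg.lstsq(design, a, rcond=None)
        return a - design @ coef
    rx, ry = residual(x, z), residual(y, z)
    return np.corrcoef(rx, ry)[0, 1]

# Synthetic example: a neural signal driven only by the auditory
# envelope, while the auditory and visual envelopes are correlated.
rng = np.random.default_rng(1)
aud = rng.standard_normal(5000)
vis = 0.7 * aud + 0.3 * rng.standard_normal(5000)
neural = aud + rng.standard_normal(5000)

print(np.corrcoef(neural, vis)[0, 1])  # inflated by shared variance
print(partial_corr(neural, vis, aud))  # near zero once aud is removed
```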
https://read.qxmd.com/read/29867686/visual-speech-perception-cues-constrain-patterns-of-articulatory-variation-and-sound-change
#17
Jonathan Havenhill, Youngah Do
What are the factors that contribute to (or inhibit) diachronic sound change? While acoustically motivated sound changes are well-documented, research on the articulatory and audiovisual-perceptual aspects of sound change is limited. This paper investigates the interaction of articulatory variation and audiovisual speech perception in the Northern Cities Vowel Shift (NCVS), a pattern of sound change observed in the Great Lakes region of the United States. We focus specifically on the maintenance of the contrast between the vowels /ɑ/ and /ɔ/, both of which are fronted as a result of the NCVS...
2018: Frontiers in Psychology
https://read.qxmd.com/read/29867415/audiovisual-temporal-perception-in-aging-the-role-of-multisensory-integration-and-age-related-sensory-loss
#18
REVIEW
Cassandra J Brooks, Yu Man Chan, Andrew J Anderson, Allison M McKendrick
Within each sensory modality, age-related deficits in temporal perception contribute to the difficulties older adults experience when performing everyday tasks. Since perceptual experience is inherently multisensory, older adults also face the added challenge of appropriately integrating or segregating the auditory and visual cues present in our dynamic environment into coherent representations of distinct objects. As such, many studies have investigated how older adults perform when integrating temporal information across audition and vision...
2018: Frontiers in Human Neuroscience
https://read.qxmd.com/read/29862161/shifts-in-audiovisual-processing-in-healthy-aging
#19
Sarah H Baum, Ryan Stevenson
Purpose of Review: The integration of information across sensory modalities into unified percepts is a fundamental sensory process upon which a multitude of cognitive processes are based. We review the body of literature exploring aging-related changes in audiovisual integration published over the last five years. Specifically, we review the impact of changes in temporal processing, the influence of the effectiveness of sensory inputs, the role of working memory, and the newer studies of intra-individual variability during these processes...
September 2017: Current Behavioral Neuroscience Reports
https://read.qxmd.com/read/29780312/combining-behavioral-and-erp-methodologies-to-investigate-the-differences-between-mcgurk-effects-demonstrated-by-cantonese-and-mandarin-speakers
#20
Juan Zhang, Yaxuan Meng, Catherine McBride, Xitao Fan, Zhen Yuan
The present study investigated the impact of Chinese dialects on the McGurk effect using behavioral and event-related potential (ERP) methodologies. Specifically, an intra-language comparison of the McGurk effect was conducted between Mandarin and Cantonese speakers. The behavioral results showed that Cantonese speakers exhibited a stronger McGurk effect in audiovisual speech perception compared to Mandarin speakers, although both groups performed equally in the auditory and visual conditions. ERP results revealed that Cantonese speakers were more sensitive to visual cues than Mandarin speakers, though this was not the case for the auditory cues...
2018: Frontiers in Human Neuroscience

Search Tips

Use Boolean operators: AND/OR

diabetic AND foot
diabetes OR diabetic

Exclude a word using the 'minus' sign

Virchow -triad

Use Parentheses

water AND (cup OR glass)

Add an asterisk (*) at the end of a word to include word stems

Neuro* will search for Neurology, Neuroscientist, Neurological, and so on

Use quotes to search for an exact phrase

"primary prevention of cancer"
Combine operators to build more complex queries

(heart OR cardiac OR cardio*) AND arrest -"American Heart Association"