Read by QxMD

Trends in Hearing

Abigail A Kressner, Tobias May, Torsten Dau
It has been suggested that the most important factor for obtaining high speech intelligibility in noise with cochlear implant (CI) recipients is to preserve the low-frequency amplitude modulations of speech across time and frequency by, for example, minimizing the amount of noise in the gaps between speech segments. In contrast, it has also been argued that the transient parts of the speech signal, such as speech onsets, provide the most important information for speech intelligibility. The present study investigated the relative impact of these two factors on the potential benefit of noise reduction for CI recipients by systematically introducing noise estimation errors within speech segments, speech gaps, and the transitions between them...
January 2019: Trends in Hearing
Dmitry I Nechaev, Olga N Milekhina, Alexander Ya Supin
Rippled-spectrum stimuli are used to evaluate the resolution of the spectro-temporal structure of sounds. Measurements of spectrum-pattern resolution imply the discrimination between the test and reference stimuli. Therefore, estimates of rippled-pattern resolution could depend on both the test stimulus and the reference stimulus type. In this study, the ripple-density resolution was measured using combinations of two test stimuli and two reference stimuli. The test stimuli were rippled-spectrum signals with constant phase or rippled-spectrum signals with ripple-phase reversals...
January 2019: Trends in Hearing
Min Zhang, Yu-Lan Mary Ying, Antje Ihlefeld
Informational masking (IM) can greatly reduce speech intelligibility, but the neural mechanisms underlying IM are not understood. Binaural differences between target and masker can improve speech perception. In general, improvement in masked speech intelligibility due to provision of spatial cues is called spatial release from masking. Here, we focused on an aspect of spatial release from masking, specifically, the role of spatial attention. We hypothesized that in a situation with IM background sound (a) attention to speech recruits lateral frontal cortex (LFCx) and (b) LFCx activity varies with direction of spatial attention...
January 2018: Trends in Hearing
Yang-Wenyi Liu, Xiaoting Cheng, Bing Chen, Kevin Peng, Akira Ishiyama, Qian-Jie Fu
Patients with single-sided deafness (SSD) often experience poor sound localization, reduced speech understanding in noise, reduced quality of life, and tinnitus. The present study aimed to evaluate the effects of tinnitus and duration of deafness on sound localization and speech recognition in noise in SSD subjects. Sound localization and speech recognition in noise were measured in 26 SSD and 10 normal-hearing (NH) subjects. Speech was always presented directly in front of the listener. Noise was presented to the deaf ear, in front of the listener, or to the better hearing ear...
January 2018: Trends in Hearing
Deborah A Hall, Harriet Smith, Alice Hibbert, Veronica Colley, Haúla F Haider, Adele Horobin, Alain Londero, Birgit Mazurek, Brian Thacker, Kathryn Fackrell
Subjective tinnitus is a chronic heterogeneous condition that is typically managed using intervention approaches based on sound devices, psychologically informed therapies, or pharmaceutical products. For clinical trials, there are currently no common standards for assessing or reporting intervention efficacy. This article reports on the first of two steps to establish a common standard, which identifies what specific tinnitus-related complaints ("outcome domains") are critical and important to assess in all clinical trials to determine whether an intervention has worked...
January 2018: Trends in Hearing
Lindsay DeVries, Julie G Arenberg
Speech understanding abilities are highly variable among cochlear implant (CI) listeners. Poor electrode-neuron interfaces (ENIs) caused by sparse neural survival or distant electrode placement may lead to increased channel interaction and reduced speech perception. Currently, it is not possible to directly measure neural survival in CI listeners; therefore, obtaining information about electrode position is an alternative approach to assessing ENIs. This information can be estimated with computerized tomography (CT) imaging; however, postoperative CT imaging is not often available...
January 2018: Trends in Hearing
Snezana A Filipović, Mark P Haggard, Helen Spencer, Goran Trajković
In children with normal cochlear acuity, middle ear fluid often abolishes otoacoustic emissions (OAEs), and negative middle ear pressure (NMEP) reduces them. No convincing evidence of beneficial pressure compensation on distortion product OAE (DPOAE) has yet been presented. Two studies aimed to document effects of NMEP on transient OAE (TEOAE) and DPOAE. In Study 1, TEOAE and DPOAE pass/fail responses were analyzed before and after pressure compensation in 50 consecutive qualifying referrals having NMEP from -100 to -299 daPa...
January 2018: Trends in Hearing
Erin M Picou, Gabrielle H Buono
The purpose of this study was to evaluate the relationship between emotional responses to sounds, hearing acuity, and isolation, specifically objective isolation (social disconnectedness) and subjective isolation (loneliness). It was predicted that ratings of valence in response to pleasant and unpleasant stimuli would influence the relationship between hearing loss and isolation. Participants included 83 adults, without depression, who were categorized into three groups (young with normal hearing, older with normal hearing, and adults with mild-to-moderately severe hearing loss)...
January 2018: Trends in Hearing
Thomas Koelewijn, José A P van Haastrecht, Sophia E Kramer
Previous research has shown the effects of task demands on pupil responses in both normal hearing (NH) and hearing impaired (HI) adults. One consistent finding is that HI listeners have smaller pupil dilations at low levels of speech recognition performance (≤50%). This study aimed to examine the pupil dilation in adults with a normal pure-tone audiogram who experience serious difficulties when processing speech-in-noise. Hence, 20 adults, aged 26 to 62 years, with traumatic brain injury (TBI) or cerebrovascular accident (CVA) but with a normal audiogram participated...
January 2018: Trends in Hearing
Emily J Watts, Kathryn Fackrell, Sandra Smith, Jacqueline Sheldrake, Haúla Haider, Derek J Hoare
Tinnitus is a prevalent complaint, and people with bothersome tinnitus can report any number of associated problems. Yet, to date, only a few studies, with different populations and relatively modest sample sizes, have qualitatively evaluated what those problems are. Our primary objective was to determine the domains of tinnitus problems according to a large clinical data set. This was a retrospective analysis of anonymized clinical data from patients who attended a U.K. Tinnitus Treatment Center between 1989 and 2014...
January 2018: Trends in Hearing
Maike A S Tahden, Anja Gieseler, Markus Meis, Kirsten C Wagener, Hans Colonius
The aim of this study was to compare elderly individuals who are hearing impaired but inexperienced in using hearing aids (hearing aid non-users; HA-NU) with their aided counterparts (hearing aid users; HA-U) across various auditory and non-auditory measures in order to identify differences that might be associated with the low hearing aid uptake rate. We drew data from 72 HA-NU and 139 HA-U individuals with mild-to-moderate hearing loss and matched these two groups on degree of hearing impairment, age, and sex...
January 2018: Trends in Hearing
Raul Sanchez Lopez, Federica Bianchi, Michal Fereczkowski, Sébastien Santurette, Torsten Dau
Pure-tone audiometry still represents the main measure to characterize individual hearing loss and the basis for hearing-aid fitting. However, the perceptual consequences of hearing loss are typically associated not only with a loss of sensitivity but also with a loss of clarity that is not captured by the audiogram. A detailed characterization of a hearing loss may be complex and needs to be simplified to efficiently explore the specific compensation needs of the individual listener. Here, it is hypothesized that any listener's hearing profile can be characterized along two dimensions of distortion: Type I and Type II...
January 2018: Trends in Hearing
Chantel Ritter, Tara Vongpaisal
For cochlear implant (CI) users, degraded spectral input hampers the understanding of prosodic vocal emotion, especially in difficult listening conditions. Using a vocoder simulation of CI hearing, we examined the extent to which informative multimodal cues in a talker's spoken expressions improve normal hearing (NH) adults' speech and emotion perception under different levels of spectral degradation (two, three, four, and eight spectral bands). Participants repeated the words verbatim and identified emotions (among four alternative options: happy, sad, angry, and neutral) in meaningful sentences that were semantically congruent with the expression of the intended emotion...
January 2018: Trends in Hearing
Natalia Stupak, Monica Padilla, Robert P Morse, David M Landsberger
Cochlear-implant users who have experienced both analog and pulsatile sound coding strategies often have strong preferences for the sound quality of one over the other. This suggests that analog and pulsatile stimulation may provide different information or sound quality to an implant listener. It has been well documented that many implant listeners both prefer and perform better with multichannel analog than multichannel pulsatile strategies, although the reasons for these differences remain unknown. Here, we examine the perceptual differences between analog and pulsatile stimulation on a single electrode...
January 2018: Trends in Hearing
Matthew B Winn, Ashley N Moore
Contextual cues can be used to improve speech recognition, especially for people with hearing impairment. However, previous work has suggested that when the auditory signal is degraded, context might be used more slowly than when the signal is clear. This potentially puts the hearing-impaired listener in a dilemma of continuing to process the last sentence when the next sentence has already begun. This study measured the time course of the benefit of context using pupillary responses to high- and low-context sentences that were followed by silence or various auditory distractors (babble noise, ignored digits, or attended digits)...
January 2018: Trends in Hearing
Maarten van Beurden, Monique Boymans, Mirjam van Geleuken, Dirk Oetting, Birger Kollmeier, Wouter A Dreschler
Aversiveness of loud sounds is a frequent complaint by hearing aid users, especially when fitted bilaterally. This study investigates whether loudness summation can be held responsible for this finding. Two aspects of loudness summation should be taken into account: spectral loudness summation for broadband signals and binaural loudness summation for signals that are presented binaurally. In this study, the effect of different symmetrical hearing losses was studied. Measurements were obtained with the widely used technique of Adaptive Categorical Loudness Scaling...
January 2018: Trends in Hearing
Virginia Best, Jayaganesh Swaminathan, Norbert Kopčo, Elin Roverud, Barbara Shinn-Cunningham
The perception of simple auditory mixtures is known to evolve over time. For instance, a common example of this is the "buildup" of stream segregation that is observed for sequences of tones alternating in pitch. Yet very little is known about how the perception of more complicated auditory scenes, such as multitalker mixtures, changes over time. Previous data are consistent with the idea that the ability to segregate a target talker from competing sounds improves rapidly when stable cues are available, which leads to improvements in speech intelligibility...
January 2018: Trends in Hearing
Stephen C Rowland, Douglas E H Hartley, Ian M Wiggins
Listening to speech in the noisy conditions of everyday life can be effortful, reflecting the increased cognitive workload involved in extracting meaning from a degraded acoustic signal. Studying the underlying neural processes has the potential to provide mechanistic insight into why listening is effortful under certain conditions. In a move toward studying listening effort under ecologically relevant conditions, we used the silent and flexible neuroimaging technique functional near-infrared spectroscopy (fNIRS) to examine brain activity during attentive listening to speech in naturalistic scenes...
January 2018: Trends in Hearing
Maaike Van Eeckhoutte, Dimitar Spirrov, Jan Wouters, Tom Francart
In Part I, we investigated 40-Hz auditory steady-state response (ASSR) amplitudes for the use of objective loudness balancing across the ears for normal-hearing participants and found median across-ear ratios in ASSR amplitudes close to 1. In this part, we further investigated whether the ASSR can be used to estimate binaural loudness balance for listeners with asymmetric hearing, for whom binaural loudness balancing is of particular interest. We tested participants with asymmetric hearing and participants with bimodal hearing, who hear with electrical stimulation through a cochlear implant (CI) in one ear and with acoustical stimulation in the other ear...
January 2018: Trends in Hearing
Jacques A Grange, John F Culling, Barry Bardsley, Laura I Mackinney, Sarah E Hughes, Steven S Backhouse
Turning an ear toward the talker can enhance spatial release from masking. Here, with their head free, listeners attended to speech at a gradually diminishing signal-to-noise ratio and with the noise source azimuthally separated from the speech source by 180° or 90°. Young normal-hearing adult listeners spontaneously turned an ear toward the speech source in 64% of audio-only trials, but a visible talker's face or cochlear implant (CI) use significantly reduced this head-turn behavior. All listener groups made more head movements once instructed to explore the potential benefit of head turns and followed the speech to lower signal-to-noise ratios...
January 2018: Trends in Hearing
