Read by QxMD

Journal of Phonetics

Samantha Gordon Danner, Adriano Vilela Barbosa, Louis Goldstein
This study presents techniques for quantitatively analyzing coordination and kinematics in multimodal speech using video, audio and electromagnetic articulography (EMA) data. Multimodal speech research has flourished due to recent improvements in technology, yet gesture detection/annotation strategies vary widely, leading to difficulty in generalizing across studies and in advancing this field of research. We describe how FlowAnalyzer software can be used to extract kinematic signals from basic video recordings, and we apply a technique derived from speech kinematics research to detect bodily gestures in these kinematic signals...
November 2018: Journal of Phonetics
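The gesture-detection step described in the abstract above, applied to a kinematic signal, can be sketched as a velocity-threshold segmentation. This is a minimal sketch: the 20% threshold and the finite-difference velocity estimate are common conventions in speech-kinematics work, not parameters taken from the paper.

```python
import numpy as np

def detect_gestures(position, fs, threshold_frac=0.2):
    """Delimit movement intervals in a 1-D kinematic signal.

    A gesture spans the samples where speed exceeds a fraction of the
    signal's peak speed. The 20% threshold is an assumed convention,
    not a value from the paper.
    """
    velocity = np.gradient(position) * fs          # finite-difference velocity
    speed = np.abs(velocity)
    above = speed > threshold_frac * speed.max()
    edges = np.diff(above.astype(int))
    onsets = np.where(edges == 1)[0] + 1           # rising edges
    offsets = np.where(edges == -1)[0] + 1         # falling edges
    if above[0]:                                   # movement already under way
        onsets = np.concatenate([[0], onsets])
    if above[-1]:                                  # movement still under way
        offsets = np.concatenate([offsets, [len(above)]])
    return [(on / fs, off / fs) for on, off in zip(onsets, offsets)]
```

Applied to, say, a trajectory extracted by FlowAnalyzer, this returns (onset, offset) times in seconds for each detected movement.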
Shravan Vasishth, Bruno Nicenboim, Mary E Beckman, Fangfang Li, Eun Jong Kong
This tutorial analyzes voice onset time (VOT) data from Dongbei (Northeastern) Mandarin Chinese and North American English to demonstrate how Bayesian linear mixed models can be fit using the programming language Stan via the R package brms. Through this case study, we demonstrate some of the advantages of the Bayesian framework: researchers can (i) flexibly define the underlying process that they believe to have generated the data; (ii) obtain direct information regarding the uncertainty about the parameter that relates the data to the theoretical question being studied; and (iii) incorporate prior knowledge into the analysis...
November 2018: Journal of Phonetics
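The tutorial above fits full mixed models with Stan via brms in R; as a toy stand-in, advantage (ii), direct uncertainty about a parameter, can be illustrated with a conjugate normal model in plain Python. The VOT values, the prior, and the "known" variance below are all invented for illustration, not the Dongbei Mandarin or English data.

```python
import numpy as np

# Hypothetical VOT measurements (ms) -- invented for illustration.
vot = np.array([72.0, 85.0, 78.0, 90.0, 81.0, 76.0])

sigma2 = 100.0                        # assumed known within-speaker variance (ms^2)
prior_mean, prior_var = 70.0, 400.0   # weakly informative normal prior on the mean

# Conjugate normal-normal update: the posterior over the mean VOT is
# itself normal, giving direct uncertainty about the parameter.
n, xbar = len(vot), vot.mean()
post_var = 1.0 / (1.0 / prior_var + n / sigma2)
post_mean = post_var * (prior_mean / prior_var + n * xbar / sigma2)

# 95% credible interval: a direct probability statement about the mean
ci = (post_mean - 1.96 * np.sqrt(post_var),
      post_mean + 1.96 * np.sqrt(post_var))
```

brms generalizes this beyond the conjugate case via MCMC, with random effects for speakers and items, but the interpretive payoff, a posterior distribution over the parameter of interest, is the same.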
Angela Cooper, Ann Bradlow
The current study investigated the phonetic adjustment mechanisms that underlie perceptual adaptation in first and second language (Dutch-English) listeners by exposing them to a novel English accent containing controlled deviations from the standard accent (e.g. /i/-to-/ɪ/ yielding /krɪm/ instead of /krim/ for 'cream'). These deviations involved contrasts that either were or were not contrastive in Dutch. Following accent exposure with disambiguating feedback, listeners completed lexical decision and word identification tasks...
May 2018: Journal of Phonetics
D H Whalen, Wei-Rong Chen, Mark K Tiede, Hosung Nam
Speech, though communicative, is quite variable both in articulation and acoustics, and it has often been claimed that articulation is more variable. Here we compared variability in articulation and acoustics for 32 speakers in the X-ray microbeam database (XRMB; Westbury, 1994). Variability in tongue, lip and jaw positions for nine English vowels (/u, ʊ, æ, ɑ, ʌ, ɔ, ε, ɪ, i/) was compared to that of the corresponding formant values. The domains were made comparable by creating three-dimensional spaces for each: the first three principal components from an analysis of a 14-dimensional space for articulation, and an F1 × F2 × F3 space for acoustics...
May 2018: Journal of Phonetics
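The dimension-matching step described above, reducing a 14-dimensional articulatory space to its first three principal components for comparison with an F1 × F2 × F3 space, can be sketched with an SVD-based PCA. The data here are random stand-ins, not the XRMB measurements, and the dispersion measure is one plausible choice, not necessarily the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
# Random stand-ins for the real data: 200 tokens x 14 articulator
# coordinates, and 200 tokens x 3 formants (F1, F2, F3).
artic = rng.normal(size=(200, 14))
formants = rng.normal(size=(200, 3))

def top3_pcs(X):
    """Project data onto its first three principal components (via SVD)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:3].T

def dispersion(X):
    """Mean Euclidean distance of tokens from their centroid --
    one plausible per-vowel variability measure."""
    return np.linalg.norm(X - X.mean(axis=0), axis=1).mean()

artic3 = top3_pcs(artic)     # 3-D articulatory space
acoustic3 = formants         # F1 x F2 x F3 space
```

Comparing dispersions across the two domains sensibly requires normalizing each space first (e.g. z-scoring each dimension); the paper's actual normalization may differ.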
Arthur S Abramson, D H Whalen
Just over fifty years ago, Lisker and Abramson proposed a straightforward measure of acoustic differences among stop consonants of different voicing categories, voice onset time (VOT). Since that time, hundreds of studies have used this method. Here, we review the original definition of VOT, propose some extensions to the definition, and discuss some problematic cases. We propose a set of terms for the most important aspects of VOT and a set of Praat labels that could provide some consistency for future cross-study analyses...
July 2017: Journal of Phonetics
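The core measure reviewed above is straightforward to operationalize: VOT is the signed interval from stop release to voicing onset. A minimal sketch follows; the 30 ms short-lag cutoff in the three-way split is an illustrative convention, not a boundary proposed in the paper.

```python
def voice_onset_time(burst_release_s, voicing_onset_s):
    """VOT per Lisker & Abramson: the interval from stop release to
    the onset of voicing. Negative values = voicing lead (prevoiced
    stops); positive values = voicing lag (e.g. aspirated stops)."""
    return voicing_onset_s - burst_release_s

def vot_category(vot_s, short_lag_max_s=0.030):
    """Crude three-way categorization. The 30 ms short-lag cutoff is
    an assumed illustrative value, not taken from the paper."""
    if vot_s < 0:
        return "voicing lead"
    return "short lag" if vot_s <= short_lag_max_s else "long lag"
```

The problematic cases the paper discusses (e.g. where release or voicing onset is hard to locate in the signal) are exactly where such a simple subtraction needs the extended definitions and consistent labeling conventions the authors propose.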
Melissa A Redford, Grace E Oh
The early acquisition of language-specific temporal patterns relative to the late development of speech motor control suggests a dissociation between the representation and execution of articulatory timing. The current study tested for such a dissociation in first and second language acquisition. American English-speaking children (5- and 8-year-olds) and Korean-speaking adult learners of English repeatedly produced real English words in a simple carrier sentence. The words were designed to elicit different language-specific vowel length contrasts...
July 2017: Journal of Phonetics
Eun Jong Kong, Jan Edwards
This study examined individual differences in categorical perception and the use of multiple acoustic cues in the perception of the stop voicing contrast. Goals were to investigate whether gradiency of speech perception was related to listeners' differential sensitivity to acoustic cues and to individual differences in executive function. The experiment included two speech perception tasks (visual analogue scaling [VAS] and anticipatory eye movement [AEM]) administered to 30 English-speaking adults in two separate experimental sessions...
November 2016: Journal of Phonetics
Cynthia G Clopper, Terrin N Tamati, Janet B Pierrehumbert
Lexical processing is slower and less accurate for unfamiliar dialects than familiar dialects. The goal of the current study was to test the hypothesis that dialect differences in lexical processing reflect differences in lexical encoding strength across dialects. Lexical encoding (i.e., updating the cognitive lexical representation to reflect the current token) was distinguished from lexical recognition (i.e., mapping the incoming acoustic signal to the target lexical category) in a series of lexical processing tasks with Midland and Northern American English...
September 2016: Journal of Phonetics
Aaron D Mitchel, Chip Gerfen, Daniel J Weiss
One challenge for speech perception is between-speaker variability in the acoustic parameters of speech. For example, the same phoneme (e.g. the vowel in "cat") may have substantially different acoustic properties when produced by two different speakers and yet the listener must be able to interpret these disparate stimuli as equivalent. Perceptual tuning, the use of contextual information to adjust phonemic representations, may be one mechanism that helps listeners overcome obstacles they face due to this variability during speech perception...
May 2016: Journal of Phonetics
James W Dias, Theresa C Cook, Lawrence D Rosenblum
Research suggests that selective adaptation in speech is a low-level process dependent on sensory-specific information shared between the adaptor and test-stimuli. However, previous research has only examined how adaptors shift perception of unimodal test stimuli, either auditory or visual. In the current series of experiments, we investigated whether adaptation to cross-sensory phonetic information can influence perception of integrated audio-visual phonetic information. We examined how selective adaptation to audio and visual adaptors shifts perception of speech along an audiovisual test continuum...
May 2016: Journal of Phonetics
Stefan A Frisch, Sylvie M Wodzinski
Velar-vowel coarticulation in English, resulting in so-called velar fronting in front vowel contexts, was studied using ultrasound imaging of the tongue during /k/ onsets of monosyllabic words with no coda or a labial coda. Ten native English speakers were recorded and analyzed. A variety of coarticulation patterns was found, often involving small differences in typical closure location for similar vowels. An account of the coarticulation pattern is provided using a virtual target model of stop consonant production in which there are two /k/ allophones in English, one for front vowels and one for non-front vowels...
May 2016: Journal of Phonetics
Anne Pycha, Delphine Dahan
We investigate the hypothesis that duration and spectral differences in vowels before voiceless versus voiced codas originate from a single source, namely the reorganization of articulatory gestures relative to one another in time. As a test case, we examine the American English diphthong /aɪ/, in which the acoustic manifestations of the nucleus /a/ and offglide /ɪ/ gestures are relatively easy to identify, and we use the ratio of nucleus-to-offglide duration as an index of the temporal distance between these gestures...
May 2016: Journal of Phonetics
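The nucleus-to-offglide duration ratio used above as an index of temporal distance between gestures is a simple computation; here is a sketch with invented durations for a hypothetical voiced- versus voiceless-coda pair such as "ride" / "write" (illustrative numbers, not the study's measurements).

```python
def nucleus_offglide_ratio(nucleus_dur_s, offglide_dur_s):
    """Ratio of /a/ nucleus duration to /ɪ/ offglide duration in /aɪ/,
    used in the study as an index of the temporal distance between
    the two vocalic gestures."""
    return nucleus_dur_s / offglide_dur_s

# Invented durations (s) -- illustrative only.
ratio_voiced = nucleus_offglide_ratio(0.120, 0.060)     # e.g. "ride"
ratio_voiceless = nucleus_offglide_ratio(0.080, 0.080)  # e.g. "write"
```

On the single-source hypothesis, both the duration and the spectral differences before voiced versus voiceless codas would fall out of this one ratio shifting, rather than from two independent cues.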
Argyro Katsika
This study aims at examining and accounting for the scope of the temporal effect of phrase boundaries. Previous research has indicated that there is an interaction between boundary-related lengthening and prominence such that the former extends towards the nearby prominent syllable. However, it is unclear whether this interaction is due to lexical stress and/or phrasal prominence (marked by pitch accent) and how far towards the prominent syllable the effect extends. Here, we use an electromagnetic articulography (EMA) study of Greek to examine the scope of boundary-related lengthening as a function of lexical stress and pitch accent separately...
March 2016: Journal of Phonetics
Andrew R Plummer, Mary E Beckman
Moulin-Frier et al. (2016) proffer a conceptual framework and computational modeling architecture for the investigation of the emergence of phonological universals for spoken languages. They validate the framework and architecture by testing to see whether universals such as the prevalence of triangular vowel systems that show adequate dispersion in the F1-F2-F3 space can fall out of simulations of referential communication between social agents, without building principles such as dispersion directly into the model...
November 1, 2015: Journal of Phonetics
Melissa A Redford
Speaking is an intentional activity. It is also a complex motor skill; one that exhibits protracted development and the fully automatic character of an overlearned behavior. Together these observations suggest an analogy with skilled behavior in the non-language domain. This analogy is used here to argue for a model of production that is grounded in the activity of speaking and structured during language acquisition. The focus is on the plan that controls the execution of fluent speech; specifically, on the units that are activated during the production of an intonational phrase...
November 1, 2015: Journal of Phonetics
Jessamyn Schertz, Taehong Cho, Andrew Lotto, Natasha Warner
The current work examines native Korean speakers' perception and production of stop contrasts in their native language (L1, Korean) and second language (L2, English), focusing on three acoustic dimensions that are all used, albeit to different extents, in both languages: voice onset time (VOT), f0 at vowel onset, and closure duration. Participants used all three cues to distinguish the L1 Korean three-way stop distinction in both production and perception. Speakers' productions of the L2 English contrasts were reliably distinguished using both VOT and f0 (even though f0 is only a very weak cue to the English contrast), and, to a lesser extent, closure duration...
September 1, 2015: Journal of Phonetics
Daniel Fogerty
The present study investigated how non-linguistic, indexical information about talker identity interacts with contributions to sentence intelligibility by the time-varying amplitude (temporal envelope) and fundamental frequency (F0). Young normal-hearing adults listened to sentences that preserved the original consonants but replaced the vowels with a single vowel production. This replacement vowel selectively preserved amplitude or F0 cues of the original vowel, but replaced cues to phonetic identity. Original vowel duration was always preserved...
September 2015: Journal of Phonetics
Erik C Tracy, Sierra A Bainter, Nicholas P Satariano
While numerous studies have demonstrated that a male speaker's sexual orientation can be identified from relatively long passages of speech, few studies have evaluated whether listeners can determine sexual orientation when presented with word-length stimuli. If listeners are able to distinguish between self-identified gay and heterosexual male speakers of American English, it is unclear whether they form their judgments based on a phoneme, such as a vowel or consonant, or multiple phonemes, such as a vowel and a consonant...
September 2015: Journal of Phonetics
Bozena Pajak, Roger Levy
The end result of perceptual reorganization in infancy is currently viewed as a reconfigured perceptual space, "warped" around native-language phonetic categories, which then acts as a direct perceptual filter on any non-native sounds: naïve-listener discrimination of non-native sounds is determined by their mapping onto native-language phonetic categories that are acoustically/articulatorily most similar. We report results that suggest another factor in non-native speech perception: some perceptual sensitivities cannot be attributed to listeners' warped perceptual space alone, but rather to enhanced general sensitivity along phonetic dimensions that the listeners' native language employs to distinguish between categories...
September 1, 2014: Journal of Phonetics
Eva Reinisch, David R Wozny, Holger Mitterer, Lori L Holt
Listeners use lexical or visual context information to recalibrate auditory speech perception. After hearing an ambiguous auditory stimulus between /aba/ and /ada/ coupled with a clear visual stimulus (e.g., lip closure in /aba/), an ambiguous auditory-only stimulus is perceived in line with the previously seen visual stimulus. What remains unclear, however, is what exactly listeners are recalibrating: phonemes, phone sequences, or acoustic cues. To address this question we tested generalization of visually-guided auditory recalibration to 1) the same phoneme contrast cued differently (i...
July 1, 2014: Journal of Phonetics