Read by QxMD

Visual Cognition

Xiaoli Zhang, Julie D Golomb
The image on our retina changes every time we make an eye movement. To maintain visual stability after saccades, and specifically to locate visual targets, we may use nontarget objects as "landmarks". In the current study, we compared how the presence of nontargets affects target localization after saccades and during sustained fixation. Participants fixated a target object, which either maintained its location on the screen (sustained-fixation trials) or was displaced to trigger a saccade (saccade trials)...
2018: Visual Cognition
Jonathan P Batten, Tim J Smith
Music has been shown to entrain movement. Saccades, among the body's most frequent movements, are arguably subject to a timer that may also be susceptible to musical entrainment. We developed a continuous and highly controlled visual search task and varied the timing of the search target presentation: it was either gaze-contingent, tap-contingent, or visually timed. We found: (1) explicit control of saccadic timing is limited to gross duration variations and is imprecisely synchronized; (2) saccadic timing does not implicitly entrain to musical beats, even when closely aligned in phase; (3) eye movements predict visual onsets produced by motor movements (finger-taps) and externally timed sequences, beginning fixation prior to visual onset; (4) eye movement timing can be rhythmic, synchronizing to both motor-produced and externally timed visual sequences, each unaffected by musical beats...
2018: Visual Cognition
Mathias Benedek, David Daxberger, Sonja Annerer-Walcher, Jonathan Smallwood
Facial cues provide information about affective states and the direction of attention that is important for human social interaction. The present study examined how this capacity extends to judging whether attention is internally or externally directed. Participants evaluated a set of videos and images showing the faces of people focused externally on a task, or internally while they performed a task in imagination. We found that participants could identify the focus of attention above chance in videos, and to a lesser degree in static images, but only when the eye region was visible...
2018: Visual Cognition
Kara J Blacker, Steven M Weisberg, Nora S Newcombe, Susan M Courtney
Spatial working memory (WM) seems to include two types of spatial information, locations and relations. However, this distinction has been based on small-scale tasks. Here, we used a virtual navigation paradigm to examine whether WM for locations and relations applies to the large-scale spatial world. We found that navigators who successfully learned two routes and also integrated them were superior at maintaining multiple locations and multiple relations in WM. However, over the entire spectrum of navigators, WM for spatial relations, but not locations, was specifically predictive of route integration performance...
2017: Visual Cognition
Nonie J Finlayson, Julie D Golomb
Visual cognition in our 3D world requires understanding how we accurately localize objects in 2D and depth, and what influence both types of location information have on visual processing. Spatial location is known to play a special role in visual processing, but most of these findings have focused on the special role of 2D location. One such phenomenon is the spatial congruency bias (Golomb, Kupitz, & Thiemann, 2014), where 2D location biases judgments of object features but features do not bias location judgments...
2017: Visual Cognition
Elliot Collins, Eva Dundas, Yafit Gabay, David C Plaut, Marlene Behrmann
A recent theoretical account posits that, during the acquisition of word recognition in childhood, the pressure to couple visual and language representations in the left hemisphere (LH) results in competition with the LH representation of faces, which consequently becomes largely, albeit not exclusively, lateralized to the right hemisphere (RH). We explore predictions from this hypothesis using a hemifield behavioral paradigm with words and faces as stimuli, with concurrent ERP measurement, in a group of adults with developmental dyslexia (DD) or with congenital prosopagnosia (CP), and matched control participants...
2017: Visual Cognition
Sven Panis, Katrien Torfs, Celine R Gillebert, Johan Wagemans, Glyn W Humphreys
Multiple accounts have been proposed to explain category-specific recognition impairments. Some suggest that category-specific deficits may be caused by a deficit in recurrent processing between the levels of a hierarchically organized visual object recognition system. Here, we tested predictions of interactive processing theories on the emergence of category-selective naming deficits in neurologically intact observers and in patient GA, a single case showing a category-specific impairment for natural objects after a herpes simplex encephalitis infection...
2017: Visual Cognition
Jordana S Wynn, Michael B Bone, Michelle C Dragan, Kari L Hoffman, Bradley R Buchsbaum, Jennifer D Ryan
Visual search efficiency improves with repetition of a search display, yet the mechanisms behind these processing gains remain unclear. According to Scanpath Theory, memory retrieval is mediated by repetition of the pattern of eye movements or "scanpath" elicited during stimulus encoding. Using this framework, we tested the prediction that scanpath recapitulation reflects relational memory guidance during repeated search events. Younger and older subjects were instructed to find changing targets within flickering naturalistic scenes...
January 2, 2016: Visual Cognition
Andrea Bocincova, Amanda E van Lamsweerde, Jeffrey S Johnson
There is considerable debate regarding the ability to trade mnemonic precision for capacity in working memory (WM), with some studies reporting evidence consistent with such a trade-off and others suggesting it may not be possible. The majority of studies addressing this question have utilized a standard approach to analyzing continuous recall data in which individual-subject data from each experimental condition is fitted with a probabilistic model of choice. Estimated parameter values related to different aspects of WM (e...
2016: Visual Cognition
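The analysis approach this abstract alludes to is commonly implemented by fitting response errors with a two-component mixture: a von Mises ("in memory") component plus a uniform ("guessing") component, in the style of Zhang and Luck (2008). The sketch below is a minimal, hypothetical illustration of that standard approach using a grid search in plain NumPy; the paper's actual model, parameters, and fitting procedure may differ, and the grid ranges here are illustrative assumptions.

```python
import numpy as np

def fit_mixture(errors):
    """Fit p * VonMises(0, kappa) + (1 - p) * Uniform(-pi, pi)
    to continuous-recall errors (in radians) by maximum likelihood
    over a coarse parameter grid. Returns (p, kappa)."""
    ps = np.linspace(0.01, 0.99, 99)      # probability item is in memory
    kappas = np.logspace(-1, 2, 120)      # concentration (mnemonic precision)
    best = (np.inf, None, None)
    for kappa in kappas:
        # von Mises density centered on zero error
        vm = np.exp(kappa * np.cos(errors)) / (2 * np.pi * np.i0(kappa))
        for p in ps:
            like = p * vm + (1 - p) / (2 * np.pi)
            nll = -np.sum(np.log(like))   # negative log-likelihood
            if nll < best[0]:
                best = (nll, p, kappa)
    return best[1], best[2]

# Simulate a subject who stores ~70% of items with precision kappa = 10
rng = np.random.default_rng(0)
mem = rng.vonmises(0.0, 10.0, size=1400)      # remembered responses
guess = rng.uniform(-np.pi, np.pi, size=600)  # random guesses
p_hat, kappa_hat = fit_mixture(np.concatenate([mem, guess]))
```

With simulated data like this, the recovered `p_hat` and `kappa_hat` land near the generating values, which is the sense in which such parameters are taken to index capacity and precision as separate aspects of WM.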
Carl Erick Hagmann, Mary C Potter
Humans can detect target color pictures of scenes depicting concepts like picnic or harbor in sequences of six or twelve pictures presented as briefly as 13 ms, even when the target is named after the sequence (Potter, Wyble, Hagmann, & McCourt, 2014). Such rapid detection suggests that feedforward processing alone enabled detection without recurrent cortical feedback. There is debate about whether coarse, global, low spatial frequencies (LSFs) provide predictive information to high cortical levels through the rapid magnocellular (M) projection of the visual path, enabling top-down prediction of possible object identities...
2016: Visual Cognition
Jianhong Shen, Thomas J Palmeri
Recent years have seen growing interest in understanding, characterizing, and explaining individual differences in visual cognition. We focus here on individual differences in visual categorization. Categorization is the fundamental visual ability to group different objects together as the same kind of thing. Research on visual categorization and category learning has been significantly informed by computational modeling, so our review will focus both on how formal models of visual categorization have captured individual differences and how individual differences have informed the development of formal models...
2016: Visual Cognition
Brett C Bays, Nicholas B Turk-Browne, Aaron R Seitz
Statistical learning refers to the extraction of probabilistic relationships between stimuli and is increasingly used as a method to understand learning processes. However, numerous cognitive processes are sensitive to the statistical relationships between stimuli and any one measure of learning may conflate these processes; to date little research has focused on differentiating these processes. To understand how multiple processes underlie statistical learning, here we compared, within the same study, operational measures of learning from different tasks that may be differentially sensitive to these processes...
2016: Visual Cognition
Alexander J Kirkham, Steven P Tipper
In spatial compatibility tasks, when the spatial location of a stimulus is irrelevant, it nevertheless interferes when a response is required in a different spatial location. For example, a response with a left key-press is slowed when the stimulus is presented to the right as compared to the left side of a computer screen. However, in some conditions this interference effect is not detected in reaction time (RT) measures. It is typically assumed that the lack of effect means the irrelevant spatial code was not analysed or that the information rapidly decayed before response...
September 14, 2015: Visual Cognition
Henryk Bukowski, Jari K Hietanen, Dana Samson
Two paradigms have shown that people automatically compute what or where another person is looking at. In the visual perspective-taking paradigm, participants judge how many objects they see, whereas in the gaze cueing paradigm participants identify a target. Unlike in the former task, in the latter task, the influence of what or where the other person is looking at is only observed when the other person is presented alone before the task-relevant objects. We show that this discrepancy across the two paradigms is not due to differences in visual settings (Experiment 1) or available time to extract the directional information (Experiment 2), but that it is caused by how attention is deployed in response to task instructions (Experiment 3)...
September 14, 2015: Visual Cognition
Andrew K Mackenzie, Julie M Harris
Differences in eye movement patterns are often found when comparing passive viewing paradigms to actively engaging in everyday tasks. Arguably, investigations into visuomotor control should therefore be most useful when conducted in settings that incorporate the intrinsic link between vision and action. We present a study that compares oculomotor behaviour and hazard reaction times across a simulated driving task and a comparable, but passive, video-based hazard perception task. We found that participants scanned the road less during the active driving task and fixated closer to the front of the vehicle...
July 3, 2015: Visual Cognition
Francesco Marini, Berry van den Berg, Marty G Woldorff
When attending for impending visual stimuli, cognitive systems prepare to identify relevant information while ignoring irrelevant, potentially distracting input. Recent work (Marini et al., 2013) showed that a supramodal distracter-filtering mechanism is invoked in blocked designs involving expectation of possible distracter stimuli, although this entails a cost (distraction-filtering cost) on speeded performance when distracters are expected but not presented. Here we used an arrow-flanker task to study whether an analogous cost, potentially reflecting the recruitment of a specific distraction-filtering mechanism, occurs dynamically when potential distraction is cued trial-to-trial (cued distracter-expectation cost)...
February 1, 2015: Visual Cognition
Amandine Lassalle, Roxane J Itier
Recent gaze cueing studies using dynamic cue sequences have reported increased attention orienting by gaze with faces expressing fear, surprise or anger. Here, we investigated whether the type of dynamic cue sequence used impacted the magnitude of this effect. When the emotion was expressed before or concurrently with gaze shift, no modulation of gaze-oriented attention by emotion was seen. In contrast, when the face cue averted gaze before expressing an emotion (as if reacting to the object after first localizing it), the gaze orienting effect was clearly increased for fearful, surprised and angry faces compared to neutral faces...
January 1, 2015: Visual Cognition
Tashina Graves, Howard E Egeth
When participants search for a shape (e.g., a circle) among a set of homogenous shapes (e.g., triangles) they are subject to distraction by color singletons that are more salient than the target. However, when participants search for a shape among heterogeneous shapes, the presence of a non-target color singleton does not slow responses to the target. Attempts have been made to explain these results from both bottom-up and top-down perspectives. What both accounts have in common is that they do not predict the occurrence of attentional capture on typical feature search displays...
2015: Visual Cognition
Daniel A Gajewski, Courtney P Wallin, John W Philbeck
Angular direction is a source of information about the distance to floor-level objects that can be extracted from brief glimpses (near one's threshold for detection). Age and set size are two factors known to impact the viewing time needed to directionally localize an object, and these were posited to similarly govern the extraction of distance. The question here was whether viewing durations sufficient to support object detection (controlled for age and set size) would also be sufficient to support well-constrained judgments of distance...
2015: Visual Cognition
David A Ross, Isabel Gauthier
Holistic processing is a hallmark of face processing. There is evidence that holistic processing is strongest for faces at identification distance, 2-10 meters from the observer. However, this evidence is based on tasks that have been little used in the literature and that are indirect measures of holistic processing. We use the composite task - a well-validated and frequently used paradigm - to measure the effect of viewing distance on holistic processing. In line with previous work, we find a congruency x alignment effect that is stronger for faces that are close (2 m equivalent distance) than for faces that are further away (24 m equivalent distance)...
2015: Visual Cognition

Search Tips

Use Boolean operators: AND/OR

diabetic AND foot
diabetes OR diabetic

Exclude a word using the 'minus' sign

Virchow -triad

Use Parentheses

water AND (cup OR glass)

Add an asterisk (*) at the end of a word to include word stems

Neuro* will search for Neurology, Neuroscientist, Neurological, and so on

Use quotes to search for an exact phrase

"primary prevention of cancer"

Combine operators, stems, exclusions, and phrases:

(heart OR cardiac OR cardio*) AND arrest -"American Heart Association"
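The syntax above can be made concrete with a small evaluator. The sketch below is a hypothetical, simplified matcher, not QxMD's actual search engine: it handles flat queries only (no parentheses), and the names `matches` and `search` are illustrative.

```python
import re

def matches(term, text):
    """Match one term against text, case-insensitively. Quoted terms
    match as exact phrases; a trailing * matches any word stem."""
    text, term = text.lower(), term.lower()
    if term.startswith('"') and term.endswith('"'):
        return term.strip('"') in text
    if term.endswith('*'):
        return re.search(r'\b' + re.escape(term[:-1]) + r'\w*', text) is not None
    return re.search(r'\b' + re.escape(term) + r'\b', text) is not None

def search(query, text):
    """Evaluate a flat query: OR has lowest precedence, adjacent terms
    (or terms joined by AND) are conjoined, and -term excludes a term."""
    for or_clause in re.split(r'\s+OR\s+', query):
        satisfied = True
        # tokenize, keeping quoted phrases (optionally minus-prefixed) intact
        for tok in re.findall(r'-?"[^"]*"|\S+', or_clause):
            if tok == 'AND':
                continue
            if tok.startswith('-'):
                if matches(tok[1:], text):   # excluded term is present
                    satisfied = False
                    break
            elif not matches(tok, text):     # required term is missing
                satisfied = False
                break
        if satisfied:
            return True
    return False
```

For example, `search('Virchow -triad', 'the Virchow triad')` returns `False`, while `search('diabetes OR diabetic', 'diabetic foot')` returns `True`.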