Research Interests
I study when, where, and how we allocate our visual attention, and what influences those decisions. My research thus far has investigated how our tasks and goals, memory content, and visual environment shape the movements of overt attention. I primarily use eye-tracking and behavioral methods in my research.
Current Research Topics
Perceptual stability across eye movements. When we move our eyes, the visual system temporarily suppresses incoming visual information (saccadic suppression). However, we rarely perceive this loss of information, because the brain seamlessly integrates the pre-saccade view with the post-saccade view. How does the brain link this information to create a stable percept? My research has shown that visual working memory plays an important role in this process (Cronin & Irwin, 2018), perhaps by helping establish object correspondence across an eye movement (Cronin & Irwin, in prep). I am now working to understand how perceptual stability operates with more complex stimuli (real-world scenes).
Semantic guidance in scenes. When we move our eyes through a scene, what do we choose to look at? Standard models of attention in scenes emphasize low-level visual saliency (brightness, unique colors and textures, etc.) as a key driver of attention. However, the semantic information available to the viewer also plays an important role in how eye movements are deployed through a scene (see Henderson et al., 2019, for a review). Further, this semantic information interacts with the viewer's task and prior experiences to influence where they will look. I am currently involved in several projects in this line of research. With these projects, I am seeking to determine how quickly semantic information in scenes can guide attention both covertly and overtly, how visual working memory is involved in the allocation of attention to meaningful scene regions, and how semantic information interacts with prior experiences to guide attention during visual search.
Overt attention during visual search. How do we move our eyes to search efficiently for a target? Are there situations where it is better not to move our eyes (at least at first) during search? My research on visual search suggests that search occurs over two stages: the first, a parallel processing stage, contributes logarithmically to reaction time as a function of the amount of target-dissimilar information in the visual field; the second, a focused-attention stage, contributes linearly as a function of the amount of target-similar information in the visual field (Buetti, Cronin, Wang, Madison, & Lleras, 2016; Cronin, Buetti, & Lleras, in prep). My research has also investigated how these two stages of processing interact with eye movements (Cronin, Buetti, & Lleras, in prep).
Working memory and overt attention. How are working memory and saccadic eye movements related? Much work, including my own, suggests that working memory is an important mechanism for controlling when and where the eyes move. I am continuing to probe this relationship with several current projects (Cronin, Peacock, & Henderson, in prep).
Collaborations with Industry
From March 2016 to December 2017, I collaborated with Sandia National Labs in conjunction with the University of Illinois' Simulator Lab. The data I collected were used to develop a model of overt attention in data visualizations. One manuscript describing our findings has been accepted at IEEE VisComm; a second is currently under review.
In the summer of 2014, I conducted research for the Sensory Sciences Division of McDonald's Corporation to ensure the best possible methods were being used to maintain excellent consumer experience across changes in the supply chain.