
Visual Cognition Laboratory

Welcome to the Home of Dr. Les Loschky's KSU Visual Cognition Laboratory!

Dr. Loschky is currently accepting applications for graduate students for 2019, after the successful graduation of 3 PhD students in Summer 2018. 

The Visual Cognition Lab conducts research on scene perception and its real-world applications, spanning the traditional areas of perception and cognition. Our lab's research philosophy is that good basic research should always be capable of suggesting applications for real-world scenarios, and good applied research should always add information to a theory. Two major themes run through the lab's work, one in basic research and one in applied research.

Our basic research studies how people perceive, attend to, understand, and remember scenes and the objects in them. It investigates the time course of perceiving and creating a mental representation of a scene. First, how can we view a scene and grasp its category very quickly (within the first tenth of a second), easily distinguishing an office from a hallway or a parking lot from a park? Do our expectations of what scene we will see next influence our ability to rapidly recognize that scene? Next, as we observe such a scene, what causes us to look at certain objects and ignore others, and how does that affect our memory? When the scenes we are looking at form a narrative, such as in a picture story or movie, how does our understanding affect what we pay attention to, and how does what we pay attention to affect our understanding? We have researched all of these questions and have developed the Scene Perception & Event Comprehension Theory (SPECT) as a framework for understanding how people's eye movements and attention influence their understanding of what they see, and how their understanding influences what they look at and remember.

Our applied work focuses on key questions in human factors related to attention, ranging from driving and transportation safety to Physics education. We are interested in how introducing a cognitive load, such as performing two difficult tasks at the same time, affects people's perception while driving and their comprehension and learning while studying. Our research also investigates how adding a cognitive load can affect our visual field – that is, does focusing on a different task make people less likely to notice something right in front of them? Using eye tracking, we have developed a new measure of how much information in a driver's visual field they can pay attention to at any given moment, namely their Useful Field of View (UFOV). We have shown that a driver's breadth of attention changes dynamically depending on their level of cognitive load (e.g., driving while listening to and thinking hard about something). We have also shown that a person's UFOV changes based on what they see and understand to be going on around them. For example, people's UFOV seems to spread wider when they are not sure what is going to happen next, and narrows once they understand what is going on. This work can be applied to improving transportation safety by creating better dynamic tests of a person's UFOV in driving situations, and can be used to train people to improve their UFOV.

In the area of Physics education, we have been studying how paying attention to the wrong information can lead students to solve Physics problems incorrectly, while guiding their attention to the important information can help them solve problems correctly and learn more. We are now applying machine learning to predict whether learners need such attention guidance, based on what they are looking at from moment to moment while solving a physics problem. We hope to use these ideas to build smarter computerized online instruction that incorporates eye tracking and machine learning to guide learners' attention to the important information when they need it. We are also studying how a person's eye movements can be used to determine whether their learning materials are causing difficulty in learning (i.e., cognitive load) or are promoting learning through the right level of challenge.
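As a rough illustration of how such a prediction might be set up, the sketch below trains a simple classifier on per-trial gaze summaries (e.g., proportion of dwell time on task-relevant regions, mean fixation duration) to predict whether a learner would benefit from attention guidance. The feature names, synthetic data, and logistic-regression model are illustrative assumptions for this sketch, not the lab's actual pipeline or data.

```python
# Hypothetical sketch: predicting whether a learner needs attention guidance
# from summary eye-tracking features. Features, data, and model choice are
# illustrative assumptions, not the lab's actual methods.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_trials = 200

# Synthetic per-trial gaze features:
#   dwell_relevant - proportion of viewing time on task-relevant regions
#   mean_fix_dur   - mean fixation duration in seconds
#   n_transitions  - gaze transitions between diagram and equation regions
dwell_relevant = rng.uniform(0.0, 1.0, n_trials)
mean_fix_dur = rng.uniform(0.15, 0.45, n_trials)
n_transitions = rng.integers(0, 30, n_trials)
X = np.column_stack([dwell_relevant, mean_fix_dur, n_transitions])

# Toy label: trials with little dwell time on relevant regions are treated,
# for this example only, as needing guidance.
needs_guidance = (dwell_relevant < 0.4).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, needs_guidance, test_size=0.25, random_state=0
)

model = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

In a real tutoring system the prediction would presumably run in real time and trigger visual cues; here everything is offline and synthetic simply to show the shape of the pipeline.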

Current and previous funding sources for our lab include the Office of Naval Research (ONR) and the National Science Foundation (NSF).

Lab News

3 Vis Cog Lab PhD Students Successfully Defended their Dissertations in the Summer of 2018!!

Congratulations to all 3!!!  And many thanks to all the undergrad RAs who helped them!!!

Happily, all 3 are now gainfully employed!!

The 3 Musketeers! (L-R) John Hutson, Jared Peterson, Ryan Ringer

Dr. Loschky and collaborators have been awarded a $1.2 million grant from the National Science Foundation (NSF) to conduct research on using visual cues to facilitate problem solving for math and physics problems.


 


Adam Larson (center) graduated with his Ph.D. in May 2012. He worked as a Post-Doctoral Research Associate in the Visual Cognition Lab from 2012 to 2013, and is now an Assistant Professor of Psychology at the University of Findlay.


 


Undergraduate research assistant Allison Coy (far right) was a recipient of the Doreen Shanteau Undergraduate Research Award in October 2012. For the past year she has been conducting research on the phenomenon of visual crowding in peripheral vision.


 

Tyler Freeman graduated with his Ph.D. in May 2012. He worked at the Army Research Institute in Fort Leavenworth, Kansas, and is now a Research Associate at ICF International.