Visual Cognition Laboratory
Welcome to the Home of Dr. Les Loschky's KSU Visual Cognition Laboratory!
Dr. Loschky is currently accepting applications for graduate students for 2020, after the successful graduation of 3 PhD students in Summer 2018.
Two major themes in the lab across both basic and applied research domains are:
- Scene perception and event comprehension (see new book chapter on the Scene Perception & Event Comprehension Theory (SPECT))
- Scene perception from central to peripheral vision (see special issue Les Loschky has Guest-Edited for the Journal of Vision on this topic)
The Visual Cognition Lab conducts research on scene perception and its real-world applications, spanning the traditional areas of perception and cognition. Our lab's research philosophy is that good basic research should always be capable of suggesting applications for real-world scenarios, and good applied research should always inform theory.
Our basic research studies how people perceive, attend to, understand, and remember scenes and the objects in them. Our research investigates the time course for perceiving and creating a mental representation of a scene. First, how can we view a scene and grasp its category very quickly (within the first tenth of a second), easily distinguishing between an office and a hallway, or a parking lot and a park? Do our expectations of what scene we will see next influence our ability to rapidly recognize that scene? Next, as we observe such a scene, what causes us to look at certain objects and ignore others, and how does that affect our memory? When the scenes we are looking at form a narrative, such as in a picture story or movie, how does our understanding affect what we pay attention to, and how does what we pay attention to affect our understanding? We have researched all of these questions, and have developed the Scene Perception & Event Comprehension Theory (SPECT) as a framework for understanding how people's eye movements and attention influence their understanding of what they see, and how our understanding influences what we look at and remember.
Our applied work focuses on key questions in human factors related to attention, ranging from driving and transportation safety to Physics education. We are interested in how introducing a cognitive load, such as performing two difficult tasks at the same time, affects people's perception while driving, and their comprehension and learning while studying. Our research also investigates how adding a cognitive load can affect our visual field – that is, does focusing on a different task make people less likely to notice something right in front of them? To investigate this, we have used eye tracking to develop a new measure of how much information in a driver's visual field they can pay attention to at any given moment while driving, namely their Useful Field of View (UFOV). We have shown how a driver's breadth of attention dynamically changes while driving depending on their level of cognitive load (e.g., driving while listening and thinking hard about something). We have also shown that a person's UFOV changes based on what they see and understand is going on around them. For example, people's UFOV seems to spread wider when they are not sure what is going to happen next, and narrows once they understand what is going on. This work can be applied to improving transportation safety by creating better dynamic tests of a person's UFOV in driving situations, and can be used to train people to improve their UFOV.
In the area of Physics education, we have been studying how paying attention to the wrong information can lead students to incorrectly solve Physics problems, while guiding their attention to the important information can help them solve problems correctly and learn more. We are now applying machine learning to predict whether learners need such attention guidance based on what they are looking at from moment to moment while solving a Physics problem. We hope to use these ideas to build smarter computerized online instruction that incorporates eye tracking and machine learning to guide learners' attention to the important information when they need it. We are also studying how a person's eye movements can be used to determine whether their learning materials are causing them difficulty in learning (i.e., cognitive load), or whether the materials are promoting learning through the right level of challenge.