
Department of Psychological Sciences

Lester Loschky

Contact Information

Office: BH 471

Phone: 785-532-6882

E-mail: loschky@ksu.edu

Curriculum Vitae (CV)

ResearchGate Profile

Google Scholar Profile

Visual Cognition Laboratory

Associate Director, Center for Cognitive and Neurobiological Approaches to Plasticity (CNAP)

Funding Sources

Office of Naval Research (ONR)

National Science Foundation (NSF): Grant website

Major Research Themes

  • Scene perception and event comprehension
  • Scene perception from central to peripheral vision (see the special issue on this topic that I guest-edited for the Journal of Vision)

Research Interests

My work concerns scene perception, from both perceptual and cognitive viewpoints, and its real-world applications. As we look around, our eyes move about three times per second, presenting us with an ever-changing collage of real-world scenes. In those scenes we see innumerable objects, yet much research shows that we will likely attend to and remember only a few of them in any given scene. My research investigates how we perceive, attend to, and remember scenes and the people, animals, and objects in them. The scope of this research can best be understood in terms of the time course of perception and mental representation. First, how can we view a scene and grasp its category within the first tenth of a second, easily distinguishing an office from a parking lot or a street? Next, as we continue to observe such a scene, what causes us to look at certain people and objects and ignore others? Then, what effect does attending to a particular person or object in a scene have on later memory for that person or object versus others in the scene? Finally, how do we understand people's actions, and infer related events, in both static scenes and film clips? Together with my colleagues, I have been developing a theory that puts all of these processes together, called the Scene Perception & Event Comprehension Theory (SPECT). This theory connects most of the work I have done throughout my career and guides my current research.

My research philosophy is that good basic research should always be capable of suggesting applications, and good applied research should always inform theories of perception and cognition. From an applied perspective, answering these questions matters for a wide range of areas, including designing better human-computer interfaces and artificial vision systems. My applied work in human-computer interaction has investigated gaze-contingent displays: computer displays, such as virtual reality or simulators, that use an eye tracker to identify where the viewer is looking and then change the image relative to that location. For example, in research funded by the Office of Naval Research (ONR), we are developing a new dynamic measure of how much information a person can process from their visual field (the useful field of view, or UFOV) on a moment-by-moment basis. This research uses gaze-contingent displays to place perceptual targets at varying distances from a viewer's center of vision as they look around a dynamic scene (e.g., in simulated driving). Important applications of this work include evaluating the breadth of a person's UFOV under varying levels of stress or cognitive load (e.g., driving under relaxed circumstances vs. during a battle) and training (e.g., to increase the breadth of people's UFOV under stress). At the same time, this applied work on a dynamic measure of the UFOV will let us ask and answer important theoretical questions about the nature of visual attention and its relationship to cognitive, emotional, and physical demands. Other applied research in our lab investigates the use of visual cueing in learning physics and mathematics, which can inform theories of the role of visual attention in problem solving and learning.
Likewise, my basic research on scene perception frequently involves collaborations with researchers in electrical and computer engineering with the applied goals of informing artificial vision systems and improving automated image searching on the internet.
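The core logic of a gaze-contingent display described above is simple: given the viewer's current gaze position from the eye tracker, place a target at a fixed retinal eccentricity from that point. A minimal sketch of that geometric step (the function and parameter names here are illustrative, not taken from our actual experiment software):

```python
import math

def place_target(gaze_x, gaze_y, eccentricity_deg, angle_deg, pixels_per_deg):
    """Compute screen coordinates for a perceptual target at a fixed
    retinal eccentricity (in degrees of visual angle) from the viewer's
    current gaze position, in a given direction."""
    r = eccentricity_deg * pixels_per_deg   # convert eccentricity to pixels
    theta = math.radians(angle_deg)         # direction relative to gaze point
    return (gaze_x + r * math.cos(theta),
            gaze_y + r * math.sin(theta))

# Example: a target 5 degrees to the right of fixation, at 40 pixels/degree
x, y = place_target(960, 540, eccentricity_deg=5, angle_deg=0, pixels_per_deg=40)
# x == 1160.0, y == 540.0
```

In a real experiment this computation would run on every new gaze sample, so the target tracks the eyes and its retinal eccentricity stays constant even as the viewer looks around the scene.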


Student Involvement

I am currently working with a number of graduate and undergraduate students on several of the above research topics in visual cognition. My philosophy for working with students is to provide them with guidance in carrying out research while challenging them to contribute their own ideas and viewpoints. Undergraduate students who are interested in carrying out research on such topics can apply to be a PSYCH 599 research assistant in my lab. As a research assistant, students will have a chance to experience the entire cycle of research: reading articles on a topic we are investigating, generating research questions and hypotheses, designing, preparing, and carrying out experiments, analyzing the data, writing up the results, and presenting them at a conference or even submitting them for publication in a scientific journal. The activities that an individual research assistant participates in depend on their level of motivation and commitment. Such experience is very valuable for understanding how research in graduate school is conducted, and can greatly strengthen a graduate school application.

Graduate students who are interested in working with me can either work on one of my on-going research projects or propose their own topic of research, depending on their level of experience and motivation. Graduate students will also gain valuable experience in supervising undergraduate research assistants in the lab. Support for graduate students comes from grant money when available, or departmental graduate teaching assistantships. Students who contribute significantly to our research will have ample opportunity to co-author publications resulting from it.

I am currently accepting applications for graduate students for 2019, after the successful graduation of 3 PhD students in Summer 2018.

Students interested in working with me can contact me by phone (785-532-6882) or e-mail (loschky@ksu.edu).

Recent Graduate Students

  • Jared J. Peterson, Ph.D. (2018), M.S. (2016), Kansas State University; B.S., University of Wisconsin–La Crosse. ResearchGate profile. Currently a Researcher at the Army Research Institute (ARI), Leavenworth, KS. Jared did his M.S. thesis and dissertation on the interaction between visual resolution and task-relevance in guiding visual selective attention. He is currently revising a manuscript based on his M.S. thesis for resubmission to a top-ranking journal, and preparing another manuscript from his Ph.D. dissertation for submission to a journal.

Representative Publications (* indicates student co-author)

Loschky, L. C., *Hutson, J. P., *Smith, M. E., Smith, T. J., & Magliano, J. P. (2018). Viewing static visual narratives through the lens of the Scene Perception and Event Comprehension Theory (SPECT). In J. Laubrock, J. Wildfeuer, & A. Dunst (Eds.), The Empirical Study of Comics. Routledge.

*Hutson, J. P., Magliano, J. P., Smith, T. J., & Loschky, L. C. (2017). What is the role of the film viewer? The effects of narrative comprehension and viewing task on gaze control in film. Cognitive Research: Principles and Implications, 2(1), 46, 1-30. doi: 10.1186/s41235-017-0080-5

*Ringer, R.V., *Throneburg, Z., Johnson, A.P., Kramer, A.F., & Loschky, L.C. (2016). Impairing the Useful Field of View in natural scenes: Tunnel vision versus general interference. Journal of Vision, 16(2):7, 1-25. doi: 10.1167/16.2.7.

Loschky, L.C., *Larson, A.M., Magliano, J.P., & Smith, T.J. (2015). What would Jaws do? The tyranny of film and the relationship between gaze and higher-level narrative film comprehension. PLoS ONE, 10(11): e0142474. doi: 10.1371/journal.pone.0142474

*Rouinfar, A., *Agra, E., *Larson, A. M., Rebello, N. S., & Loschky, L. C. (2014). Linking attentional processes and conceptual problem solving: Visual cues facilitate the automaticity of extracting relevant information from diagrams. [Original Research]. Frontiers in Psychology, 5. doi: 10.3389/fpsyg.2014.01094

Loschky, L.C., & *Larson, A.M. (2010). The natural/man-made distinction is made prior to basic-level distinctions in scene gist processing. Visual Cognition, 18(4), 513-536.

Loschky, L.C., Hansen, B.C., Sethi, A. & *Pydimarri, T. (2010). The role of higher-order image statistics in masking scene gist recognition. Attention, Perception & Psychophysics, 72(2), 427-444.

*Larson, A.M. & Loschky, L.C. (2009). The contributions of central versus peripheral vision to scene gist recognition. Journal of Vision, 9(10):6, 1-16, http://journalofvision.org/9/10/6/, doi:10.1167/9.10.6.

Loschky, L.C., McConkie, G.W., Yang, J., & Miller, M.E. (2005). The limits of visual resolution in natural scene viewing. Visual Cognition, 12(6), 1057-1092.

Zelinsky, G.J. & Loschky, L.C. (2005). Eye movements serialize memory for objects in scenes. Perception & Psychophysics, 67(4), 676-690.