Lester Loschky

Contact Information

Office: BH 471

Phone: 785-532-6882

E-mail: loschky@ksu.edu

Curriculum Vitae

Google Scholar Profile

Visual Cognition Laboratory

Current Funding Sources

Office of Naval Research (ONR)

National Science Foundation (NSF): Grant website

Research Interests

My work concerns scene perception, from both a perceptual and a cognitive viewpoint, and its real-world applications. As we look around, our eyes move about three times per second, presenting us with an ever-changing collage of real-world scenes. In any such scene we see innumerable objects, yet much research shows that we will likely attend to and remember only a few of them. My research investigates how we perceive, attend to, and remember scenes and the people, animals, and objects in them. The scope of this research can best be understood in terms of the time course of perception and mental representation. First, how can we view a scene and grasp its category within the first tenth of a second, easily distinguishing an office from a parking lot from a street? Next, as we continue to observe such a scene, what causes us to look at certain people and objects and ignore others? Then, what effect does attending to a particular person or object in a scene have on later memory for that person or object versus others in the scene? Finally, how do we understand people’s actions, and related events, in both static scenes and film clips?

My research philosophy is that good basic research should always be capable of suggesting applications, and good applied research should always inform theories of perception and cognition. From an applied perspective, answering these questions matters for a wide range of application areas, including designing better human-computer interfaces and artificial vision systems. My applied work in human-computer interaction has investigated issues related to gaze-contingent displays. These are computer displays, such as virtual reality systems or simulators, that use an eye tracker to identify where the viewer is looking and then change the image relative to that location. For example, in research funded by the Office of Naval Research (ONR), we are developing a new dynamic measure of how much information a person can process from their visual field (the useful field of view, or UFOV) on a moment-by-moment basis. This research uses gaze-contingent displays to place perceptual targets at varying distances from a viewer’s center of vision as they look around a dynamic scene (e.g., in simulated driving). Important applications of this work include evaluating the breadth of a person’s UFOV under varying levels of stress or cognitive load (e.g., driving under relaxed circumstances vs. during a battle) and training (e.g., to increase the breadth of people’s UFOV under stress). At the same time, this applied work to develop a dynamic measure of the UFOV will allow us to ask and answer important theoretical questions about the nature of visual attention and its relationship to cognitive, emotional, and physical demands. Other applied research in our lab is investigating the use of visual cueing in learning physics and mathematics, which can inform theories of the role of visual attention in problem solving and learning.
Likewise, my basic research on scene perception frequently involves collaborations with researchers in electrical and computer engineering with the applied goals of informing artificial vision systems and improving automated image searching on the internet.
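The core geometry behind placing gaze-contingent targets at a fixed retinal eccentricity can be sketched as follows. This is a minimal illustration, not the lab's actual software: the function names are invented, and the viewing distance and pixel density are assumed example values that would be calibrated for a real display.

```python
import math

def deg_to_px(ecc_deg, viewing_dist_cm, px_per_cm):
    """Convert retinal eccentricity (degrees of visual angle) to on-screen
    distance (pixels), given the viewer's distance from the display and the
    display's pixel density. Assumes a flat screen viewed head-on."""
    return math.tan(math.radians(ecc_deg)) * viewing_dist_cm * px_per_cm

def place_target(gaze_xy, ecc_deg, angle_deg, viewing_dist_cm=60.0, px_per_cm=38.0):
    """Return the screen (x, y) at which to draw a perceptual target so that it
    lands at a given eccentricity and direction relative to the current gaze
    position reported by the eye tracker. Default parameters are illustrative."""
    r = deg_to_px(ecc_deg, viewing_dist_cm, px_per_cm)
    a = math.radians(angle_deg)
    return (gaze_xy[0] + r * math.cos(a), gaze_xy[1] + r * math.sin(a))
```

In an experiment loop, a new gaze sample would be read from the eye tracker on each display frame and passed to a function like `place_target`, so the target tracks the viewer's moving point of regard while holding its retinal eccentricity constant.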


Student Involvement

I am currently working with a number of graduate and undergraduate students on several of the above research topics in visual cognition. My philosophy for working with students is to provide them with guidance in carrying out research while challenging them to contribute their own ideas and viewpoints. Undergraduate students who are interested in carrying out research on such topics can apply to be a PSYCH 599 research assistant in my lab. As a research assistant, students will have a chance to experience the entire cycle of research, from reading articles on a topic we are investigating, to generating research questions and hypotheses, to designing, preparing, and carrying out experiments, to analyzing the data, writing up the results, and presenting them at a conference, or even submitting them for publication in a scientific journal. The activities that an individual research assistant participates in depend on their level of motivation and commitment. Such experience is very valuable for understanding how research in graduate school is conducted, and can greatly strengthen a graduate school application.

Graduate students who are interested in working with me can either work on one of my on-going research projects or propose their own topic of research, depending on their level of experience and motivation. Graduate students will also gain valuable experience in supervising undergraduate research assistants in the lab. Support for graduate students comes from grant money when available, or departmental graduate teaching assistantships. Students who contribute significantly to our research will have ample opportunity to co-author publications resulting from it.

Students interested in working with me can contact me by phone (785-532-6882) or e-mail (loschky@ksu.edu).


Recent Graduate Students

Representative Publications (* indicates student co-author)

*Rouinfar, A., *Agra, E., Larson, A. M., Rebello, N. S., & Loschky, L. C. (2014). Linking attentional processes and conceptual problem solving: Visual cues facilitate the automaticity of extracting relevant information from diagrams. Frontiers in Psychology, 5. doi: 10.3389/fpsyg.2014.01094

Loschky, L.C., & *Larson, A.M. (2010). The natural/man-made distinction is made prior to basic-level distinctions in scene gist processing. Visual Cognition, 18(4), 513-536.

Loschky, L.C., Hansen, B.C., Sethi, A. & *Pydimarri, T. (2010). The role of higher-order image statistics in masking scene gist recognition. Attention, Perception & Psychophysics, 72(2), 427-444.

*Larson, A.M. & Loschky, L.C. (2009). The contributions of central versus peripheral vision to scene gist recognition. Journal of Vision, 9(10):6, 1-16, http://journalofvision.org/9/10/6/, doi:10.1167/9.10.6.

Loschky, L.C. & Wolverton, G.S. (2007). How late can you update gaze-contingent multiresolutional displays without detection? ACM Transactions on Multimedia Computing, Communications and Applications, 3(4): 25, 1-10.

Loschky, L.C., McConkie, G.W., Yang, J., & Miller, M.E. (2005). The limits of visual resolution in natural scene viewing. Visual Cognition, 12(6), 1057-1092.

Zelinsky, G.J. & Loschky, L.C. (2005). Eye movements serialize memory for objects in scenes. Perception and Psychophysics, 67(4), 676-690.