Recognizing the Gist of a Scene
People can recognize the meaning of a scene, or its "gist," during their first eye fixation on that scene; for example, they can recognize that it is a beach, a dining room, or a street. Our own research has shown that viewers can recognize the gist of a scene with over 80% accuracy after as little as 36 milliseconds of uninterrupted processing time. This raises two questions: how are we able to recognize images so rapidly, and what information do we use to recognize them? Answering these questions is important for our understanding of scene perception, because research has shown that recognizing the gist of a scene activates our prior knowledge associated with the scene's category (e.g., that beaches have water, sand, palm trees, and possibly sunbathers). This knowledge strongly influences where we pay attention, may help us recognize objects in the scene, and plays a large role in determining what information we remember from a scene. At its core, research on scene gist recognition explores the interface between perception and cognition, a problem that has proved extremely challenging for researchers in both cognitive psychology and artificial intelligence. Such research can also be applied to designing artificial intelligence systems capable of recognizing the categories of scenes.
We have carried out a number of studies on scene gist recognition over the past several years, and these are described below.
The Roles of Central vs. Peripheral Vision in Scene Gist Recognition
An interesting question is which region of the visual field is most useful for recognizing the gist of a scene: central vision (the fovea and parafovea), given its higher visual acuity and importance for object recognition, or peripheral vision, given its much larger area and the usefulness of low spatial frequencies for scene gist recognition?
We have done a number of studies investigating this issue. In these studies, scenes were presented in two experimental conditions: a "Window" condition, in which a circular region showed the central portion of a scene while peripheral information was hidden, and a "Scotoma" condition, in which the central portion of the scene was hidden and only the peripheral information was available (Larson & Loschky, 2009). The results indicated that the periphery was more useful than central vision for reaching maximum performance, which was roughly equal to performance when viewing the entire image. Nevertheless, on a per-pixel basis, central vision was more efficient for scene gist recognition than the periphery. A critical radius of 7.4º was found at which the Window and Scotoma performance curves crossed, producing equal performance. This value was compared with critical radii predicted from cortical magnification functions, on the assumption that equal V1 activation would produce equal performance. However, these predictions were systematically smaller than the empirical critical radius, suggesting that central vision contributes less to gist recognition than its V1 cortical magnification alone would predict.
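The cortical-magnification prediction in this comparison can be sketched numerically. The snippet below is an illustrative calculation, not the exact model from the paper: it assumes Horton and Hoyt's (1991) V1 magnification function, M(E) = 17.3 / (E + 0.75) mm/deg, treats the image as a disk of 13.5º radius (an assumed display extent), and finds the eccentricity at which a central Window and a peripheral Scotoma would drive equal amounts of V1 cortex.

```python
import numpy as np

# Illustrative sketch (assumed parameters, not the study's exact model):
# find the eccentricity that splits V1 activation into equal central and
# peripheral halves, given a standard cortical magnification function.

M0, E2 = 17.3, 0.75      # Horton & Hoyt (1991): M(E) = M0 / (E + E2), mm/deg
image_radius = 13.5      # deg; assumed circular display extent

# V1 area contributed by a thin ring at eccentricity E: the ring's area in
# the visual field (2*pi*E dE) times the areal magnification M(E)^2
E = np.linspace(0.0, image_radius, 100_000)
density = 2 * np.pi * E * (M0 / (E + E2)) ** 2

# Cumulative V1 area from the fovea outward (trapezoidal integration)
steps = (density[1:] + density[:-1]) / 2 * np.diff(E)
cum = np.concatenate([[0.0], np.cumsum(steps)])

# Predicted critical radius: where inner V1 area equals outer V1 area
critical_radius = np.interp(cum[-1] / 2, cum, E)
print(f"predicted critical radius: {critical_radius:.1f} deg")  # ~4 deg
```

Under these assumptions the prediction comes out near 4º, well below the empirical 7.4º, which illustrates the pattern described above: the empirical critical radius is larger than cortical magnification alone predicts.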
In addition to asking how scene gist recognition varies over space (central versus peripheral vision), other studies in our lab have investigated how it varies over time. Scene gist is recognized within a single fixation, but we have asked whether gist recognition varies over time within that one fixation. A related issue is whether attentional focus affects scene gist recognition (Evans & Treisman, 2005; Li et al., 2001). Our previous research showed that central and peripheral information can produce equal scene gist recognition, provided there is roughly twice as much area in the periphery. However, those studies did not vary processing time (through masking) or manipulate attention. Therefore, we presented "Window" and "Scotoma" conditions using a critical radius at which both image types produced equal gist accuracy when unmasked (i.e., with unlimited processing time). We briefly presented images for 24 ms each and varied processing time via the target-to-mask stimulus onset asynchrony (SOA). At very short SOAs, central information produced better gist recognition than peripheral information, whereas with unlimited processing time in a single fixation (i.e., no mask), performance was equal for central and peripheral information. Other research from our lab supports this idea, finding that central vision is better at processing scene category early (during the first 100 ms of viewing a scene), while peripheral vision becomes increasingly useful after that time (Larson, Freeman, Ringer, & Loschky, 2013). This indicates that the spatiotemporal dynamics of attention play an important role in gist recognition, setting spatiotemporal limits on how quickly real-world scenes can be comprehended.
These results are consistent with a zoom-out hypothesis of covert attention, in which attention is first focused at the center of vision and then rapidly spreads outward, and this dynamic shapes scene gist recognition.
What Categorical Level of Scene Gist is Perceived First?
What level of categorization occurs first in scene gist processing: the basic level (e.g., a beach versus a city) or the superordinate level (a "natural" scene versus a "man-made" scene)? The Spatial Envelope model of scene classification and human gist recognition (Oliva & Torralba, 2001) assumes that the superordinate distinction is made prior to basic-level distinctions. This assumption contradicts the claim that categorization occurs at the basic level before the superordinate level (Rosch et al., 1976). We tested this assumption of the Spatial Envelope model by having viewers categorize briefly flashed, masked scenes after varying amounts of processing time. The results showed that early stages of processing (SOA < 72 ms) produced greater sensitivity to the superordinate distinction than to basic-level distinctions, and that basic-level distinctions crossing the superordinate natural/man-made boundary were treated as superordinate distinctions (Loschky & Larson, 2010). Both results support the assumption of the Spatial Envelope model and challenge the idea of basic-level primacy.
What Information is Used to Recognize the Gist of a Scene?
What information do people use to rapidly categorize a scene as a "beach," a "street," a "mountain," and so on? Some prominent computational theories of scene gist recognition have proposed the counter-intuitive and provocative hypothesis that the unlocalized amplitude spectrum of an image (its spatial frequencies and orientations, regardless of their locations in the image) provides most of the important information for categorizing a scene. In simple terms, this suggests that for recognizing a beach scene, it is more important to know that there is a strong horizontal and a strong diagonal than to know that the horizontal (the horizon) is above the diagonal (the water line). However, our studies with human subjects suggest that while the spatial frequencies and orientations of an image certainly play some role in recognizing it, they are not sufficient by themselves to categorize a scene; localized information is necessary for that (Loschky et al., 2007; Loschky & Larson, 2008). The importance of localization suggests, in turn, that the layout of a scene (its global configuration) is probably very important in recognizing its gist.
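To make the localized-versus-unlocalized distinction concrete, here is a small numpy sketch (a toy illustration, not the stimuli used in these studies): phase-scrambling an image preserves its amplitude spectrum (the spatial frequencies and orientations) while destroying where that structure is located.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "scene": a horizontal band (a horizon) above a diagonal band (a water line)
img = np.zeros((128, 128))
img[40:44, :] = 1.0                   # horizontal structure
rows = np.arange(60, 124)             # diagonal structure below it
img[rows, (rows - 60) * 2 % 128] = 1.0

# Decompose into amplitude and phase with the 2-D Fourier transform
amplitude = np.abs(np.fft.fft2(img))

# Borrow the phase spectrum of random noise (noise is a real image, so its
# phase has the conjugate symmetry needed to keep the result real-valued)
noise_phase = np.angle(np.fft.fft2(rng.standard_normal(img.shape)))
scrambled = np.real(np.fft.ifft2(amplitude * np.exp(1j * noise_phase)))

# Same unlocalized frequency/orientation content, completely different layout
same_amplitude = np.allclose(np.abs(np.fft.fft2(scrambled)), amplitude, atol=1e-6)
different_layout = not np.allclose(scrambled, img, atol=0.1)
print(same_amplitude, different_layout)  # True True
```

If the unlocalized-amplitude hypothesis were correct, the original and the scrambled image would be equally categorizable; the human results above indicate they are not, which is why localized (phase) information matters.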
Mask examples: White Noise Mask, RISE Mask, Recognizable Mask.
A related topic that we have investigated is the masking of scene gist. Visual masking occurs when one stimulus interferes with the processing of another stimulus. Masking is an important tool for studying the time course of visual processing, and it has a history of over 100 years in psychology. However, very little is known about the masking of complex stimuli such as scene images, or of relatively high-level perceptual tasks such as scene gist recognition. We have compared the effects of low-level spatial masking (i.e., masking by spatial frequencies and orientations) with the effects of higher-level "conceptual masking" (i.e., masking by meaning) (Loschky et al., 2010). Previous research has shown that recognition memory for a scene is more strongly masked by a recognizable scene (i.e., a scene masking another scene) than by meaningless noise, and this has been used to argue for the existence of conceptual masking. A key hypothesis we have tested is that such conceptual masking effects are actually due to pairs of scenes being more visually similar to each other than a scene is to random noise. Our results do not rule out the existence of conceptual masking of scene gist, because pure visual similarity, in terms of spatial frequencies and orientations, cannot explain all of the masking produced by a recognizable scene mask. However, our results also show that a good proportion of what has been called conceptual masking (namely, the greater masking produced by a recognizable scene than by white noise) can be produced by an unrecognizable noise image that shares many statistical properties with a scene. Other research has indicated that the effects of masking on rapid scene categorization vary depending on the Fourier spectral properties of the masks (Hansen & Loschky, 2013).
Such research holds the potential to expand our understanding of both scene gist processing and the masking of complex stimuli.
Related Publications
Larson, A. M., Freeman, T. E., Ringer, R. V., & Loschky, L. C. (2013). The spatiotemporal dynamics of scene gist recognition. Journal of Experimental Psychology: Human Perception and Performance. Advance online publication. doi: 10.1037/a0034986
Hansen, B. C., & Loschky, L. C. (2013). The contribution of amplitude and phase spectra defined scene statistics to the masking of rapid scene categorization. Journal of Vision, 13(13), 1–21. doi: 10.1167/13.13.21
Larson, A. M., & Loschky, L. C. (2009). The contributions of central versus peripheral vision to scene gist recognition. Journal of Vision, 9(10):6, 1-16, http://journalofvision.org/9/10/6/, doi: 10.1167/9.10.6
Loschky, L.C., & Larson, A. M. (2008). Localized information is necessary for scene categorization, including the Natural/Man-made distinction. Journal of Vision, 8(1):4, 1-9, http://journalofvision.org/8/1/4/, doi:10.1167/8.1.4.
Loschky, L. C., Sethi, A., Simons, D. J., Pydimarri, T. N., Ochs, D., & Corbeille, J. (2007). The importance of information localization in scene gist recognition. Journal of Experimental Psychology: Human Perception and Performance, 33(6), 1431-1450.
Related Conference Presentations
Larson, A. M., Hendry, J., & Loschky, L. C. (2012, May). Scene Gist Meets Event Perception: The Time Course of Scene Gist and Event Recognition. Poster presented at the 12th annual meeting of the Vision Sciences Society, Naples, FL.
Ramkumar, P., Pannasch, S., Hansen, B. C., Larson, A.M. & Loschky, L.C. (2011, December). How does the brain represent visual scenes? A neuromagnetic scene categorization study. Poster presented at the Neural Information Processing Systems–Workshop on Machine Learning and Interpretation in Neuroimaging, Sierra Nevada, Spain.
Kirkpatrick, K., Ghormley, D., Guevara, M., Garcia, A., Sears, T., Hansen, B.C., & Loschky, L.C. (2010, May). Scene gist categorization in pigeons. Talk presented at the Annual Meeting of the Society for Quantitative Analysis of Behavior, San Antonio, TX.
Larson, A.M., Loschky, L.C., Ringer, R., & Kridner, C. (2010, May). Attention modulates gist performance between central and peripheral vision. Poster presented at the 10th Annual Meeting of the Vision Sciences Society, Naples, FL.
Larson, A.M., Loschky, L.C., Pollack, W., Bjerg, A., Hilburn, S., & Smercheck, S. (2009, May). Variation in scene gist recognition over time in central versus peripheral vision. Poster presented at the 9th Annual Meeting of the Vision Sciences Society, Naples, FL.
Loschky, L.C., Hansen, B.C., Fintzi, A., Bjerg, A., Ellis, K., Freeman, T., Hilburn, S., & Larson, A. (2009, May). Basic level scene categorization is affected by unrecognizable category-specific image features. Poster presented at the 9th Annual Meeting of the Vision Sciences Society, Naples, FL.
Larson, A.M., Loschky, L.C., Matz, E., Smerchek, S., Weber, P., & Berger, L. (2008, May). The Roles of Central versus Peripheral Visual Information in Recognizing Scene Gist. Poster presented at the 8th Annual Meeting of the Vision Sciences Society, Naples, FL.
Loschky, L.C., Larson, A.M., Smerchek, S., & Finan, S. (2008, May). The Superordinate Natural/Man-made Distinction is Perceived Before Basic Level Distinctions in Scene Gist Recognition. Poster presented at the 8th Annual Meeting of the Vision Sciences Society, Naples, FL.
Loschky, L.C., Simons, D.J., Smerchek, S., Matz, E., Bilyeu, B., & Artman, L. (2007, May). Is Unlocalized Amplitude Information of Any Use for Scene Gist Recognition? Poster presented at the 7th Annual Meeting of the Vision Sciences Society, Sarasota, FL.
Loschky, L.C., Sethi, A., Simons, D.J., Pydimarri, T.N., Forristal, N., Corbeille, J. & Gibb, K. (2006, May). The Roles Of Amplitude And Phase Information In Scene Gist Recognition And Masking. Talk presented at the 6th Annual Meeting of the Vision Sciences Society, Sarasota, FL.
Loschky, L.C., Sethi, A., Simons, D.J., Ochs, D., Corbeille, J. & Gibb, K. (2005, November). Using Visual Masking To Explore The Nature Of Scene Gist. Poster presented at the 46th Annual Meeting of the Psychonomic Society, Toronto, Canada.