Content-Based Image Retrieval
Suppose you want to find a picture of a particular scene… for example, a beach. If you do an internet search using the word “beach,” only images that someone has labeled with the word “beach” will come up. Yet innumerable web-accessible digital images are created every day, including personal snapshots and images from webcams, photojournalists, television stations, satellites, etc. Most of these images will never be individually labeled by people, or their labels may not reflect the scene’s category (e.g., a picture labeled “Sue and Bill” may show them at a beach). However, our ability to find images on the internet can be greatly aided by content-based image retrieval; that is, computer programs that retrieve images that have been classified (labeled) based on their image characteristics (color, shapes, spatial frequencies, etc.). Creating image classification algorithms that categorize scenes the way humans do is a very difficult problem in computer science, so our research on how people use specific information to recognize the gist of a scene can be very useful in creating such algorithms.
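To make the idea concrete, here is a minimal sketch of content-based retrieval using one of the image characteristics mentioned above, color. The image names, the synthetic “scenes,” and the histogram-intersection similarity measure are illustrative assumptions, not part of any particular system described here.

```python
# Minimal sketch: rank database images by color-histogram similarity to a query.
# All data here is synthetic; a real system would use many more features.
import numpy as np

def color_histogram(image, bins=8):
    """Concatenate per-channel histograms, normalized to sum to 1."""
    hists = [np.histogram(image[..., c], bins=bins, range=(0, 256))[0]
             for c in range(image.shape[-1])]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()

def retrieve(query, database, k=1):
    """Return the k database keys whose color histograms best match the
    query, scored by histogram intersection (higher = more similar)."""
    q = color_histogram(query)
    scores = {name: np.minimum(q, color_histogram(img)).sum()
              for name, img in database.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

rng = np.random.default_rng(0)
# Two synthetic "scenes": a blue-heavy beach and a green-heavy forest.
beach = np.clip(rng.normal([180, 200, 250], 20, (32, 32, 3)), 0, 255)
forest = np.clip(rng.normal([40, 160, 60], 20, (32, 32, 3)), 0, 255)
db = {"beach": beach, "forest": forest}

# A query with beach-like colors should rank "beach" first.
query = np.clip(rng.normal([170, 195, 245], 20, (32, 32, 3)), 0, 255)
print(retrieve(query, db))
```

A color histogram ignores where colors appear in the image, which is why real retrieval systems combine it with the other cues mentioned above (shapes, spatial frequencies) and with learned scene categories.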
We are currently collaborating on this topic with colleagues at the University of Illinois, Amit Sethi and Thomas Huang from the Department of Electrical and Computer Engineering, and Dan Simons from the Department of Psychology.