The human brain works in mysterious ways, especially when it comes to processing what we see with our eyes. Sometimes, things just “click” in the brain. Visual puzzles that may have stumped you before suddenly become clear as day. But what’s going on inside our heads to make this revelation possible?
A new study published in Nature Communications has discovered what happens in the brain when a person has a light-bulb moment after seeing something that changes their visual perception. Researchers found that this experience — what they call “one-shot perceptual learning” — occurs in the high-level visual cortex. This part of the brain stores past images, or “priors,” that can be accessed to recognize a visual that was previously hard to comprehend.
“This study yielded a directly testable theory on how priors act up during hallucinations, and we are now investigating the related brain mechanisms in patients with neurological disorders to reveal what goes wrong,” said co-senior study author Biyu He, an associate professor of neurology, neuroscience, and radiology at the NYU Grossman School of Medicine, in a statement.
Making Sense of Mooney Images
Humans are occasionally confronted with tricky visual tests, like pictures that don’t make any sense at first glance. Imagine, for example, being told to identify the contents of a fuzzy black-and-white picture that looks like nothing but a jumble of unidentifiable splotches. Chances are, you’ll be hard-pressed to come up with an answer on what’s being shown.
But then, the same picture is restored to higher quality, revealing a Dalmatian dog. All of a sudden, you’re hit with an “aha moment.” Now that you know what the original image is showing, any time you see the distorted version in the near future, you’ll know without hesitation that it’s a Dalmatian. This is what happens when you’re tested with distorted “Mooney images,” which are commonly used as cognitive tests.
Read More: When the Mind Goes Blank — What Happens When Your Brain Briefly Goes Offline
Image Storage in the Brain
Tests with Mooney images — like the Dalmatian example — show how humans can make sense of confusing visuals given the right context. This ability harks back to our ancestors, who needed to recognize and avoid threats through visual perception.
Scientists previously thought this rapid process was facilitated by the hippocampus, a crucial component of our brain that plays a role in learning and memory. Recent studies, however, have shown that one-shot perceptual learning relies on hippocampus-independent cortical mechanisms, according to the study.
The researchers involved with the new study have instead identified the high-level visual cortex (HLVC) as the site of one-shot perceptual learning.
“Our work revealed, not just where priors are stored, but also the brain computations involved,” said Dr. He in the statement.
For the study, the researchers used fMRI to track changes in brain activity as participants were shown several Mooney images (both the blurred and clear versions).
The researchers also ran behavioral tests in which they altered the size, position, and orientation of the Mooney images to see how each change affected recognition rates: changes in image size didn’t impact recognition, but rotating or repositioning the images partially decreased learning.
According to the researchers, this indicated that “perceptual priors encode previously seen patterns but not more abstract concepts.”
Ultimately, the researchers confirmed with statistical models that the neural coding pathways in the high-level visual cortex matched the properties of the priors examined in the behavioral tests.
AI's Improving Perception
After conducting the various tests, the researchers built an AI model to emulate image processing in the high-level visual cortex. The model stored image information in one module, and a second module drew on that stored data to better recognize incoming images.
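To make the idea concrete, here is a minimal toy sketch (not the authors’ model; all names and the correlation-matching rule are illustrative assumptions): one module stores a “prior” after a single exposure to a clear image, and a second module matches a degraded, Mooney-style version against the stored priors.

```python
import numpy as np

rng = np.random.default_rng(0)

def degrade(img, threshold=0.5):
    # Crude stand-in for a two-tone Mooney transform: binarize the image.
    return (img > threshold).astype(float)

class PriorMemory:
    """Hypothetical two-module sketch: storage plus recognition."""

    def __init__(self):
        self.priors = {}  # label -> stored clear image (the "prior")

    def learn_once(self, label, clear_img):
        # One-shot learning: a single exposure stores the prior.
        self.priors[label] = clear_img

    def recognize(self, degraded_img):
        # Match the input against every stored prior by correlation
        # and return the best-matching label.
        best_label, best_score = None, -np.inf
        for label, prior in self.priors.items():
            score = np.corrcoef(prior.ravel(), degraded_img.ravel())[0, 1]
            if score > best_score:
                best_label, best_score = label, score
        return best_label

# Two "clear" images, stood in for here by random grayscale patterns.
dog = rng.random((16, 16))
cat = rng.random((16, 16))

memory = PriorMemory()
memory.learn_once("dalmatian", dog)
memory.learn_once("cat", cat)

# After one exposure to the clear image, its degraded version is recognized.
print(memory.recognize(degrade(dog)))  # prints "dalmatian"
```

Because the matching here is pixel-by-pixel, this toy is insensitive to overall image size but breaks down if the image is rotated or shifted, loosely echoing the behavioral pattern the researchers describe.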
While previous AI models haven’t been able to fully replicate humans’ one-shot perceptual learning, the researchers say that future AI technology could bridge the gap.
“We now anticipate the development of AI models with humanlike perceptual mechanisms that classify new objects or learn new tasks with few or no training examples. This is more evidence of a growing convergence between computational neuroscience and advances in AI,” said co-senior author Eric Oermann, an assistant professor of neurosurgery at NYU Langone, in the release.
Read More: Alzheimer’s-Related Memory Concerns Linked to Glitches in the Brain’s Replay Process
Article Sources
Our writers at Discovermagazine.com use peer-reviewed studies and high-quality sources for our articles, and our editors review for scientific accuracy and editorial standards. Review the sources used below for this article:
- This article references information from a study published in Nature Communications: Neural and computational mechanisms underlying one-shot perceptual learning in humans
