Rotating visual anagrams offer new insights into brain function

Researchers at Johns Hopkins University have developed a novel set of images called “visual anagrams” that reveal different objects when rotated, providing a powerful new tool to investigate the complexities of human perception. These mind-bending pictures, created with the help of artificial intelligence, are designed to test how our brains process high-level concepts such as an object’s size, animacy, and even emotional connotation, independent of the image’s basic physical properties.

Unlike classic optical illusions such as the duck-rabbit drawing, where the two interpretations have different outlines and features, these visual anagrams are constructed from the exact same set of pixels. For instance, an image may appear as a bear in its upright orientation but transform into a butterfly when turned 90 degrees. This method allows scientists to isolate specific perceptual effects, ensuring that the brain’s response is due to the high-level interpretation of the image rather than any change in its fundamental visual data, a challenge that has long complicated the study of vision.
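The pixel-identity claim is easy to verify directly: rotating an image rearranges its pixels but never adds, removes, or alters any of them. A minimal sketch with a toy "image" array (the array values here are illustrative, not from the study's stimuli):

```python
import numpy as np

# Toy stand-in for a visual anagram: a small grayscale "image".
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(4, 4))

rotated = np.rot90(image)  # 90-degree counterclockwise rotation

# The two orientations contain exactly the same multiset of pixel values...
same_pixels = np.array_equal(np.sort(image, axis=None),
                             np.sort(rotated, axis=None))

# ...and the rotation is perfectly invertible: no visual data is lost.
restored = np.rot90(rotated, k=-1)
lossless = np.array_equal(image, restored)
```

Because the sensory input is identical in this strict sense, any difference in what participants report must come from interpretation, not from the stimulus.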

Overcoming Classic Research Hurdles

Scientists have long used ambiguous images to explore the brain’s interpretive functions. However, traditional stimuli came with built-in limitations. When comparing the perception of two different objects, such as a large bear and a small butterfly, the differences in shape, color, texture, and other characteristics create confounding variables. It becomes difficult to determine if the brain is reacting to the object’s identity or simply its low-level visual features. This has made it nearly impossible to study certain cognitive effects in isolation.

Visual anagrams solve this problem by ensuring the visual input remains identical across conditions. The image of the bear and the butterfly is composed of the same pixels, differing only in orientation. This approach provides a controlled environment to probe the secrets of perception. According to Tal Boger, a doctoral student and first author of the research, the special quality of these images is that they force the brain to perceive the same information in entirely different ways, opening new doors for understanding how we transform sensory input into meaningful understanding.

Harnessing AI to Create Illusions

Diffusion Models at Work

To create these unique stimuli, the research team utilized a sophisticated form of artificial intelligence known as a diffusion model. This method involves training an AI on vast datasets of images so it can generate new pictures from textual prompts. The researchers adapted this technology to produce single images that satisfy two different prompts simultaneously, with the interpretation switching upon a geometric transformation such as a rotation. The process allows for the creation of various illusions, including images that change when flipped or skewed, or when their colors are inverted.
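The article does not give implementation details, but work on diffusion-generated visual anagrams typically describes a view-averaged denoising loop: at each step, the noisy image is transformed into each view, the noise is estimated under that view's prompt, the estimates are transformed back, and their average drives the update. A heavily simplified sketch of that idea, with a placeholder in place of a real pretrained denoiser (all function names, prompts, and the update rule are illustrative assumptions, not the team's actual code):

```python
import numpy as np

def mock_denoiser(noisy_image, prompt):
    """Placeholder for a pretrained text-conditioned noise predictor.

    A real implementation would call a pixel-space diffusion model here;
    this stub returns a deterministic pseudo-estimate so the sketch runs.
    """
    rng = np.random.default_rng(sum(ord(c) for c in prompt))
    return 0.1 * noisy_image + rng.normal(0, 0.01, noisy_image.shape)

def anagram_denoising_step(noisy_image, prompts, views, inverse_views):
    """One denoising step for a multi-view visual anagram.

    For each (prompt, view) pair: transform the image into that view,
    estimate the noise under that prompt, map the estimate back to the
    canonical orientation, then average, so one image is pushed toward
    satisfying every prompt in its corresponding orientation.
    """
    estimates = []
    for prompt, view, inv in zip(prompts, views, inverse_views):
        viewed = view(noisy_image)
        noise = mock_denoiser(viewed, prompt)
        estimates.append(inv(noise))
    combined = np.mean(estimates, axis=0)
    return noisy_image - combined  # toy update; real samplers differ

# Identity view for the "bear" reading, 90-degree rotation for "butterfly".
views = [lambda x: x, lambda x: np.rot90(x)]
inverse_views = [lambda x: x, lambda x: np.rot90(x, k=-1)]
prompts = ["a painting of a bear", "a painting of a butterfly"]

image = np.random.default_rng(1).normal(size=(8, 8))
updated = anagram_denoising_step(image, prompts, views, inverse_views)
```

The key constraint is that each view transformation must be invertible and pixel-preserving (rotations, flips, color inversions), which is exactly why the finished images contain the same pixels in every orientation.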

Isolating Perceptual Qualities

The primary goal of this technology is to create a toolkit for testing specific, high-level brain functions. By keeping the pixel data constant, researchers can confidently measure how the brain reacts to abstract concepts. For example, they can create anagrams pairing an animate object with an inanimate one, such as a dog and a truck, to study how the brain distinguishes between living and non-living things. Other potential pairings could explore reactions to different emotional cues or movements, all while eliminating visual variables that previously clouded the results.

Initial Experiments Yield Insights

In one of the first experiments using these tools, the Johns Hopkins team tested the established principle that the brain processes large and small objects differently. Participants were shown visual anagrams featuring pairs of animals with significant real-world size differences, such as an elephant and a rabbit or a bear and a butterfly. They were then asked to adjust the image to their “ideal size.”

The results were consistent and revealing. Even though the bear and butterfly images were physically identical apart from their orientation, participants consistently chose to make the bear version larger than the butterfly version. This finding demonstrates that the brain’s stored knowledge about an object’s true size strongly influences perception, overriding the immediate sensory information. The study confirmed that this classic size effect persists even when all low-level visual cues are held constant, clarifying which perceptual phenomena are truly based on high-level cognitive processing.

A Sharper Tool for Neuroscience

The development of visual anagrams offers significant opportunities for future brain research. Scientists can pair these novel images with advanced neuroimaging techniques like fMRI, MEG, or EEG to pinpoint the specific brain regions responsible for encoding abstract properties. By showing a participant a rotating anagram, researchers could observe how brain activity shifts as the interpretation changes from a bear to a butterfly, thereby mapping the neural circuits that process concepts like size and animacy separately from those that handle basic visual input.

This method also provides a new benchmark for testing and improving computational vision systems. By challenging AI models with stimuli where the label changes but the pixels do not, developers can build more robust systems that better mimic the complex, layered process of human perception. The research provides a versatile and generalizable framework for generating illusions to probe a wide range of cognitive functions.
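A benchmark of this kind could be as simple as checking whether a model's label flips with orientation the way human perception does, even though the pixels are unchanged. A minimal sketch of such a harness, using a trivial stand-in classifier (the real benchmark would plug in a trained vision model; everything here is illustrative):

```python
import numpy as np

def mock_classifier(image):
    """Placeholder for a vision model. This stub labels an image by a
    trivial pixel comparison so the harness is runnable; a real test
    would call a trained network instead."""
    return "bear" if image[0, 0] > image[-1, 0] else "butterfly"

def anagram_consistency_check(image, expected_upright, expected_rotated):
    """Check whether the model's label changes across a 90-degree
    rotation of a pixel-identical anagram in the expected way."""
    upright_ok = mock_classifier(image) == expected_upright
    rotated_ok = mock_classifier(np.rot90(image)) == expected_rotated
    return upright_ok, rotated_ok

# Tiny toy "anagram" whose stub label flips under rotation.
stimulus = np.array([[2.0, 1.0],
                     [0.0, 0.0]])
result = anagram_consistency_check(stimulus, "bear", "butterfly")
```

A model that passes such checks is responding to image interpretation rather than to raw pixel statistics alone, which is the behavior the researchers argue these stimuli can help select for.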
