Shared brain structure creates common visual perceptions


New research reveals that while every human brain is wired uniquely, we perceive the world in remarkably similar ways due to a shared relational structure in our neural responses. A study conducted by a joint team from Reichman University and the Weizmann Institute of Science found that even though different neurons may activate from person to person when viewing the same image, the overall patterns of brain activity maintain a consistent relationship with one another. This advance in understanding the brain’s “representational code” helps explain how shared human experience and communication are possible.

This discovery of a common neural framework moves beyond theoretical models by using direct recordings of human brain activity, offering profound insights into the nature of perception. The findings not only illuminate a foundational question in neuroscience but also have significant implications for the development of artificial intelligence, suggesting that building more human-like AI may depend on replicating this relational coding. By uncovering this invariant property of the visual cortex, the research provides a key to deciphering how the brain organizes information, allowing different individuals to form a consensus reality from sensory input.

Observing the Brain in Real Time

To understand how different brains could produce similar perceptions, researchers needed to overcome the low-resolution limitations of many brain-imaging methods. The team gained a rare opportunity to observe neural activity directly by working with epilepsy patients who had electrodes implanted in their brains for medical reasons. These intracranial recordings provided a high-fidelity window into the brain’s operations as 19 patients participated in a visual recognition task. This methodology allowed scientists to see the live activation of neurons as individuals viewed various images, capturing the raw data of perception as it happened.

The study, published in Nature Communications, was led by graduate student Ofer Lipman and supervised by a team of professors from both Reichman University and the Weizmann Institute. By analyzing data from 244 contacts in the high-order visual cortex, the team could compare neural coding schemes across different individuals with unprecedented precision. This direct access to live neural recordings was critical to moving beyond simulations or inferences, providing concrete evidence of the brain’s coding mechanisms at the cellular level.

A Shared Relational Code

Unique Neurons, Universal Patterns

The investigation revealed a surprising paradox: while the specific neurons that fired in response to an image, such as a cat, varied widely among individuals, the relationships between the activity patterns were remarkably consistent. For example, if one person’s brain response to a cat was neurally more similar to its response to a dog than to an elephant, that same relational structure held true for other participants. The individual notes of neural activity were different, but the melody was the same.

This finding suggests that the brain does not rely on a rigid, one-to-one mapping of neurons to specific concepts. Instead, it employs a more flexible system based on “relational coding.” The researchers found this coding scheme was the most consistent representational method across all individuals, far surpassing other potential models like raw activation patterns or linear codes. It is this preserved relationship between patterns that allows for a shared perceptual experience despite the underlying neural idiosyncrasies of each person’s brain.
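The idea of relational coding can be illustrated with a toy simulation. In the sketch below (all names and numbers are hypothetical; this is not the study’s data or analysis pipeline), each simulated subject embeds the same stimulus structure through a different random set of “neurons,” so the raw activation patterns disagree across subjects, yet the pairwise similarity structure between stimuli is preserved:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3 stimuli with a shared latent structure in which
# "cat" is close to "dog" and far from "elephant".
stimuli = ["cat", "dog", "elephant"]
latent = np.array([
    [1.0, 0.0],   # cat
    [0.8, 0.2],   # dog (close to cat)
    [0.0, 1.0],   # elephant (far from both)
])

def subject_responses(latent, n_units=50):
    """Project the shared latent structure through a subject-specific
    random linear map, mimicking idiosyncratic neural wiring."""
    w = rng.normal(size=(latent.shape[1], n_units))
    return latent @ w

def relational_code(responses):
    """Pairwise correlations between stimulus response patterns:
    the 'relational' representation of the stimulus set."""
    return np.corrcoef(responses)

subj_a = relational_code(subject_responses(latent))
subj_b = relational_code(subject_responses(latent))

# Raw responses differ between the two subjects, but the relational
# structure (cat more similar to dog than to elephant) is shared.
iu = np.triu_indices(len(stimuli), k=1)
agreement = np.corrcoef(subj_a[iu], subj_b[iu])[0, 1]
print(f"subject A, cat-dog vs cat-elephant similarity: "
      f"{subj_a[0, 1]:.2f} vs {subj_a[0, 2]:.2f}")
print(f"cross-subject relational agreement: {agreement:.2f}")
```

Comparing similarity structures across individuals in this way is the core move of representational similarity analysis, a common approach for comparing coding schemes across brains (and, as discussed below, across artificial networks).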

Implications for Artificial Intelligence

The discovery that the human brain uses a relational structure to process visual information has direct implications for the field of artificial intelligence. Many current AI systems, such as artificial neural networks, also exhibit network-to-network variability in their internal representations, much like the individual human brains in the study. Understanding the principles of the brain’s shared coding language can offer a blueprint for designing more robust and efficient artificial networks.

Ofer Lipman, a lead author of the study, explains that this research brings science one step closer to deciphering the brain’s fundamental language for storing and organizing information. This knowledge creates a synergistic relationship between neuroscience and AI development. Insights from the human brain can inspire more intelligent AI, and in turn, artificial networks can serve as powerful models for generating deeper knowledge of human cognition. The study is part of a broader effort to compare information representation in natural and artificial networks, which promises to accelerate progress in both fields.
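Comparing representations between natural and artificial networks is typically done with similarity indices that, like the brain’s relational code, ignore which particular unit carries which signal. The sketch below is a minimal illustration (not the study’s method) using linear Centered Kernel Alignment (CKA), a common measure for comparing network representations: two simulated networks that share latent structure score high, while an unrelated one scores lower.

```python
import numpy as np

rng = np.random.default_rng(1)

def linear_cka(x, y):
    """Linear Centered Kernel Alignment between two representations
    of the same inputs (rows = inputs, columns = units)."""
    x = x - x.mean(axis=0)
    y = y - y.mean(axis=0)
    num = np.linalg.norm(x.T @ y, "fro") ** 2
    den = np.linalg.norm(x.T @ x, "fro") * np.linalg.norm(y.T @ y, "fro")
    return num / den

# Hypothetical demo: two "networks" embed the same 100 inputs through
# different random linear maps of a shared 5-d latent signal.
latent = rng.normal(size=(100, 5))
net_a = latent @ rng.normal(size=(5, 32))   # shared structure
net_b = latent @ rng.normal(size=(5, 32))   # same structure, other weights
unrelated = rng.normal(size=(100, 32))      # no shared structure

print(f"CKA(net_a, net_b):     {linear_cka(net_a, net_b):.2f}")
print(f"CKA(net_a, unrelated): {linear_cka(net_a, unrelated):.2f}")
```

Because CKA is invariant to which unit encodes what, it captures exactly the kind of unit-level variability with preserved relational structure that the study reports in human visual cortex.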

The Foundation of Shared Experience

The research addresses the fundamental question of how humans are able to communicate and cooperate effectively. The ability of different individuals to see the world in a largely similar manner is an essential basis for these complex social interactions. By identifying relational coding as the central neural mechanism underlying shared perceptual content, the study provides a biological explanation for our common ground of experience. It explains how two people looking at the same scene, like a dog running on a beach, can both arrive at the same description despite possessing physically distinct brains.

This shared perceptual framework is what allows for stable, consistent communication and understanding between people. The study underscores that behind every simple, shared observation lies a vast and complex neural code that science is only now beginning to decipher. The discovery of this invariant relational structure marks a significant step forward in understanding how our brains build a collective reality from individual sensory experiences.
