Research Team Uses AI to Understand Musical Instincts

Music is often called a universal language because it appears in every known culture. But how does the human brain come to process music without any formal training? A KAIST research team has used an artificial neural network model to explore this question.

Artificial Neural Network Model Mimics Auditory Cortex

The research team, led by Professor Hawoong Jeong of the Department of Physics, trained an artificial neural network on AudioSet, a large-scale collection of sound data provided by Google. They found that some neurons in the network responded selectively to music while ignoring other sounds, such as animal calls, natural noises, and machinery. These artificial neurons behaved much like the music-selective neurons found in the human auditory cortex, the part of the brain responsible for processing sound.
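The article does not spell out how such selectivity was measured, but a standard approach is to compare each unit's average response to music clips against its average response to everything else. The Python sketch below illustrates that computation with fabricated activations; the unit count, the gamma-distributed responses, and the 0.3 threshold are illustrative assumptions, not the study's actual numbers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-unit activations: rows = sound clips, columns = network units.
# In the study these would come from a network trained on AudioSet; here they
# are fabricated purely to illustrate the selectivity computation.
acts_music = rng.gamma(shape=3.0, scale=1.0, size=(200, 512))  # music clips
acts_other = rng.gamma(shape=2.0, scale=1.0, size=(200, 512))  # animal/nature/machine clips

mu_music = acts_music.mean(axis=0)
mu_other = acts_other.mean(axis=0)

# A common selectivity index: contrast of mean responses, bounded in [-1, 1].
# Values near +1 mean a unit fires for music but stays quiet for other sounds.
selectivity = (mu_music - mu_other) / (mu_music + mu_other + 1e-12)

music_selective = np.where(selectivity > 0.3)[0]  # threshold is illustrative
print(f"{music_selective.size} of {selectivity.size} units flagged as music-selective")
```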

The researchers also discovered that these music-selective neurons encoded the temporal structure of music: they responded far less when a piece was cut into short segments and rearranged in random order. This property was not limited to any one genre but emerged across 25 genres, including classical, pop, rock, jazz, and electronic music.
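The scrambling manipulation itself is easy to picture in code. The sketch below cuts a waveform into fixed-length segments and shuffles them, destroying long-range temporal structure while preserving local content; the sample rate, clip length, and segment sizes are illustrative assumptions, and the random waveform merely stands in for a real music clip.

```python
import numpy as np

def scramble(waveform: np.ndarray, sr: int, segment_ms: float, rng) -> np.ndarray:
    """Cut a waveform into fixed-length segments and shuffle their order,
    destroying long-range temporal structure but keeping local content."""
    seg_len = int(sr * segment_ms / 1000)
    n_segs = len(waveform) // seg_len
    segs = waveform[: n_segs * seg_len].reshape(n_segs, seg_len)
    return segs[rng.permutation(n_segs)].reshape(-1)

rng = np.random.default_rng(0)
sr = 16_000                          # assumed sample rate
clip = rng.standard_normal(sr * 3)   # stand-in for a 3-second music clip

for ms in (1500, 500, 50):           # coarser to finer scrambling
    shuffled = scramble(clip, sr, ms, rng)
    # In the actual experiment, both versions would be fed to the network and
    # the music-selective units' responses compared; finer scrambling should
    # reduce their response if the units encode temporal structure.
    print(f"{ms:5d} ms segments -> scrambled clip of {len(shuffled)} samples")
```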

Musical Ability May Be an Innate Brain Function

The study suggests that musical ability may be an innate cognitive function that emerges spontaneously in the human brain without special learning. The researchers hypothesize that this function evolved as an adaptation for processing natural sounds, such as birdsong or running water. Consistent with this, suppressing the activity of the music-selective neurons impaired the network's accuracy at recognizing other natural sounds, indicating that the neural machinery for music also supports general sound processing.
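Such a suppression test is typically done by ablating, that is zeroing out, the selected units and re-measuring classification performance. The PyTorch sketch below shows the mechanics on a toy two-layer model; the layer sizes, class count, and unit indices are hypothetical, and with the real trained network one would compare accuracy on labeled natural-sound clips before and after ablation.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for the trained network's penultimate layer plus a classifier
# over hypothetical natural-sound categories.
feature = nn.Sequential(nn.Linear(128, 512), nn.ReLU())
classifier = nn.Linear(512, 10)

# Indices of music-selective units found earlier (illustrative values).
music_selective = torch.tensor([3, 17, 42, 99])

def ablate(h: torch.Tensor, units: torch.Tensor) -> torch.Tensor:
    """Silence the given units, mimicking suppression of music-selective neurons."""
    h = h.clone()
    h[:, units] = 0.0
    return h

x = torch.randn(32, 128)             # a batch of stand-in sound embeddings
h = feature(x)

logits_intact = classifier(h)
logits_ablated = classifier(ablate(h, music_selective))

# With the real network and labeled clips, classification accuracy would be
# compared here; the study reports a drop after ablation.
print("intact  :", logits_intact.argmax(dim=1)[:8].tolist())
print("ablated :", logits_ablated.argmax(dim=1)[:8].tolist())
```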

The study was published in the journal Nature Communications on January 16, 2024. It is one of the first studies to use an artificial neural network model to investigate the origin and universality of musical instincts.
