Scientists see the images of thoughts for the first time


Neuroscientists have reconstructed the dynamic visual images that arise in the mind while a person watches randomly selected video clips. In the future, this technology could help doctors see a hallucinating patient's visions on a screen, and let rich eccentrics post their dreams on YouTube.

This fantastic result was obtained at the University of California, Berkeley. Subjects were shown various clips from YouTube (film fragments, trailers and so on) while functional magnetic resonance imaging recorded in detail the activity of cells in different parts of the visual cortex. With this information the scientists were able to reconstruct the viewed images, in color and in motion (see the impressive video at the end of the article).

The quality of the restored clips may seem unimpressive, but it is far better and more accurate than what researchers managed to pull "out of the head" of a person in the first experiment of this kind.

The scientists who created the new program for recognizing mental images took turns as test subjects themselves, because they had to spend hours at a time in the scanner.

To start, the researchers recorded patterns of brain activity while subjects viewed a number of Hollywood film trailers. The biologists built a three-dimensional computer model of the brain divided into small groups of cells (voxels) and recorded how each voxel responds to changes in the shape and motion of objects on the screen. In this way they obtained a rough correspondence between the visual information and the way the cortex processes it.
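The first stage described above amounts to fitting, for each voxel, a model that maps features of the clip to that voxel's response. Here is a minimal sketch of the idea in Python, using purely synthetic data and a plain linear model standing in for the study's actual feature space (all names and numbers are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: 200 one-second time points, 50 stimulus
# features (e.g. outputs of motion filters), 10 voxels.
n_time, n_feat, n_vox = 200, 50, 10
features = rng.standard_normal((n_time, n_feat))      # description of the clip
true_weights = rng.standard_normal((n_feat, n_vox))   # unknown voxel tuning
bold = features @ true_weights + 0.1 * rng.standard_normal((n_time, n_vox))

# Fit one linear model per voxel: how does each voxel respond to
# each feature?  lstsq solves for all voxels at once.
weights, *_ = np.linalg.lstsq(features, bold, rcond=None)

# The fitted weights can now predict activity for unseen clips.
predicted = features @ weights
print(np.allclose(weights, true_weights, atol=0.1))  # → True
```

Once such weights exist for every voxel, predicting the brain's response to any new video is a single matrix multiplication, which is what makes the later search over millions of clips feasible.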



To test and fine-tune the algorithm, the scientists fed it thousands of hours of video taken at random from millions of clips on that same YouTube, obtaining the opposite result: the simulated brain activity that would be observed if a person had watched those videos.

On the left are frames from the clips; on the right are the same frames, but extracted from the observer's head





Finally, the algorithm was run in reverse. While a person watched a test video in the tomograph, the computer picked from the web the 100 clips most likely to have produced that pattern of cell activity. Then, second by second, the program blended footage from those videos, producing a blurry film that matched well what the person was actually seeing.
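The reverse step can be thought of as a nearest-neighbor search: score every candidate clip by how well the brain activity predicted for it matches the activity actually measured, then keep the best matches. A toy illustration, with random numbers standing in for real model predictions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical library of 1000 candidate clips, each summarised by the
# activity a model predicts it would evoke across 30 voxels.
n_clips, n_vox = 1000, 30
predicted_activity = rng.standard_normal((n_clips, n_vox))

# Activity actually measured while the subject watched the (unknown)
# test clip; here we fake it as clip #42 plus measurement noise.
observed = predicted_activity[42] + 0.3 * rng.standard_normal(n_vox)

# Rank clips by correlation between predicted and observed activity.
z_pred = predicted_activity - predicted_activity.mean(1, keepdims=True)
z_pred /= predicted_activity.std(1, keepdims=True)
z_obs = (observed - observed.mean()) / observed.std()
scores = z_pred @ z_obs / n_vox

top100 = np.argsort(scores)[::-1][:100]   # the 100 best-matching clips
print(42 in top100)                        # → True: the real clip ranks high
```

A reconstruction would then average the frames of these 100 clips, weighted by their scores, which is what produces the blurred film described above.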

(More details of the experiment can be found in the paper in Current Biology and the press release from the university.)

"We are opening a window into the movies playing in our minds," says one of the authors, Jack Gallant. Incidentally, Gallant is already known to us from an earlier experiment in which photographs a person had seen were recognized from brain scans.

The scientists explain why recognizing thoughts with magnetic resonance imaging is hard to implement. One problem is that the scanner records changes in blood flow through the cortex, and these change much more slowly than the neural signals do.

That is why this trick previously worked only with static pictures. "We addressed this problem by creating a two-stage model that separately describes the neural signals and the blood-flow signals," said Shinji Nishimoto, lead author of the study.
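The difficulty Nishimoto describes can be illustrated with a toy simulation: a brief burst of neural firing, smeared by a slow hemodynamic response, shows up in the recorded BOLD signal only seconds later. The response shape and constants below are illustrative, not the model used in the paper:

```python
import numpy as np

# A crude hemodynamic response: rises and decays over ~10-15 seconds.
t = np.arange(0, 20, 1.0)                  # seconds
hrf = (t / 5.0) ** 2 * np.exp(-t / 1.5)
hrf /= hrf.sum()

# Fast neural events: brief bursts of firing at t = 10 s and t = 30 s.
neural = np.zeros(60)
neural[10] = 1.0
neural[30] = 1.0

# What fMRI actually records: the neural signal blurred by the HRF.
bold = np.convolve(neural, hrf)[:60]

# The burst at t = 10 s peaks in the BOLD signal only at t = 13 s,
# which is why decoding must model the two stages separately.
print(int(np.argmax(bold[:30])))           # → 13
```

Separating the fast neural stage from the slow vascular stage lets the decoder undo this blurring and follow moving images rather than only still ones.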

Practical application of the new technology may be a decade away. But even in its current raw form it can help neuroscientists better understand what happens in the human mind.
