Using fMRI to reconstruct approximations of visual cortex data

Unbelievable.

From UC Berkeley:

Using functional Magnetic Resonance Imaging (fMRI) and computational models, UC Berkeley researchers have succeeded in decoding and reconstructing people’s dynamic visual experiences – in this case, watching Hollywood movie trailers.

As yet, the technology can only reconstruct movie clips people have already viewed. However, the breakthrough paves the way for reproducing the movies inside our heads that no one else sees, such as dreams and memories, according to researchers.

This is nascent technology, and it’s incredibly limited at the moment. The researchers built a database of 18 million seconds of YouTube video along with the fMRI patterns each second would be expected to produce in the visual cortex. They then compared a subject’s recorded fMRI data against that database, second by second, rebuilding an approximation of what they believe the subject saw by overlaying the 100 best-matching seconds as selected by a computer algorithm. The output is uncannily accurate on shapes and positions. It also seems to get faces, though I suspect faces make up a significant proportion of the randomly selected YouTube clips.
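The matching step described above can be sketched in a few lines. This is a toy illustration of the general idea (my assumption of the approach, not the Gallant Lab's actual model): each library clip has a predicted fMRI response vector, the observed response is correlated against all of them, and the frames of the top-k matches are averaged into a reconstruction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy library: predicted fMRI response and one image frame
# per clip. Real sizes (18M seconds, thousands of voxels) are far larger.
n_clips, n_voxels, frame_size = 1000, 50, 16 * 16
library_responses = rng.normal(size=(n_clips, n_voxels))  # predicted fMRI patterns
library_frames = rng.uniform(size=(n_clips, frame_size))  # pixels for each clip

def reconstruct(observed, k=100):
    """Average the frames of the k library clips whose predicted fMRI
    response best correlates with the observed response."""
    # z-score each library response and the observed response,
    # so the dot product below is a Pearson correlation
    z = library_responses - library_responses.mean(axis=1, keepdims=True)
    z /= z.std(axis=1, keepdims=True)
    o = (observed - observed.mean()) / observed.std()
    corr = z @ o / n_voxels
    top_k = np.argsort(corr)[-k:]  # indices of the k best matches
    return library_frames[top_k].mean(axis=0)

# Simulate an observed response: clip 42's predicted pattern plus noise
observed = library_responses[42] + 0.1 * rng.normal(size=n_voxels)
recon = reconstruct(observed, k=100)
```

Averaging the top matches is why the published reconstructions look blurry: the output is a blend of many roughly similar clips, not a pixel-for-pixel readout of the brain.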

No, we can’t read people’s minds, or record your dreams about Summer Glau. At least not yet.
