Soon, we’ll be reading your minds!



No, not really, but this is still a cool result: investigators have used an MRI to read images off the visual cortex. They presented subjects with some simple symbols and letters, scanned their brains, and read off the image from the data — and it was even legible! Here are some examples of, first, the images presented to the subjects, then a set of individual patterns from the cortex read in single measurements, and then, finally, the average of the single scans. I think you can all read the word “neuron” in there.

Reconstructed visual images. The reconstruction results of all trials for two subjects are shown with the presented images from the figure image session. The reconstructed images are sorted in ascending order of the mean square error. For the purpose of illustration, each patch is depicted by a homogeneous square, whose intensity represents the contrast of the checkerboard pattern. Each reconstructed image was produced from the data of a single trial, and no postprocessing was applied. The mean images of the reconstructed images are presented at the bottom row. The same images of the alphabet letter "n" are displayed in the rightmost and leftmost columns.

Before you get all panicky and worry that now the CIA will be able to extract all of those sexy librarian fantasies out of your brain by aiming a gadget at your head, relax. This is an interesting piece of work, but it has some serious limitations.

  • This only works because they are scanning the part of the visual cortex that exhibits retinotopy — a direct mapping of the spatial arrangement of the retina (and thus, of any images falling on it) onto a patch of the brain at the back of your head. This won’t work for just about any other modality, except probably touch, and I doubt it will work for visualization/cognition/memory, which are all much more derived and much more complexly stored. Although I’d really like to know if someone closes their eyes and merely imagines a letter “E”, for instance, whether there isn’t some activation of the visual cortex.

  • The process was time-consuming. Subjects were first recorded while staring at random noise for 6 seconds in 22 trials. This was necessary to get an image of the background noise of the brain, which was subtracted from subsequent image measurements. The brain is a noisy place, and the letter pattern is superimposed on a lot of background variation. Then, finally, the subject has to fixate on the test image for 12 seconds.

  • Lastly, a fair amount of math has to be flung at the scan to extract the contrast information. This is probably the least of the obstacles, since computational power seems to increase fairly rapidly.
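
For the curious, here is very roughly what "flinging math at the scan" can look like. This is only a toy sketch in Python, with invented array sizes, noise levels, and ridge penalty, and it is not the authors' actual pipeline: it fits a plain linear map from voxel responses to the contrast of each image patch using random-stimulus trials, then applies that map to new scans and averages across repeated trials, in the spirit of the single-trial and mean reconstructions in the figure above.

```python
import numpy as np

# Toy dimensions -- invented for illustration, not the paper's real counts.
n_voxels = 800        # fMRI voxels from early visual cortex
n_patches = 100       # a 10 x 10 grid of image patches
n_train = 440         # trials with random checkerboard stimuli
n_test_reps = 8       # repeated presentations of the test image

rng = np.random.default_rng(0)

# Simulated training data: random patch contrasts and the (noisy) voxel
# responses they evoke under an assumed linear response model.
train_stimuli = rng.integers(0, 2, size=(n_train, n_patches)).astype(float)
true_weights = rng.normal(size=(n_patches, n_voxels))
train_voxels = train_stimuli @ true_weights + rng.normal(scale=2.0, size=(n_train, n_voxels))

# Step 1: learn a linear map from voxel pattern to patch contrasts
# (ridge regression, one weight vector per patch).
lam = 10.0
A = train_voxels.T @ train_voxels + lam * np.eye(n_voxels)
B = train_voxels.T @ train_stimuli
decoder = np.linalg.solve(A, B)              # shape: (n_voxels, n_patches)

# Step 2: decode each single test trial, then average across repeats,
# mirroring the single-trial rows and the "mean image" row in the figure.
test_image = rng.integers(0, 2, size=n_patches).astype(float)
test_voxels = test_image @ true_weights + rng.normal(scale=2.0, size=(n_test_reps, n_voxels))
single_trial_recon = test_voxels @ decoder   # one reconstructed image per trial
mean_recon = single_trial_recon.mean(axis=0)

# With this toy setup the averaged reconstruction correlates well with the stimulus.
print(np.corrcoef(mean_recon, test_image)[0, 1])
```

The actual paper combines "local image decoders" at several patch scales rather than a single-scale map like this, but the train-then-decode-then-average structure is the same basic idea.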

Give this research some more time, though, and I can imagine some uses for being able to record specific aspects of brain states. I’d be more interested in a device that can read pre-motor cortex though — I’d like to get rid of this clumsy keyboard someday.


Miyawaki Y, Uchida H, Yamashita O, Sato M-a, Morito Y, Tanabe HC, Sadato N, Kamitani Y (2008) Visual Image Reconstruction from Human Brain Activity using a Combination of Multiscale Local Image Decoders. Neuron 60(5):915-929.

Comments

  1. Greg says

    I too want to know if simply imagining something could be scanned. It may be that shortly down the line, we could create a feedback loop to let people create and perfect any image by concentrating on what it looks like, then reviewing what the computer ‘sees’.

    This technology is pretty awesome, and I can’t wait to hear more about it.

  2. says

    It’s sort of the reverse of experimentally using cameras and implanted electrodes in the “retinotopic map” to give vision to blind people, which was done quite some time ago.

    Very cool to be able to read it without any harm or direct contact, however.

    Glen D
    http://tinyurl.com/6mb592

  3. says

    Neat. What impresses me is they seem to have gotten a fairly clear "e" despite that being perhaps the most "complex" of the shapes: it has six corners and two small, essentially interior regions (three if you count the white central crossbar). The next most "complex" shape is the dark square in the middle (3rd from left), with eight corners but only one interior region, which is reasonably large and also invariant if rotated 90deg. Both, to me, look to be consistently the least-well-resolved. Not surprising, I suppose, but to be able to get what seems to be recognizable as an "e" and a dark-square-in-a-light-border is amazing.

    What I would have liked to see are the images without knowing what they are supposed to be. Would I then still see/reconstruct the originals?

  4. Mike P says

    Although I’d really like to know if someone closes their eyes and merely imagines a letter “E”, for instance, whether there isn’t some activation of the visual cortex.

    Kosslyn did some work like this, and the answer seems to be yes, especially when doing mental object rotations.

  5. JM Inc. says

    You and me both, PZ! There’s nothing stupider than having to push all my important information through these little wands on the ends of my arms. What moron designed that one? Oh wait…

  6. Eric says

    “Although I’d really like to know if someone closes their eyes and merely imagines a letter “E”, for instance, whether there isn’t some activation of the visual cortex.”

    I vaguely remember hearing about a study that checked exactly that question, and yes – there is some activation of the visual cortex.

    “Proc Natl Acad Sci U S A. 1993 December 15; 90(24): 11802-11805.
    PMCID: PMC48072
    Activation of human primary visual cortex during visual recall: a magnetic resonance imaging study.
    D Le Bihan, R Turner, T A Zeffiro, C A Cuénod, P Jezzard, and V Bonnerot”

    Indeed- a quick search shows that my memory (at least) is working!

    cheers-

    Eric

  7. says

    Comments on NDEs in 3, 2, 1….

    Do you mean Non-Destructive Examination (seems appropriate, at least in name), or Near Death Experiences, or something else…?

  8. Brownian, OM says

    So, is this how Dr. Venkman was able to help the pretty research subject guess the correct symbol in Ghostbusters?

    I lent out my copy of This Is Your Brain On Music, but in it Levitin suggests that sounds are processed by the brain in such a way that melodies can similarly be extracted with electrodes in the right places, and that it didn’t make much of a difference whether the subject was actually listening to music or just remembering it.

  9. says

    Before you get all panicky and worry that now the CIA will be able to extract all of those sexy librarian fantasies out of your brain by aiming a gadet at your head, relax. This is an interesting piece of work, but it has some serious limitations.

    Yes, yes, yes, they can already do it!!! If not, it’ll only be months, not years, before they can.

    What’s a gadet?

  10. Matt, Sexual Jihadist says

    Dream recordings on the way!

    Aha, one post and someone already suggests it. Honestly, someone is probably already getting ready to patent a DC Mini right now.

    That’s just about the coolest thing I can imagine, short of letting others actually share your dreams. Hooray for science!

  11. Sili says

    Wouldn’t a tinfoil hat obstruct the MRI?

    I’d imagine that the “Imagine an E” exercise would be very variable.

    When I try to picture something I never have anything resembling a verisimilar ‘visual’ experience – I don’t see stuff floating before my eyes. But that, I think, is the experience for some people (artistic types?).

    It’d be interesting to see if there is indeed an encephalic difference between ‘visual’ and ‘non-visual’ thinkers.

  12. says

    “I’d be more interested in a device that can read pre-motor cortex though — I’d like to get rid of this clumsy keyboard someday.”

    I agree, but I bet there are a few people who would like to get rid of their clumsy wheelchairs too.

  13. llewelly says

    What impresses me is they seem to have gotten a fairly clear “e” despite that being perhaps the most “complex” of the shapes

    ‘e’ is a very important letter in the English language. I would like to know if ‘e’ would be as well resolved in the brains of people who don’t know English – especially those who don’t know any language using the Latin alphabet.

  14. LightningRose says

    Tinfoil hats! Tinfoil hats! Tinfoil hats!

    Gitcher tinfoil hats right here! Only $5!

    Order today, before the Guvmint knows you want one and arrests you for it!

  15. dNorrisM says

    OTP, but everybody draw a “Q” on your forehead and tell us if it is mirrored WRT an observer. HT to SCIAM a few months ago.

    PS: Interesting article.

  16. Marshall says

    I heard about this project a few months ago, and it’s incredibly interesting. I’d like to note that there are a LOT of improvements that can obviously be made from it. For starters, their algorithm to extract the contrast levels is the simplest possible. They looked at the most naive correlations and used simple additive processes to come up with the contrast images. A better approach would be to come up with some nonlinear system (or something more complicated, at least) to better pull out the images (a toy sketch of that idea follows below).

    Our retinotopic map is also reproduced some 47 times throughout the brain (that we know of). Reading multiple maps may or may not be feasible, but it would definitely help a lot.
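
    As a rough illustration of the kind of nonlinear decoder being suggested here (purely a toy sketch with placeholder data and made-up parameters, not anything from the paper), one could swap the additive combination for an RBF kernel regression on the voxel pattern:

    ```python
    import numpy as np
    from sklearn.kernel_ridge import KernelRidge

    # Placeholder arrays: voxel patterns (trials x voxels) and the patch
    # contrasts that produced them (trials x patches). Shapes are invented.
    rng = np.random.default_rng(1)
    train_voxels = rng.normal(size=(440, 800))
    train_patches = rng.integers(0, 2, size=(440, 100)).astype(float)
    test_voxels = rng.normal(size=(8, 800))

    # A nonlinear decoder: RBF kernel ridge regression, fit jointly over
    # all patches (KernelRidge handles multi-output targets directly).
    decoder = KernelRidge(kernel="rbf", alpha=1.0, gamma=1e-3)
    decoder.fit(train_voxels, train_patches)

    recon = decoder.predict(test_voxels)   # one reconstructed image per trial
    mean_recon = recon.mean(axis=0)        # average across repeats
    ```

    Whether something like this actually beats a linear combination would depend on the data; the point is only that the decoding step is modular and easy to swap out.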

  17. Jeeves says

    There was an episode of 60 Minutes from a month or two ago that dealt with this technology. In fact, if I remember correctly, patients (usually paralyzed or with ALS) with electrodes attached to their heads were able to write complete sentences using nothing but their minds. It takes a great deal of time, because the computer has to pick out each letter individually, but it’s possible.

    http://www.cbsnews.com/stories/2005/09/08/podcast_60min/main828230.shtml

    Go to Jobs, Narcotics, Brain Power 11/3/08

  18. Doug says

    Last night on the Science Channel, a woman who is blind was the subject of an experiment to inject images into the brain via the visual cortex. This is the inverse of what this article is about, and seems to be much more useful.

  19. says

    As cool as this seems, I have to admit a certain amount of skepticism to the notion that parts of my brain actually draw pictures of what I see. That seems a little too pat, a little too convenient. Is there any chance of pareidolia here?

  20. says

    I see good uses for this. I also see the potential for a huge amount of quackery, but I think that’s outweighed by the good uses. If it is possible to get decent images out of internally-visualized shapes, that could be a truly wonderful development in assistive technology for people with motor or speech impairments. The brain-noise filtration could probably be calibrated much better on a personal device over the long term (especially if the user could get immediate feedback on what the machine was “seeing”).

  21. DonHomer says

    Something very similar to this is discussed a bit in “How the Mind Works” by Steven Pinker, I believe (a book I found fascinating even though I only have a very, very, very basic idea of what goes on in that big pile of goop).

  22. Jay says

    The logic for recognizing the images can be enhanced with a simple fuzzy logic algorithm (non-linear, as was suggested) or it can use a learning neural network that requires positive and negative stimuli.

    Fun stuff, wish I could have stuck with it, but RL intrudes.

  23. CJO says

    I have to admit a certain amount of skepticism to the notion that parts of my brain actually draw pictures of what I see. That seems a little too pat, a little too convenient.

    As PZ points out in his post:

    This only works because they are scanning the part of the visual cortex that exhibits retinotopy — a direct mapping of the spatial arrangement of the retina (and thus, of any images falling on it) onto a patch of the brain at the back of your head.

    So the answer is “yes and no.” Only at the most basic level of processing in the primary visual cortex is anything analogous to “drawing a picture” happening, and, there, only to the extent that it’s a map of photoreceptor activity on the retina.

  24. says

    I wonder if this can be used to give back the power of sight to the blind… That would be an interesting development. Kind of like Geordi La Forge from Star Trek.

  25. Brownian, OM says

    I don’t see stuff floating before my eyes.

    The other day I was thinking about the nature of the voice in my head (ViMH). While it generally shares the linguistic qualities of my voice such as diction and incorporates variations of pitch and loudness to distinguish questions and exclamations from statements, it has no readily-discernible qualities of timbre or pitch. In this way it contrasts with the voices of singers (or even my own voice) when remembering a song or a phrase, which are generally remembered with high fidelity. (For instance, I can remember and differentiate the variations of “D’oh!” exclaimed by Homer in that one montage). But the ViMH is not nearly as memorable. Of course, I can seamlessly read using the ViMH, even while saccading around the page, and it seems to pick up wherever my eyes are without a problem. But if I try to read in my head using say, Homer’s voice, I can only do so as long as I read no faster than relatively quick human speech. Any faster than that, and it devolves into the generic ViMH again.

    And really people: do I have to be the first to bring up 1984’s Dreamscape with Dennis Quaid, Max von Sydow, Christopher Plummer, and Kate Capshaw?

  26. Charlie Foxtrot says

    Heh heh heh… librarians… yeah…

    …wait…

    PZ! How did you know about that!!!

    (Hmmm…What’s this round, sucker-like mark on my temple?)

  27. says

    Pretty cool, though looking at the figure without reading the paper is a little misleading. It makes it look as if the fMRI activation pattern is actually arranged in a manner that reproduces what the eyes see. Actually, people have been using similar pattern classification techniques with fMRI for several years now (some cool things with memory – Lewis-Peacock, J. Neurosci. – 2008).

    @#25, I was thinking the same thing… how much better can it get if they take all those other retinotopic maps into account (or the whole brain for that matter).

  28. says

    @#36, unlikely. You’d need to train the system on a bunch of stimuli first since for each person the activity pattern is different. So, unless you scanned the blind person before they went blind, it won’t work (and it certainly won’t work with people who were born blind).

  29. mayhempix says

    I love this stuff.

    Soon we can all be brains in glass jars with tentacles for amusement purposes like having sex and throwing shoes.

  30. says

    It makes it look as if the fMRI activation pattern is actually arranged in a manner that reproduces what the eyes see

    I was wondering about that, since the retinotopic map is known to be quite “distorted” by comparison with our field of vision. So how could the letters come out so well?

    Thanks for finding out what’s really going on.

    Glen D
    http://tinyurl.com/6mb592

  31. Hank Fox says

    They tried this with President Bush several times, and got nothing but background noise. Well, one trial returned the image of a shoe, but they threw out that result.

  32. RickrOll says

    Seems like the sort of thing that would benefit from the genetic algorithms/genetic programming that was mentioned in the “Evolving the Mona Lisa” thread, so that it could quickly reach high-fidelity transcription of mental imagery, with regard to giving sight to those without or to reconstructing imaginary/remembered images.

    It almost seemed inevitable that this would happen. But DAMN that was quick!

  33. RickrOll says

    “Yes! YES! PUNCH THE KEYS FOR GOD’S SAKE!”
    /internal voice, commanding my future laptop to type itself

  34. Peter Ashby says

    Yes, it is cool that they can do it with a scanner, but the guys recording from individual visual cortex neurons in cats were doing this sort of thing at a coarser level some time ago. For example, there are neurons that react to left-sloping diagonal lines, others that respond to right-sloping diagonals, some that respond to horizontal lines, and so on. Can you see how that relates to the above images and why they chose them?

    Those into computer image making will recognise the building blocks of an image there. So what they are imaging in the scanner are collections of cells mapped across the visual field. So I expect this is more a breakthrough in scanner sensitivity, computer crunching, and algorithms for data extraction than a real biological breakthrough in terms of understanding. The guys with recording electrodes have done this; they are just limited in terms of the area of cortex they can record from at once, hence the scanner.

  35. says

    Hey PZ, did you know you’ve got WorldVision advertising in your side bar? Little girl praying and everything. Ick.

    But then, it’s nice to see them wasting advertising bucks on you. ;)

  36. Doc Bill says

    Sexy librarian.

    That’s not me, right?

    I mean, maybe once or twice but, seriously, no more than that.

    And I never, never thought of Sarah Palin as a “librarian” unless “librarians” wear French maid outfits.

    But, seriously, no more than once. Or twice. Certainly less than a handful.

  37. AnthonyK says

    So they look into the brain and find “neuron”; well, what did they expect? If they’d looked somewhere else they would’ve got “synapse” or “dendron”. This is no more than evidence that the almighty has labelled his creation deep down.
    Scientists, step away. Nothing to be seen here.

  38. Nomad says

    For those who want the DNI (direct neural interface) for their computers… well, there is one on the market now. I don’t want to gamble with including a link to see if it works or not, so just google the “OCZ neural impulse actuator”.

    The word is that it’s still a very primitive device (no big surprise there), and I don’t want to give the impression that you can just plug this thing in and control your PC with your mind… I don’t honestly think it’s anything more than a nifty gimmick at this stage. It supposedly does work, but only at the level of mostly picking up on nerve impulses coming out of the brain, rather than the brain itself. You can use it to control a few key binds, training it to recognize a certain type of impulse to deliver a certain keystroke. It’s not like you think “a” and it types the letter A.

    Still I had to mention it. All this neural mapping stuff is impressing me. In my sci-fi vision of the future such things were usually assumed to come later. We aren’t getting flying cars and cities in space, but I’m starting to think we’re on track to get computers that can be controlled “telepathically”. At least with the assistance of an implant or some sort of clever headwear.

  39. noahpoah says

    This won’t work for just about any other modality

    There is evidence of tonotopic mapping in early auditory cortex, maybe something roughly analogous could work for audition.

  40. The Adamant Atheist says

    This is an exciting development. I love when scientific understanding fills in gaps where gods or spirits once lurked.

    Increasingly, there are fewer dark areas in which the enemies of reason can find refuge for their wild assumptions.

  41. says

    but it has some serious limitations.

    Like needing a machine that’s the size of a dump truck to take the readings. I’m not worried about the CIA using this for the foreseeable future.

    Nevertheless, it’s interesting. Being able to match any neural activity to particular thoughts or actions is potentially interesting for lots of reasons, including potential use for prosthetics. Maybe some day I’ll be able to think the letter ‘e’ and some computer will type it for me. This could vastly improve my typing abilities.

  42. John C. Randolph says

    It’s very interesting work. It reminds me of a customer I had at the NIMH many years ago who was studying the effects of brain injury on pattern recognition.

    The company I was working for at the time provided him with a system to display images with specific amounts of white noise added, so he could run experiments to see how much signal/noise the test subjects needed before they could identify the image.

    -jcr

  43. says

    What impresses me is they seem to have gotten a fairly clear “e” despite that being perhaps the most “complex” of the shapes

    ‘e’ is a very important letter in the English language. I would like to know if ‘e’ would be as well resolved in the brains of people who don’t know English — especially those who don’t know any language using the Latin alphabet.

    Good point. That’s both an excellent variation on my desire/wish to have not seen first what the images are supposed to be, and probably much more importantly, an interesting subject in itself. More generally, show the subjects glyphs from a language they don’t know (with a few fake ones, perhaps, thrown in as checks). Numerous other variations and tests can be imagined.

  44. tielserrath says

    I’m interested in the way the horizontal lines show up better than the vertical, particularly clear in the ‘plus sign’ cross. This fits in with what I was taught – that the brain focuses on horizontal lines better than vertical ones. That’s why you turn a chest X-ray on its side to look for a subtle pneumothorax.

    Of course the horizontal line in the pictures here may be appearing clearer to me for the same reason…

  45. Dave Wisker says

    Hopefully they will soon be able to figure out that old dormroom bong party question about seeing the same colors. I’m on pins and needles here.

  46. Leon says

    This is exciting though! Consider the possibility of using this to post pictures of criminal suspects–no more crude composite sketches.

  47. Nova says

    Although I’d really like to know if someone closes their eyes and merely imagines a letter “E”, for instance, whether there isn’t some activation of the visual cortex.

    According to Steven Pinker’s How the Mind Works, all images in your head, including imagined or remembered ones, are splashed onto the visual cortex. It also details an experiment with a monkey in which this exact same thing was done, with the monkey looking at a bullseye and the image being extracted from its brain. However, in those days they had to remove the monkey’s brain and inject the visual cortex with a radioactive isotope of glucose to see the image.

  48. DustWolf says

    Hooking this up to a synthesizer could work a lot better, since tones are more approximate. Then I could be a musician.

  49. Raj says

    I think PZ’s description of the training session as ‘Subjects were first recorded while staring at random noise for 6 seconds in 22 trials. This was necessary to get an image of the background noise of the brain, which was subtracted from subsequent image measurements’ is incorrect.

    If I understand the paper correctly, then I think that the point of the random noise recordings was to build a model of how fMRI responses relate to visual input, rather than to ‘get an image of the background noise of the brain’.

  50. says

    tielserrath (#65),
    You make an astute observation here, one that fits with what we know about the retina and primary visual cortex. Acuity falls off much more quickly in human vision in the vertical than the horizontal dimension. I haven’t read the full paper yet, but I wonder if the authors mention this.
    Here’s a blog citation to recent psychophysical work…
    http://primaryvisualcortex.wordpress.com/tag/fovea/

  51. RickrOll says

    “However, in those days they had to remove the monkey’s brain and inject the visual cortex with a radioactive isotope of glucose to see the image.”- NOVA #69

    Wow. Glad we’ve progressed past that! Though even if we hadn’t, I can see the benefit of analyzing the brains of Christians or mediums, etc. for what “God” supposedly looks like. And using a similar tech for auditory recognition would then be able to tell us what God sounds like. Pretty soon we will have enough evidence to put God on trial after all! ;) HAhahahha

    And this would be a fascinating study for people with synesthesia. Hopefully the technology will continue to miniaturize to the point where we can give detailed and accurate witness testimony. “God of the Gaps” would soon enough have nowhere to hide. It would merely be a question of whether or not we wanted to remember and share the information, at such a point. But hey, this is like 2050 I’m talking about, being rather optimistic.

    I’m disappointed this thread didn’t get more attention. That, and the “Let’s See NASA Change” thread. Fascinating; and the latter, vitally important. *sigh*