I Get Email: fMRI and Autism


I did receive one response to my talk at SkepTech that wasn’t entirely positive.

Good afternoon Stephanie,

I was one of the attendees at SkepTech in Minneapolis last weekend, and had asked you a question as to whether MRIs have contributed to discovering autism in children at a young age (or at all, for that matter). I remember your reply, where you said that MRIs weren’t advanced enough to make those kinds of detections without the need for physically splitting someone’s head open and investigating.

Curiosity got the best of me, and I decided to look around on the internet. I found the following scientific paper, “State-dependent changes of connectivity patterns and functional brain network topology in Autism Spectrum Disorder” on arxiv.org, a reputable source containing a library of scientific papers available to the public. Within this paper, another paper from 2007 by Alexander et al. is discussed, reporting that “structural MRI studies have reported abnormal developmental trajectory of brain growth, with evidence of poorly organized white matter”.

My questions to you are:

  • What is your reaction to this new information?
  • Given that your website shows no credentials of neuroscience or anything related to which you may have degrees in, why would you attend a conference and answer questions so confidently without knowing more about these subjects and the science behind these matters?

Thank you for your time. I hope you have a great week!

All right. Let’s go through my mental processes on this one, shall we? My first reaction is to look at the paper. Doing so, I notice a lot of this:

Functional data were preprocessed using statistical parametric mapping software (SPM5;
http://fil.ion.ucl.ac.uk/spm). The first 4 volumes of each run were discarded to allow for longitudinal relaxation time equilibration. EPI images from all sessions were slice-time corrected and aligned to the first volume of the first session of scanning to correct for head movement between scans. There was no excessive motion in any of the scans (lower than 3 mm). A mean image was created using the realigned volumes. T1-weighted structural images were first co-registered to the mean EPI image of each participant. Normalization parameters between the co-registered T1 and the standard MNI T1 template were then calculated, and applied to the anatomy and all EPI volumes. Data were then smoothed using an 8 mm full-width-at-half-maximum isotropic Gaussian kernel to accommodate for inter-subject differences in anatomy.

This isn’t the sort of methodology I have the background to critique. I know very little about the details of how fMRI studies are done or interpreted.

However, that doesn’t mean that I know nothing about them. That’s because these studies are very much like other studies using massive amounts of data from individual subjects. The same statistical math applies. The same “green jelly bean” problem applies: run enough comparisons, and some of them will come up “significant” by chance alone.

So the first thing I do is look at the number of test subjects (n) from the study. Here, n = 24. That is…small. Tiny, in fact. (This is currently a big problem in neuroscience in general.) So when I see a figure like this that tells me it’s showing correlations in multiple brain regions, I get very, very cautious. How many false positives are we looking at here? I’m not a statistician, but I’d want a very good one to reassure me that these tests were done correctly.
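To see why a small sample plus many regions makes me nervous, here’s a toy simulation (mine, not the paper’s; all the numbers are made up): compare two groups of 12 subjects across 100 “regions” where there is, by construction, no real difference at all, and count how many still cross the usual p < 0.05 threshold.

```python
import random
import statistics

random.seed(0)

def t_stat(a, b):
    """Two-sample t statistic (equal-variance form)."""
    na, nb = len(a), len(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    pooled = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / (pooled * (1 / na + 1 / nb)) ** 0.5

# Two groups of 12 subjects (n = 24 total), compared across 100
# "brain regions" where there is NO real group difference.
hits = 0
for region in range(100):
    group_a = [random.gauss(0, 1) for _ in range(12)]
    group_b = [random.gauss(0, 1) for _ in range(12)]
    # |t| > 2.07 is roughly p < 0.05 (two-tailed) at 22 degrees of freedom
    if abs(t_stat(group_a, group_b)) > 2.07:
        hits += 1

print(hits, "regions 'light up' out of 100 -- every one a false positive")
```

On average about five regions “light up” anyway, which is exactly why uncorrected multiple comparisons on a small sample make pretty brain pictures untrustworthy.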

[Figure: several brains under several conditions, each with several spots “lit up” to show significant findings.]

That caution on all these individual tests, however, doesn’t necessarily mean that the overall results of the study are incorrect. It doesn’t mean that we can’t diagnose autism via fMRI. As a positive sign in fMRI’s favor, the results of this study are in line with several other studies cited by the authors. Replication can quickly start to settle my unease over multiple testing.

Then I wonder why I’m not hearing more about this from the science-loving autism community. I do follow them. I guess this kind of finding could have slipped under my radar. They could be unfairly dismissing all fMRI results. They could just be cautious after having been promised answers so many times. I don’t know. So I go looking.

I see scholarly articles, but I’m still not in a position to usefully take in jargon-laden technical information on fMRI. I see various institutions that do scanning telling me this is a great way to diagnose autism. Then I see this, a bit of science reporting from Nature:

But three studies published in 2012 have come to the same conclusion: head motion leads to systematic biases in fMRI-based analyses of functional connectivity [2, 3, 4]. Specifically, motion makes it appear as if long-range connections are weaker than they really are, and that short-range connections are stronger than they really are.

This bias affects all functional connectivity analyses, but it is particularly insidious for studies of autism. That’s because it would lead to precisely the patterns that have been observed in fMRI scans of children with autism, and because children with autism typically move more than unaffected children do.

Ah. Ouch. The study above doesn’t use any of the suggested means of correcting for this problem. I wouldn’t expect it to, since it came out the same year as these studies, but it does mean that we’re looking at a behavioral test for autism here, even if the equipment used was very expensive. It also means that any test based on these results is likely to misdiagnose as autistic other children who simply squirm.

So now I’m back to not knowing whether fMRI has any usefulness for diagnosing problems in the brain that aren’t well localized, and to continuing to be skeptical of any fMRI results that aren’t well regarded by people who both understand the problems fMRI studies have and understand, far better than I do, the language in which the studies are written.

Now, on to your second question. Why did I attend this conference? Because I was asked, and because the organizers and I agreed on a topic we felt I could do justice to based on my recent work, not on credentials I might have earned decades ago and might not have used in any useful way since.

(As a side note, I don’t generally flash around what credentials I do have, for a few reasons. The first is that I want people to follow my arguments, not accept or reject them based on an old degree or two. The second is that I want people to feel comfortable challenging those arguments, preferably in an evidence-based way. The third is that I find it fascinating who rejects my arguments based on being unable to tell whether I have credentials.)

As for speaking confidently, let’s look at what I said in the session. You asked whether brain imaging/scanning technology could diagnose conditions like autism. My response:

Not right now. Generally, you’re going to be talking about the functional MRIs? [nod] They’re not really there yet and I don’t know that we know enough about how brains differ in doing what they do to really predict whether they’re going to get there. It may be that the kind of results we get from that are so different on a person-by-person basis that they never become diagnostically useful. They’re very pretty, but they suffer from the fact that when we look at a picture, we tend to believe it more.* So at this point, take fMRIs with a big grain of salt.

I expanded on this in the hallway later when you insisted that imaging technology would reach the point of being diagnostic. This was when I noted that fMRI will generally deal better with conditions that strongly affect specific regions of the brain. (Larger, more localized changes will look less like noise on scans.) It was also when I noted that, depending on the underlying mechanism, we may not reach the scale of imaging we need for certain conditions.

The only things I spoke “so confidently” of are the fact that we have a lot of unanswered questions about where brain imaging will go and that our society’s confidence in fMRI is entirely out of proportion to the maturity and current usefulness of the tool. Neither of these is anything one has to be an expert in neuroscience to know. One just has to pay attention to the scientific press amplifying the concerns of scientists in the field. That’s what I did. I have no problem relaying the ideas of experts to a wider audience. Confidently, even.

Still, those two links are going to give you much more information on why fMRI is to be taken with that grain of salt than talking to me in the hallway will. I hope that helps.

*For those who haven’t watched the video, this is a callback to a problem I mention in my talk, which is that we tend to believe more that something is a singular “thing” once we’ve given it a name.

Comments

  1. says

    Also, be wary of arxiv. It is NOT peer-reviewed, and the quality of the papers there varies from great to batfucking nuts. In particular, a lot of the biology/biomedical papers there seem to be things that couldn’t pass peer review — too many biomedically-focused researchers seem to see it as an escape hatch, a place to stuff work that is otherwise unpublishable but can still be listed on their CVs.

  2. bruce says

    Your asterisk * above may be related to http://en.wikipedia.org/wiki/Reification_(fallacy) the concept of reification, which can be a fallacy if not properly established.
    My favorite example of this was Stephen J Gould’s discussion of the zebra. People assume that zebras are a thing. Not true! Zebras are species of horses which evolved stripes. This arose several times independently. But most people talk as if zebras were one split from horses, which then diverged. That would be a grouping of common descent, but genetics apparently has shown that zebras as a whole are not such a grouping.
    If a label refers to something that isn’t real, it can still be useful, but only after it is established as to what is meant there. Avoiding the cases where reification can be a fallacy can be a challenge. So thanks for getting it right.

  3. sinethetaprime says

    I was with the person who asked you this question throughout the weekend; there’s no possibility that you spoke to this person outside of your individual presentation. Are you mixing up the people you have spoken to?

  4. says

    That could be. One of the people who asked a question during the session followed up right after the session on my answer to this question. I assumed they were the same person, but I can’t verify from the video.

  5. DeepThought says

    I do lots of fMRIs myself.

    Some of your objections are valid and some are not. In short:

    Small sample size is not a valid objection, as long as the statistical tests are properly performed. You do need a larger effect size to detect a statistically significant result with a smaller sample size.

    The multiple comparison problem is not a valid objection, as long as there is a valid correction for multiple comparisons (which is included in all neuroimaging software these days, including SPM, the widely used package this paper relied on).

    Head motion could indeed be a confound if it was not controlled for in the analysis. The exact impact of head motion is still under debate in the neuroimaging community, but it is reason to take results with a grain of salt when it is not properly controlled for in the analyses.

    The biggest problem is this: your questioner does not seem to understand the difference between merely showing a statistically significant effect (a difference between average values of two groups) and the ability to use that difference to accurately classify subjects based on data (which is what diagnosis is), for which you would need a much larger effect size. For this we need not simple t-tests or correlations, but machine learning. There are some things which can be accurately classified using fMRI using these techniques. I’m not sure whether autism is one of them, and it’s too late for me to look it up now.
