(Update 27/4/2009: For a methodological problem which could cast doubt on some (but not all) of the kind of research that I discuss below, see this newer post.)
In the last couple of weeks we've seen not one but two reports about "reading minds" through brain imaging. First, two Canadian scientists claimed to be able to tell which flavor of drink you prefer (Decoding subjective preference from single-trial near-infrared spectroscopy signals). Then a pair of Nashville neuroimagers said that they could tell which of two pictures you were thinking about through fMRI (Decoding reveals the contents of visual working memory in early visual areas); you can read more about this one here. Can it be true? And if so, how does it work?
Although this kind of "mind reading" with brain scanners strikes us as exciting and mysterious, it would be much more surprising if it turned out to be impossible. That would mean that Descartes was right (probably). There's nothing surprising about the fact that mental states can be read using physical measurements, such as fMRI. If you prefer one thing to another, something must be going on in your brain to make that happen. Likewise if you're thinking about a certain picture, activity somewhere in your brain must be responsible.
But how do we find the activity that's associated with a certain mental state? It's actually pretty straightforward - in the sense that it relies upon brute computational force rather than sophisticated neurobiological theories. The trick is data-mining, which I've written about before. Essentially, you take a huge set of measurements of brain activity, and search through them in order to find those which are related to the mental state of interest.
The goal in other words is pattern classification: the search for some pattern of neural activity which is correlated with, say, enjoying a certain drink, or thinking about a bunch of horizontal lines. To find such a pattern, you measure activity over an area of the brain while people are in two different mental states: you then search for some set of variables which differ between these two states.
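To make the "search for variables that differ" concrete, here's a toy sketch in Python. The data are made up (not from either paper): each trial is a vector of activity measurements, and we score each measurement by how cleanly it separates the two mental states, using a simple t-statistic-like score. Real studies use fMRI voxels or NIRS channels and fancier statistics, but the logic is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 40 trials x 100 activity features per mental state.
# In a real study these would be fMRI voxels or NIRS channel signals.
state_a = rng.normal(0.0, 1.0, size=(40, 100))
state_b = rng.normal(0.0, 1.0, size=(40, 100))
state_b[:, :5] += 1.5  # pretend features 0-4 genuinely differ between states

# Score each feature by how far apart the two states' means are,
# relative to the trial-to-trial variability.
diff = state_a.mean(axis=0) - state_b.mean(axis=0)
pooled_sd = np.sqrt((state_a.var(axis=0) + state_b.var(axis=0)) / 2)
scores = np.abs(diff) / pooled_sd

# The highest-scoring features are the candidates for "mind reading".
top_features = np.argsort(scores)[::-1][:5]
print(sorted(top_features))
```

With enough trials, the five features we secretly made different come out on top; with noisier data or fewer trials, chance differences can sneak in, which is exactly why these methods need careful validation.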
If this succeeds, you can end up with an algorithm - a "pattern classifier" - which can take a set of activity signals and tell you which mental state it is associated with. Or if you want to be a bit more sensationalist: it can read minds! But importantly, just because it works doesn't mean that anyone knows how it works.
Here's a pic from the first paper showing the neural activity associated with preferring two different drinks (actually pictures of drinks on a screen, not real drinks). X's are the activity measured when the person preferred the first of two drinks, and O's are when they preferred the second. The 2D "space" represents activity levels on two different measures of neural activity. A spot in the top left corner means that "Feature 2" activity was high while "Feature 1" activity was low.
You can see that the X's and the O's tend to be in different parts of the space - X's tend to be in the top left and O's in the bottom right. That's not a hard-and-fast rule but it's true most of the time. So if you drew an imaginary line down the middle you could do a pretty good job of distinguishing between the X's and the O's. This is what a pattern classifier does. It searches through a huge set of pictures like this and looks for the ones where you can draw such a line.
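The line-drawing idea can be sketched in a few lines of Python. This is a toy illustration with invented 2D data mimicking the figure, not the classifier either paper actually used: a nearest-centroid rule, whose decision boundary is precisely a straight line midway between the two clusters.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented 2-D "feature space" like the figure: X trials cluster
# top-left (low Feature 1, high Feature 2), O trials bottom-right.
xs = rng.normal([-1.0, 1.0], 0.5, size=(30, 2))   # X trials
os_ = rng.normal([1.0, -1.0], 0.5, size=(30, 2))  # O trials

# Nearest-centroid classifier: the implied decision boundary is the
# straight line exactly halfway between the two class means.
mean_x = xs.mean(axis=0)
mean_o = os_.mean(axis=0)

def classify(point):
    # Whichever class mean the new trial is closer to wins.
    near_x = np.linalg.norm(point - mean_x) < np.linalg.norm(point - mean_o)
    return "X" if near_x else "O"

print(classify(np.array([-0.8, 0.9])))  # a top-left trial: "X"
print(classify(np.array([0.9, -1.1])))  # a bottom-right trial: "O"
```

Real pattern classifiers (support vector machines, logistic regression and so on) find more flexible boundaries, but the principle is the same: learn a dividing line from labelled trials, then use it to label new ones.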
The second paper uses what's in essence a similar method to discriminate between the neural activity in the visual areas of the brain associated with remembering two different pictures. Indeed, the technique is fast becoming very popular with neuroimagers. (One attractive thing about it is that you can point a pattern classifier at some data that you collected for entirely separate reasons - two publications for the price of one...) But this doesn't mean that we can read your mind. We just have computer programs that can do it for us - and only if they are specially (and often time-consumingly) "trained" to discriminate between two very specific states of mind.
Being able to put someone in an MRI scanner and work out what they are thinking straight off the bat is a neuroimager's pipe dream and will remain so for a good while yet.
Sheena Luu, Tom Chau (2009). Decoding subjective preference from single-trial near-infrared spectroscopy signals. Journal of Neural Engineering, 6 (1). DOI: 10.1088/1741-2560/6/1/016003

Stephanie Harrison, Frank Tong (2009). Decoding reveals the contents of visual working memory in early visual areas. Nature.