
Journal Club: Giving the brain something to do highlights the circuitry of smarts

Data based on people carrying out a mental task in the MRI amplified the differences between people’s brains, and better predicted their results on intelligence tests than did resting-state scans. Image credit: Shutterstock/alexialex

Scientists conducting neuroimaging studies hope that their MRI scans will offer up patterns that predict traits related to intelligence, personality, or disease, or even provide insights into a patient’s clinical symptoms or chances of responding to a drug. But according to a recent study, MRI researchers may, in some cases, be using a subpar methodological approach.

Neuroscientist Abigail Greene, at Yale University in New Haven, Connecticut, reports in Nature Communications that in the case of predicting intelligence, the best way to map the relevant brain circuitry is to give the brain a task to complete. Such a task-based approach contrasts with what neuroscientists often ask their subjects to do—which is essentially nothing at all. MRI researchers frequently collect “resting-state” images (see Core Concept: Resting-state connectivity). Greene, an MD/PhD student in the laboratory of Todd Constable, suspected that imaging the brain while it worked on a task would better predict intelligence. That’s exactly what she found—data based on people carrying out a mental task in the MRI scanner amplified the differences between people’s brains, and better predicted their results on intelligence tests.

Greene used brain images from the Human Connectome Project. Project researchers had collected images of people both at rest and while doing various tasks. For example, a gambling experiment asked participants to guess whether a hidden card showed a number less than or greater than five. They received $1 for guessing right but lost 50 cents for guessing wrong.

In addition, outside the scanner, the Human Connectome Project subjects completed a brief intelligence test based on Raven’s matrices—geometric diagrams, arranged three by three, with one diagram missing. Test-takers must deduce the missing shape. The test is meant to measure fluid intelligence, which involves abstract thinking, pattern identification, and problem-solving.

The Constable lab had previously developed a computer program that identifies likely connections between brain areas, then uses them to predict behaviors. Greene applied this approach to correlate brain activity with test scores from 515 Human Connectome Project subjects. The program divides the brain into 268 standard regions and asks how often pairs of regions tend to be active at the same time, an indication that they might work together. Machine learning then discerns which pairings, as a group, best predict intelligence test scores.

Greene found that if she used resting-state images, the brain scans could explain less than 6% of the variation in intelligence scores. But if she switched to scans from specific tasks—the gambling one worked best—she could get that number above 12%.

The researchers repeated the analysis in a 571-person dataset from the Philadelphia Neurodevelopmental Cohort. That one didn’t use a gambling task, but did include a working memory task; participants had to tell whether a picture was the same as, or different from, one they’d previously been shown. Analysis of those scans explained more than 20% of the variability in an intelligence test based on verbal reasoning. “With task-based models, we think we’re capturing more of the fluid intelligence-related circuit,” says Greene. “Using tasks may be a better way to reveal those kinds of differences.”

She compares the task-based assessments to a cardiac stress test. In a person at rest, signs of heart disease might not be obvious, but they can become clear when that person is on a treadmill. Similarly, certain tasks seem to tax the intelligence circuit, making it light up more.

The results are not surprising, but they provide the first direct evidence that tasks work better than resting state for predicting a trait, says Julien Dubois, a postdoc at Cedars-Sinai Medical Center in Los Angeles, who has also modeled characteristics such as intelligence and personality based on MRI data. “It might be useful to step away, a little bit, from resting-state data and start looking at more active tasks,” says Dubois.

He suggests that by using a more in-depth intelligence test than Raven’s matrices, the researchers might find their models predict intelligence even better. Greene is also now building models based on the brain at work on several different tasks, instead of just one, in the hopes of improving her algorithms.

The results, though, do not sound the death knell for resting-state experiments. Those studies are simple to perform, and make it easy to share data between imaging centers, a difficult prospect if centers ask subjects to perform different tasks. Plus, the variety of things people might think about while merely resting allows scientists to study a wide range of mental traits; task-specific data might predict only certain traits. Resting-state analysis also works for populations that might not be able to take on specific tasks, such as children.
