Every time you open your eyes, visual information flows into your
brain, which interprets what you’re seeing. Now, for the first time, MIT
neuroscientists have non-invasively mapped this flow of information in
the human brain with unique accuracy, using a novel brain-scanning
technique.
This technique, which combines two existing technologies, allows
researchers to identify precisely both the location and timing of human
brain activity. Using this new approach, the MIT researchers scanned
individuals’ brains as they looked at different images and were able to
pinpoint, to the millisecond, when the brain recognizes and categorizes
an object, and where these processes occur.
“This method gives you a visualization of ‘when’ and ‘where’ at the
same time. It’s a window into processes happening at the millisecond and
millimeter scale,” says Aude Oliva, a principal research scientist in
MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).
Oliva is the senior author of a paper describing the findings in Nature Neuroscience.
Lead author of the paper is CSAIL postdoc Radoslaw Cichy. Dimitrios
Pantazis, a research scientist at MIT’s McGovern Institute for Brain
Research, is also an author of the paper.
When and where
Until now, scientists have been able to observe the location or
timing of human brain activity at high resolution, but not both, because
different imaging techniques are not easily combined. The most commonly
used type of brain scan, functional magnetic resonance imaging (fMRI),
measures changes in blood flow, revealing which parts of the brain are
involved in a particular task. However, it works too slowly to keep up
with the brain’s millisecond-by-millisecond dynamics.
Another imaging technique, known as magnetoencephalography (MEG),
uses an array of hundreds of sensors encircling the head to measure
magnetic fields produced by neuronal activity in the brain. These
sensors offer a dynamic portrait of brain activity over time, down to
the millisecond, but do not tell the precise location of the signals.
To combine the time and location information generated by these two
scanners, the researchers used a computational technique called
representational similarity analysis, which relies on the fact that two
similar objects (such as two human faces) that provoke similar signals
in fMRI will also produce similar signals in MEG. This method has been
used before to link fMRI with recordings of neuronal electrical activity
in monkeys, but the MIT researchers are the first to use it to link
fMRI and MEG data from human subjects.
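For readers who want a concrete picture of how such a fusion works, here is a minimal Python sketch of the core idea of representational similarity analysis: build a dissimilarity matrix over the 92 images from fMRI response patterns, build one from the MEG sensor patterns at each time point, and correlate the two. The array shapes, the random placeholder data, and the choice of correlation distance and Spearman correlation are illustrative assumptions, not the authors' exact pipeline.

    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    def rdm(patterns):
        # Condensed representational dissimilarity matrix:
        # one correlation distance per pair of images.
        return pdist(patterns, metric="correlation")

    n_images = 92                                       # stimuli shown in the study
    rng = np.random.default_rng(0)                      # placeholder data for the sketch
    fmri_roi = rng.standard_normal((n_images, 500))     # images x voxels, e.g. one region such as V1
    meg = rng.standard_normal((n_images, 306, 1200))    # images x MEG sensors x time samples

    fmri_rdm = rdm(fmri_roi)

    # Correlate the region's fMRI RDM with the MEG RDM at each time point;
    # the peak of this time course marks when that region's representational
    # geometry emerges in the MEG signal.
    fusion = np.array([
        spearmanr(fmri_rdm, rdm(meg[:, :, t]))[0]
        for t in range(meg.shape[2])
    ])
    print("similarity peaks at time sample", int(fusion.argmax()))

In a full analysis, a separate fMRI dissimilarity matrix for each brain region would yield one such time course per region, giving both the "where" and the "when."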
In the study, the researchers scanned 16 human volunteers as they
looked at a series of 92 images, including faces, animals, and natural
and man-made objects. Each image was shown for half a second.
“We wanted to measure how visual information flows through the brain.
It’s just pure automatic machinery that starts every time you open your
eyes, and it’s incredibly fast,” Cichy says. “This is a very complex
process, and we have not yet looked at higher cognitive processes that
come later, such as recalling thoughts and memories when you are
watching objects.”
Each subject underwent the test multiple times — twice in an fMRI
scanner and twice in an MEG scanner — giving the researchers a huge set
of data on the timing and location of brain activity. All of the
scanning was done at the Athinoula Martinos Imaging Center at the
McGovern Institute.
Millisecond by millisecond
By analyzing this data, the researchers produced a timeline of the
brain’s object-recognition pathway that is very similar to results
previously obtained by recording electrical signals in the visual cortex
of monkeys, a technique that is extremely accurate but too invasive to
use in humans.
About 50 milliseconds after subjects saw an image, visual information
entered a part of the brain called the primary visual cortex, or V1,
which recognizes basic elements of a shape, such as whether it is round
or elongated. The information then flowed to the inferotemporal cortex,
where the brain identified the object as early as 120 milliseconds.
Within 160 milliseconds, all objects had been classified into categories
such as plant or animal.
The MIT team’s strategy “provides a rich new source of evidence on
this highly dynamic process,” says Nikolaus Kriegeskorte, a principal
investigator in cognition and brain sciences at Cambridge University.
“The combination of MEG and fMRI in humans is no surrogate for
invasive animal studies with techniques that simultaneously have high
spatial and temporal precision, but Cichy et al. come closer to
characterizing the dynamic emergence of representational geometries
across stages of processing in humans than any previous work. The
approach will be useful for future studies elucidating other perceptual
and cognitive processes,” says Kriegeskorte, who was not part of the
research team.
The MIT researchers are now using representational similarity
analysis to study the accuracy of computer models of vision by comparing
brain scan data with the models’ predictions of how vision works.
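A hypothetical sketch of that kind of comparison, with placeholder data standing in for real measurements: a vision model's feature vectors for the 92 images are turned into a dissimilarity matrix and correlated with a brain-derived one, so that a higher correlation means the model's representational geometry better matches the brain's. The feature dimensions and distance measures below are assumptions for illustration only.

    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    n_images = 92
    rng = np.random.default_rng(0)
    # Stand-ins: in practice these would be a vision model's layer activations
    # for each image and a measured fMRI (or MEG) pattern for each image.
    model_features = rng.standard_normal((n_images, 4096))
    brain_patterns = rng.standard_normal((n_images, 500))

    model_rdm = pdist(model_features, metric="correlation")
    brain_rdm = pdist(brain_patterns, metric="correlation")

    # Rank correlation between the two representational geometries: higher
    # values mean the model better predicts the brain's image-by-image
    # similarity structure.
    score = spearmanr(model_rdm, brain_rdm)[0]
    print(f"model-brain RDM correlation: {score:.3f}")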
Using this approach, scientists should also be able to study how the
human brain analyzes other types of information such as motor, verbal or
sensory signals, the researchers say. It could also shed light on
processes that underlie conditions such as memory disorders or dyslexia,
and could benefit patients suffering from paralysis or
neurodegenerative diseases.
“This is the first time that MEG and fMRI have been connected in this
way, giving us a unique perspective,” Pantazis says. “We now have the
tools to precisely map brain function both in space and time, opening up
tremendous possibilities to study the human brain.”
Writers’ and Athletes’ Brains
The more we know about the workings of the brain, the more surprising some of the conclusions are. Here is something from the New York Times:
Neuroscientists are finding surprising similarities in the brain activity of writers and athletes, reports Carl Zimmer in The New York Times (6/28/14). The scientists, "led by Martin Lotze of the University of Greifswald in Germany," are using "scanners to track the brain activity of both experienced and novice writers." Dr. Lotze started by having 28 novice writers "copy some text" to provide a baseline, and then gave them a few lines from a story to finish on their own — giving them time to brainstorm a bit first, and then write.
He and his team found that the
vision-processing part of the brain lit up during brainstorming,
perhaps because the novices were "seeing the scenes they wanted to write." This
changed when the trials turned to more experienced writers, whose
"brains appeared to work differently even before they set pen to paper."
Their brains activated "regions involving speech," rather than vision.
One theory is "that the novices are watching their stories like a film
inside their heads, while the writers are narrating it with an inner
voice."
Also unlike the novices, the experienced writers showed activity in "a region called the caudate nucleus,"
which "plays an essential role in the skill that comes with practice,
including activities like … playing basketball," or a musical
instrument. However, Harvard psychologist Steven Pinker
is skeptical that the experiments provide a clear picture of
creativity, pointing out, among other things, that the "very nature of
creativity can make it different from one person to the next."
"Creativity is a perversely difficult thing to study," he says.