Much of our memory's work takes place while we're asleep, reports The Economist
(2/2/13). A paper by Robert Stickgold of Harvard University and Matthew
Walker of the University of California, Berkeley, proposes that "sleep
acts as a form of triage -- first choosing what to retain and then
selecting how it will be retained." What they found was that "sleep does
indeed help people discard information they have been told to forget."
They also found that sleep "helps guide memories intended to be
retained down particular paths -- remembering patterns, for example, as
opposed to facts." Two studies found that only babies who had the
opportunity to take a nap could later recall patterns of fake grammar to
which they had been earlier exposed. A separate paper, by Matthew
Walker and Bryce Mander, meanwhile looked "into the matter of
forgetting, by comparing the process in the young and the old."
Not surprisingly, they found that older people (those in their 60s and
70s) did not retain nonsensical word pairs as well as younger people
(those in their teens and 20s), both when tested immediately and after
some sleep (the oldsters did even worse after sleep). Of course, older
people don't need to remember as much as the young do, given that "they
are already familiar with so much of what they experience. So it may be
that their inability to form new memories is not a bug, but a feature."
Brain-scan Technique Sheds Light on Vision
Every time you open your eyes, visual information flows into your
brain, which interprets what you’re seeing. Now, for the first time, MIT
neuroscientists have non-invasively mapped this flow of information in
the human brain with unique accuracy, using a novel brain-scanning
technique.
This technique, which combines two existing technologies, allows researchers to identify precisely both the location and timing of human brain activity. Using this new approach, the MIT researchers scanned individuals’ brains as they looked at different images and were able to pinpoint, to the millisecond, when the brain recognizes and categorizes an object, and where these processes occur.
“This method gives you a visualization of ‘when’ and ‘where’ at the same time. It’s a window into processes happening at the millisecond and millimeter scale,” says Aude Oliva, a principal research scientist in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).
Oliva is the senior author of a paper describing the findings in Nature Neuroscience. Lead author of the paper is CSAIL postdoc Radoslaw Cichy. Dimitrios Pantazis, a research scientist at MIT’s McGovern Institute for Brain Research, is also an author of the paper.
When and where
Until now, scientists have been able to observe the location or timing of human brain activity at high resolution, but not both, because different imaging techniques are not easily combined. The most commonly used type of brain scan, functional magnetic resonance imaging (fMRI), measures changes in blood flow, revealing which parts of the brain are involved in a particular task. However, it works too slowly to keep up with the brain’s millisecond-by-millisecond dynamics.
Another imaging technique, known as magnetoencephalography (MEG), uses an array of hundreds of sensors encircling the head to measure magnetic fields produced by neuronal activity in the brain. These sensors offer a dynamic portrait of brain activity over time, down to the millisecond, but do not tell the precise location of the signals.
To combine the time and location information generated by these two scanners, the researchers used a computational technique called representational similarity analysis, which relies on the fact that two similar objects (such as two human faces) that provoke similar signals in fMRI will also produce similar signals in MEG. This method has been used before to link fMRI with recordings of neuronal electrical activity in monkeys, but the MIT researchers are the first to use it to link fMRI and MEG data from human subjects.
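To make the fusion logic concrete, here is a minimal, hypothetical sketch of representational similarity analysis in Python. The array shapes, the simulated data, and the single region of interest are all assumptions for illustration, not details from the published study.

```python
# Hypothetical sketch of MEG-fMRI fusion via representational similarity
# analysis (RSA). Shapes, the simulated data, and the single region of
# interest are illustrative assumptions, not details from the paper.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

n_images = 92        # stimuli, as in the study
n_voxels = 500       # fMRI voxels in one region of interest (assumed)
n_sensors = 306      # MEG sensors (assumed)
n_timepoints = 1200  # MEG samples around stimulus onset (assumed)

# Simulated stand-ins for real recordings:
fmri_patterns = np.random.randn(n_images, n_voxels)
meg_patterns = np.random.randn(n_images, n_sensors, n_timepoints)

# 1. One representational dissimilarity matrix (RDM) per modality:
#    pairwise distances between the responses to all 92 images.
fmri_rdm = pdist(fmri_patterns, metric="correlation")

# 2. For MEG, build one RDM per timepoint, then ask at which timepoints
#    the MEG geometry matches the fMRI geometry of this region.
fusion = np.empty(n_timepoints)
for t in range(n_timepoints):
    meg_rdm_t = pdist(meg_patterns[:, :, t], metric="correlation")
    fusion[t], _ = spearmanr(fmri_rdm, meg_rdm_t)

# A peak in `fusion` at time t suggests that, t samples after onset,
# the whole-head MEG signal carries the same image-to-image structure
# as this fMRI-defined region.
print("best-matching timepoint:", int(fusion.argmax()))
```

The key design choice is that the RDMs abstract away each scanner's units of measurement: fMRI voxels and MEG sensors are never compared directly, only their image-to-image similarity structure.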
In the study, the researchers scanned 16 human volunteers as they looked at a series of 92 images, including faces, animals, and natural and man-made objects. Each image was shown for half a second. “We wanted to measure how visual information flows through the brain. It’s just pure automatic machinery that starts every time you open your eyes, and it’s incredibly fast,” Cichy says. “This is a very complex process, and we have not yet looked at higher cognitive processes that come later, such as recalling thoughts and memories when you are watching objects.”
Each subject underwent the test multiple times — twice in an fMRI scanner and twice in an MEG scanner — giving the researchers a huge set of data on the timing and location of brain activity. All of the scanning was done at the Athinoula Martinos Imaging Center at the McGovern Institute.
Millisecond by millisecond
By analyzing this data, the researchers produced a timeline of the brain’s object-recognition pathway that is very similar to results previously obtained by recording electrical signals in the visual cortex of monkeys, a technique that is extremely accurate but too invasive to use in humans.
About 50 milliseconds after subjects saw an image, visual information entered a part of the brain called the primary visual cortex, or V1, which recognizes basic elements of a shape, such as whether it is round or elongated. The information then flowed to the inferotemporal cortex, where the brain identified the object as early as 120 milliseconds. Within 160 milliseconds, all objects had been classified into categories such as plant or animal.
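Timings like these are typically read off a decoding curve: at each MEG timepoint a classifier is asked whether the sensor pattern distinguishes the categories, and the latency at which accuracy rises above chance marks when that information appears in the brain. Here is a minimal sketch of such per-timepoint decoding, with simulated data standing in for real recordings (all dimensions, labels, and the threshold are assumptions):

```python
# Minimal sketch of time-resolved decoding: train a classifier at each
# MEG timepoint and note when category information first appears.
# All data, labels, dimensions, and the threshold are simulated
# placeholders, not values from the study.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

n_trials, n_sensors, n_timepoints = 200, 306, 120  # assumed dimensions
rng = np.random.default_rng(0)
meg = rng.standard_normal((n_trials, n_sensors, n_timepoints))
labels = rng.integers(0, 2, size=n_trials)  # e.g., animate (1) vs. inanimate (0)

accuracy = np.empty(n_timepoints)
for t in range(n_timepoints):
    # One classifier per timepoint: can the sensor pattern at time t
    # tell the two categories apart better than chance?
    accuracy[t] = cross_val_score(LinearSVC(), meg[:, :, t], labels, cv=5).mean()

# With real recordings, accuracy sits near chance (~0.5) before stimulus
# onset and rises once the brain encodes the category; the latency of
# that rise is the "when" in the timeline above.
onset = int(np.argmax(accuracy > 0.55))  # crude threshold, illustrative only
print("first timepoint above threshold:", onset)
```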
The MIT team’s strategy “provides a rich new source of evidence on this highly dynamic process,” says Nikolaus Kriegeskorte, a principal investigator in cognition and brain sciences at Cambridge University.
“The combination of MEG and fMRI in humans is no surrogate for invasive animal studies with techniques that simultaneously have high spatial and temporal precision, but Cichy et al. come closer to characterizing the dynamic emergence of representational geometries across stages of processing in humans than any previous work. The approach will be useful for future studies elucidating other perceptual and cognitive processes,” says Kriegeskorte, who was not part of the research team.
The MIT researchers are now using representational similarity analysis to study the accuracy of computer models of vision by comparing brain scan data with the models’ predictions of how vision works.
Using this approach, scientists should also be able to study how the human brain analyzes other types of information such as motor, verbal or sensory signals, the researchers say. It could also shed light on processes that underlie conditions such as memory disorders or dyslexia, and could benefit patients suffering from paralysis or neurodegenerative diseases.
“This is the first time that MEG and fMRI have been connected in this way, giving us a unique perspective,” Pantazis says. “We now have the tools to precisely map brain function both in space and time, opening up tremendous possibilities to study the human brain.”
Writers' and Athletes' Brains
The more we know about the workings of the brain, the more surprising some of the conclusions are. Here is something from the New York Times:
Neuroscientists are finding surprising similarities in the brain activity of writers and athletes, reports Carl Zimmer in The New York Times (6/28/14). The scientists, "led by Martin Lotze of the University of Greifswald in Germany," are using "scanners to track the brain activity of both experienced and novice writers." Dr. Lotze started by having 28 novice writers "copy some text" to provide a baseline, and then gave them a few lines from a story to finish on their own — giving them time to brainstorm a bit first, and then write.
He and his team found that the
vision-processing part of the brain lit up during brainstorming,
perhaps because they were "seeing the scenes they wanted to write." This
changed when the trials turned to more experienced writers, whose
"brains appeared to work differently even before they set pen to paper."
Their brains activated "regions involving speech," rather than vision.
One theory is "that the novices are watching their stories like a film
inside their heads, while the writers are narrating it with an inner
voice."
Also unlike the novices, the experienced writers showed activity in "a region called the caudate nucleus,"
which "plays an essential role in the skill that comes with practice,
including activities like … playing basketball," or a musical
instrument. However, Harvard psychologist Steven Pinker
is skeptical that the experiments provide a clear picture of
creativity, pointing out, among other things, that the "very nature of
creativity can make it different from one person to the next."
"Creativity is a perversely difficult thing to study," he says.
SyncSense Concept Proven Through Brain Research
The basic concept of SyncSense is that since human brain activity is associated with audiovisual perception and attention, refining such stimuli in a short-form video produces more neural synchrony, and therefore higher perception, engagement, and attention.
This is all borne out in a 2007 study from NeuroImage:
"Coherent perception of objects in our environment often requires perceptual integration of auditory and visual information. Recent behavioral data suggest that audiovisual integration depends on attention. The current study investigated the neural basis of audiovisual integration using 3-Tesla functional magnetic resonance imaging (fMRI) in 12 healthy volunteers during attention to auditory or visual features, or audiovisual feature combinations of abstract stimuli (simultaneous harmonic sounds and colored circles).
Audiovisual attention was found to modulate activity in the same frontal, temporal, parietal and occipital cortical regions as auditory and visual attention. In addition, attention to audiovisual feature combinations produced stronger activity in the superior temporal cortices than attention to only auditory or visual features. These modality-specific areas might be involved in attention-dependent perceptual binding of synchronous auditory and visual events into coherent audiovisual objects.
Furthermore, the modality-specific temporal auditory and occipital visual cortical areas showed attention-related modulations during both auditory and visual attention tasks. This result supports the proposal that attention to stimuli in one modality can spread to encompass synchronously presented stimuli in another modality."
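One common way to quantify this kind of neural synchrony in the literature is inter-subject correlation: how similarly different viewers' brain signals fluctuate while they watch the same material. The sketch below is purely illustrative of that general idea; it is not a description of SyncSense's own measurement pipeline.

```python
# Hypothetical sketch of one simple "neural synchrony" measure:
# inter-subject correlation (ISC) of a brain signal while viewers watch
# the same video. Illustration of the general idea only, not
# SyncSense's actual methodology.
import numpy as np

def intersubject_correlation(signals: np.ndarray) -> float:
    """Mean pairwise Pearson correlation across subjects.

    signals: array of shape (n_subjects, n_timepoints), one row per
    viewer, e.g., an EEG component recorded during the same video.
    """
    n = signals.shape[0]
    corr = np.corrcoef(signals)                # n x n correlation matrix
    off_diag = corr[np.triu_indices(n, k=1)]   # unique subject pairs
    return float(off_diag.mean())

# Toy usage: a shared stimulus-driven component plus per-viewer noise.
rng = np.random.default_rng(1)
shared = rng.standard_normal(1000)                        # stimulus-locked signal
viewers = shared + 0.8 * rng.standard_normal((10, 1000))  # 10 noisy viewers
print(f"ISC = {intersubject_correlation(viewers):.2f}")   # higher = more synchrony
```

The shared component stands in for stimulus-driven activity: the more tightly a video drives every viewer's brain in lockstep, the higher the ISC.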
Brainwashed - The Value of Brain Scans in Predicting Behavior
The book Brainwashed: The Seductive Appeal of Mindless Neuroscience has a provocative title. It argues that brain scans – neuromarketing – aren't any better at predicting consumer behavior than "the old standbys of surveys and focus groups," reports Matthew Hutson in a Wall Street Journal review. But we at SyncSense maintain that when predictions miss, it is not the neuroscience methodology that is at fault; it is the way we measure the impact of neuroscience that is in error.
According to the Wall Street Journal, the book says brain scans can also blunder: "In 2006, a neuroscientist declared a racy GoDaddy.com Super Bowl ad a flop after it failed to activate viewers’ pleasure centers." Funny thing: the ad increased "traffic to the site 16-fold." The problem with neuromarketing, the authors say, is that "most neural real-estate is zoned for mixed-use development."
In other words, just because a
particular part of your brain lights up doesn’t necessarily mean you’re
experiencing a particular emotional response. For example, "disgust"
might illuminate "your insula
– a part of the cerebral cortex involved in attention, emotion and
other functions," but that doesn’t mean "that whenever the insula lights
up you’re disgusted." It could mean something else entirely. It’s more
complicated than that. Of course, this didn’t stop one neuromarketer
from using brain-scan data "to claim that Apple users literally adore
their devices."
More serious is the application of brain scans – or functional magnetic resonance imaging
(fMRI) as it is known – in criminal cases. "Predictably, defense
attorneys try to use brain scans to prove that their clients lack …
impulse control and therefore can’t be held legally responsible." In
medicine, the authors say, the use of fMRIs has conflated addiction with
brain disease. Even more profoundly, reducing the brain to a
"biological machine" undercuts the concept of "free will" and "personal
accountability." In short: "Neuroimaging isn’t the hard science we like
to think it is."
Many companies use biometrics – heart rate, eye tracking, sweating, and brain imaging – to "predict" what the respondent is actually feeling and thinking. This can often be misinterpreted. At SyncSense we measure effectiveness, attention, and call to action using hard data such as Nielsen ratings and direct-response calls. In this way we KNOW that our methodology has not only validity but also impact.
Dog Brains vs Cat Brains
An interesting article from Cool News discusses the differences between the brains of dogs, cats, and even rats. Other senses, such as smell, may also play a role when it comes to locating food sources, but the differences in how these brains enable retention, even in animals, corroborate our brain-synchrony findings.
The genius of dogs is in their relationship with humans, write Brian Hare and Vanessa Woods, authors of The Genius of Dogs, in The Wall Street Journal. While other language-trained animals can "learn to respond to dozens of spoken signals associated with different objects," dogs are the only animals that have demonstrated an ability to learn the names of objects the same way humans do. A 2004 experiment by Juliane Kaminski of the University of Portsmouth in Britain involved a dog named Rico, who could infer the name of a new toy simply because the name was different from that of the toys he already knew, just like a human.
Dogs have only "half as many neurons in their cerebral cortex as cats," but apparently have better memories. "Several years ago, Sylvain Fiset of Canada's University of Moncton and colleagues reported experiments in which a dog or cat watched while a researcher hid a reward in one of four boxes. After a delay, they were allowed to search for a treat. Cats started guessing after only one minute. But even after four minutes, dogs hadn't forgotten where they saw the food." Okay, so maybe the cats are just too smart to be bothered with "playing our silly games."
Dogs are not as bright when it comes to "navigational memory." Rats -- not cats this time -- performed better at finding food through a maze. In a contest against wolves, dogs were not as adept at figuring out how to get at food "placed on the opposite side of a fence, as shown by a study by Harry and Martha Frank of the University of Michigan." However, a later study out of Hungary found that dogs solved the problem immediately after observing "a human rounding the fence first," suggesting that "the secret of the genius of dogs" is when they "join forces with us."
Our Brains Can't Chew Popcorn and Listen At The Same Time
One of the conclusions we reached in our brain-synchrony work is that when a single stimulus activates different parts of the brain at the same time (sound, text, patterns, colors, and movement, for example), messages are retained better. So when scientists found that chewing while listening disrupts comprehension, we were not surprised.
According to Bloomberg Businessweek (10/28/13), as reported by Cool News, researchers at the University of Cologne conducted a test in which half the subjects were given a sugar cube during the pre-movie commercials while the other half were given popcorn. A week later, all of the subjects were exposed to images of the advertised products, and the "sugar-cube moviegoers had a clear preference" for them, "while the popcorn eaters didn’t." In other words, the ads hadn’t stuck with them.
The result had nothing to do
with popcorn, per se. The reason is that when people read or hear
something, "the brain simulates the corresponding muscle movement of the
throat and mouth … Chewing, however, disrupts the process by
monopolizing the speech muscles (unlike eating a sugar cube, which
dissolves on its own), effectively drowning out any subvocalization and,
with it, the process of familiarization."
An earlier study produced a
similar result when subjects were chewing gum (or not). The research
undermines conventional wisdom that "mere exposure" to an image or
message "predisposes people to liking it." It also "has ramifications"
beyond advertising, as "there are plenty of settings in which people are
trying to absorb new information while eating – the working breakfast,
the client dinner, the lunch consumed resentfully at one’s desk while
trying to catch up on e-mail. Those might all be occasions in which
we’re not taking in information as easily as calories."