Our slow, uncertain brains are still better than computers — here’s why

By Parashkev Nachev, UCL

Automated financial trading machines can make complex decisions in a thousandth of a second. A human being making a choice – however simple – can never be faster than about one-fifth of a second. Our reaction times are not only slow but also remarkably variable, ranging over hundreds of milliseconds.

Is this because our brains are poorly designed, prone to random uncertainty – “noise”, in engineering jargon? Measured in the laboratory, even the neurons of a fly are both fast and precise in their responses to external events, down to a few milliseconds. The sloppiness of our reaction times looks less like an accident than a built-in feature: the brain deliberately procrastinates, even when we ask it to do otherwise.

Massively parallel wetware

Why should this be? Unlike computers, our brains are massively parallel in their organisation, concurrently running many millions of separate processes. They must do this because they are designed not to perform a specific set of actions but to select from the vast repertoire of alternatives that a fundamentally unpredictable environment offers us. From an evolutionary perspective, it is best to trust nothing and no one, least of all oneself. So before each action the brain must flip through a vast Rolodex of possibilities. It is amazing that it can do this at all, let alone in a fraction of a second.

But why the variability? Nothing stands hierarchically above the brain, so decisions have to arise through peer-to-peer interactions between different groups of neurons. Since there can be only one winner at any one time – our movements would otherwise be chaotic – the mode of resolution is less negotiation than competition: a winner-takes-all race. To ensure the competition is fair, the race must run for a minimum length of time – hence the delay – and the time it takes will depend on the nature and quality of the field of competitors, hence the variability.

Fanciful though this may sound, the distributions of human reaction times, across different tasks, limbs, and people, have been repeatedly shown to fit the “race” model remarkably well. And one part of the brain – the medial frontal cortex – seems to track reaction time tightly, as an area crucial to procrastination ought to. Disrupting the medial frontal cortex should therefore disrupt the race, bringing it to an early close. Rather than slowing us down, disrupting the brain should here speed us up, accelerating behaviour but at the cost of less considered actions.
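
The race is straightforward to simulate. The sketch below is not Nachev's model – the Python framing and every parameter are illustrative assumptions – but it shows how a first-past-the-post race among noisy evidence accumulators yields decisions that are both delayed (the winner needs time to reach threshold) and variable (the winning time depends on the randomly drawn field of competitors):

```python
# A minimal sketch of a winner-takes-all race (illustrative, not the
# author's model): several candidate actions accumulate noisy evidence
# toward a common threshold; reaction time is the first crossing.
import random
import statistics

def race_trial(n_competitors=8, threshold=100.0, dt=0.001,
               mean_rate=220.0, rate_sd=80.0, noise_sd=2.0):
    """Return the decision time, in seconds, of one race."""
    # Each competitor draws its own drift rate: the "quality of the field".
    rates = [random.gauss(mean_rate, rate_sd) for _ in range(n_competitors)]
    evidence = [0.0] * n_competitors
    t = 0.0
    while True:
        t += dt
        for i, rate in enumerate(rates):
            evidence[i] += rate * dt + random.gauss(0.0, noise_sd)
            if evidence[i] >= threshold:
                return t  # first past the post wins

rts = [race_trial() for _ in range(1000)]
print(f"mean RT: {statistics.mean(rts) * 1000:.0f} ms, "
      f"spread (sd): {statistics.stdev(rts) * 1000:.0f} ms")
```

Raising mean_rate, which stands in for a stronger field of competitors, both shortens and tightens the simulated reaction times, in line with the claim that timing depends on the nature and quality of the competition.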

[Read more…]

The more you know about a topic, the more likely you are to have false memories about it

By Ciara Greene, University College Dublin

Human memory does not operate like a videotape that can be rewound and rewatched, with every viewing revealing the same events in the same order. In fact, memories are reconstructed every time we recall them. Aspects of a memory can be altered, added, or deleted altogether with each new recollection. This can lead to the phenomenon of false memory, in which people have clear memories of an event that they never experienced.

False memory is surprisingly common, but a number of factors can increase its frequency. Recent research in my lab shows that being very interested in a topic can make you twice as likely to experience a false memory about that topic.

Previous research has indicated that experts in a few clearly defined fields, such as investments and American football, might be more likely to experience false memory in relation to their areas of expertise. Opinion as to the cause of this effect is divided. Some researchers have suggested that greater knowledge makes a person more likely to incorrectly recognise new information that is similar to previously experienced information. Another interpretation suggests that experts feel that they should know everything about their topic of expertise. According to this account, experts’ sense of accountability for their judgements causes them to “fill in the gaps” in their knowledge with plausible, but false, information.

To further investigate this, we asked 489 participants to rank seven topics from most to least interesting. The topics we used were football, politics, business, technology, film, science and pop music. The participants were then asked if they remembered the events described in four news items about the topic they selected as the most interesting, and four items about the topic selected as least interesting. In each case, three of the events depicted had really happened and one was fictional.

The results showed that being interested in a topic increased the frequency of accurate memories relating to that topic. Critically, it also increased the number of false memories – 25% of people experienced a false memory in relation to an interesting topic, compared with 10% in relation to a less interesting topic. Importantly, our participants were not asked to identify themselves as experts, and did not get to choose which topics they would answer questions about. This means that the increase in false memories is unlikely to be due to a sense of accountability for judgements about a specialist topic.
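
As a sanity check on those numbers, a simple two-proportion test shows the 25% versus 10% gap is far larger than chance would produce. The sketch assumes one observation per participant per condition and derives the counts from the reported percentages; the paper's own statistical analysis may well have used a different (for example, paired) model:

```python
# Rough significance check of 25% vs 10% false-memory rates (n = 489).
# Assumes independent observations per condition; the study's actual
# analysis may differ (e.g., a paired or mixed model).
from math import sqrt, erfc

n1 = n2 = 489
x1 = round(0.25 * n1)  # false memories for the most interesting topic
x2 = round(0.10 * n2)  # false memories for the least interesting topic

p1, p2 = x1 / n1, x2 / n2
p_pool = (x1 + x2) / (n1 + n2)                      # pooled proportion
se = sqrt(p_pool * (1 - p_pool) * (1/n1 + 1/n2))    # pooled standard error
z = (p1 - p2) / se
p_value = erfc(abs(z) / sqrt(2))                    # two-sided p-value

print(f"z = {z:.2f}, two-sided p = {p_value:.1g}")  # z ≈ 6.2, p ≈ 8e-10
```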

[Read more…]

Why neuroscientists need to study the crow

Grigori Guitchounts writes: The animals of neuroscience research are an eclectic bunch, and for good reason. Different model organisms — like zebrafish larvae, C. elegans worms, fruit flies, and mice — give researchers the opportunity to answer specific questions. The first two, for example, have transparent bodies, which let scientists easily peer into their brains; the last two have eminently tweakable genomes, which allow scientists to isolate the effects of specific genes. For cognition studies, researchers have relied largely on primates and, more recently, rats, which I use in my own work. But the time is ripe for this exclusive club of research animals to accept a new, avian member: the corvid family.

Corvids, such as crows, ravens, and magpies, are among the most intelligent birds on the planet — the list of their cognitive achievements goes on and on — yet neuroscientists have not scrutinized their brains for one simple reason: They don’t have a neocortex. The obsession with the neocortex in neuroscience research is not unwarranted; what’s unwarranted is the notion that the neocortex alone is responsible for sophisticated cognition. Because birds lack this structure—the most recently evolved portion of the mammalian brain, crucial to human intelligence—neuroscientists have largely and unfortunately neglected the neural basis of corvid intelligence.

This neglect costs them an opportunity for an important insight. Having diverged from mammals more than 300 million years ago, avian brains have had plenty of time to develop along remarkably different lines (instead of a cortex with its six layers of neatly arranged neurons, birds evolved groups of neurons densely packed into clusters called nuclei). So, any computational similarities between corvid and primate brains — which are so different neurally — would indicate the development of common solutions to shared evolutionary problems, like creating and storing memories, or learning from experience. If neuroscientists want to know how brains produce intelligence, looking solely at the neocortex won’t cut it; they must study how corvid brains achieve the same clever behaviors that we see in ourselves and other mammals. [Continue reading…]

Human brain mapped in unprecedented detail

The Human Connectome Project

Nature reports: Think of a spinning globe and the patchwork of countries it depicts: such maps help us to understand where we are, and that nations differ from one another. Now, neuroscientists have charted an equivalent map of the brain’s outermost layer — the cerebral cortex — subdividing each hemisphere’s mountain- and valley-like folds into 180 separate parcels.

Ninety-seven of these areas have never previously been described, despite showing clear differences in structure, function and connectivity from their neighbours. The new brain map is published today in Nature.

Each discrete area on the map contains cells with similar structure, function and connectivity. But these areas differ from each other, just as different countries have well-defined borders and unique cultures, says David Van Essen, a neuroscientist at Washington University Medical School in St Louis, Missouri, who supervised the study.

Neuroscientists have long sought to divide the brain into smaller pieces to better appreciate how it works as a whole. One of the best-known brain maps chops the cerebral cortex into 52 areas based on the arrangement of cells in the tissue. More recently, maps have been constructed using magnetic resonance imaging (MRI) techniques — such as functional MRI, which measures the flow of blood in response to different mental tasks.

Yet until now, most such maps have been based on a single type of measurement. That can provide an incomplete or even misleading view of the brain’s inner workings, says Thomas Yeo, a computational neuroscientist at the National University of Singapore. The new map is based on multiple MRI measurements, which Yeo says “greatly increases confidence that they are producing the best in vivo estimates of cortical areas”. [Continue reading…]
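
The value of combining measurements is easy to demonstrate in miniature. The toy sketch below is emphatically not the actual pipeline behind the new map (which applied a trained classifier to real multimodal MRI); it simply clusters synthetic per-vertex feature vectors that stack three made-up "modalities", and shows that joint clustering recovers hidden areas far better than any single modality can:

```python
# Toy multimodal parcellation: cluster per-vertex feature vectors that
# stack three synthetic "modalities" (stand-ins for myelin, function,
# and connectivity). Illustrative only; not the actual HCP pipeline.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_vertices, n_areas = 5000, 6

# Hidden ground truth: each area has its own profile across 3 modalities.
true_area = rng.integers(0, n_areas, size=n_vertices)
profiles = rng.normal(size=(n_areas, 3))
features = profiles[true_area] + rng.normal(scale=0.4, size=(n_vertices, 3))

# z-score each modality so none dominates, then cluster jointly.
X = StandardScaler().fit_transform(features)
joint = KMeans(n_clusters=n_areas, n_init=10, random_state=0).fit_predict(X)

# The same clustering on one modality alone separates areas far worse.
single = KMeans(n_clusters=n_areas, n_init=10, random_state=0).fit_predict(X[:, :1])

print("agreement with truth, 3 modalities:", round(adjusted_rand_score(true_area, joint), 2))
print("agreement with truth, 1 modality:  ", round(adjusted_rand_score(true_area, single), 2))
```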

Where is language in the brain?

By Gaia Vince, Mosaic

If you read a sentence (such as this one) about kicking a ball, neurons related to the motor function of your leg and foot will be activated in your brain. Similarly, if you talk about cooking garlic, neurons associated with smelling will fire up. Since it is almost impossible to do or think about anything without using language – whether this entails an internal talk-through by your inner voice or following a set of written instructions – language pervades our brains and our lives like no other skill.

For more than a century, it’s been established that our capacity to use language is usually located in the left hemisphere of the brain, specifically in two areas: Broca’s area (associated with speech production and articulation) and Wernicke’s area (associated with comprehension). Damage to either of these, caused by a stroke or other injury, can lead to language and speech problems or aphasia, a loss of language.

In the past decade, however, neurologists have discovered it’s not that simple: language is not restricted to two areas of the brain or even just to one side, and the brain itself can grow when we learn new languages.

[Read more…]

Whatever you think, you don’t necessarily know your own mind

Keith Frankish writes: Do you think racial stereotypes are false? Are you sure? I’m not asking if you’re sure whether or not the stereotypes are false, but if you’re sure whether or not you think that they are. That might seem like a strange question. We all know what we think, don’t we?

Most philosophers of mind would agree, holding that we have privileged access to our own thoughts, which is largely immune from error. Some argue that we have a faculty of ‘inner sense’, which monitors the mind just as the outer senses monitor the world. There have been exceptions, however. The mid-20th-century behaviourist philosopher Gilbert Ryle held that we learn about our own minds, not by inner sense, but by observing our own behaviour, and that friends might know our minds better than we do. (Hence the joke: two behaviourists have just had sex and one turns to the other and says: ‘That was great for you, darling. How was it for me?’) And the contemporary philosopher Peter Carruthers proposes a similar view (though for different reasons), arguing that our beliefs about our own thoughts and decisions are the product of self-interpretation and are often mistaken.

Evidence for this comes from experimental work in social psychology. It is well established that people sometimes think they have beliefs that they don’t really have. For example, if offered a choice between several identical items, people tend to choose the one on the right. But when asked why they chose it, they confabulate a reason, saying they thought the item was a nicer colour or better quality. Similarly, if a person performs an action in response to an earlier (and now forgotten) hypnotic suggestion, they will confabulate a reason for performing it. What seems to be happening is that the subjects engage in unconscious self-interpretation. They don’t know the real explanation of their action (a bias towards the right, hypnotic suggestion), so they infer some plausible reason and ascribe it to themselves. They are not aware that they are interpreting, however, and make their reports as if they were directly aware of their reasons. [Continue reading…]

There’s no such thing as free will

Stephen Cave writes: For centuries, philosophers and theologians have almost unanimously held that civilization as we know it depends on a widespread belief in free will — and that losing this belief could be calamitous. Our codes of ethics, for example, assume that we can freely choose between right and wrong. In the Christian tradition, this is known as “moral liberty” — the capacity to discern and pursue the good, instead of merely being compelled by appetites and desires. The great Enlightenment philosopher Immanuel Kant reaffirmed this link between freedom and goodness. If we are not free to choose, he argued, then it would make no sense to say we ought to choose the path of righteousness.

Today, the assumption of free will runs through every aspect of American politics, from welfare provision to criminal law. It permeates the popular culture and underpins the American dream — the belief that anyone can make something of themselves no matter what their start in life. As Barack Obama wrote in The Audacity of Hope, American “values are rooted in a basic optimism about life and a faith in free will.”

So what happens if this faith erodes?

The sciences have grown steadily bolder in their claim that all human behavior can be explained through the clockwork laws of cause and effect. This shift in perception is the continuation of an intellectual revolution that began about 150 years ago, when Charles Darwin first published On the Origin of Species. Shortly after Darwin put forth his theory of evolution, his cousin Sir Francis Galton began to draw out the implications: If we have evolved, then mental faculties like intelligence must be hereditary. But we use those faculties — which some people have to a greater degree than others — to make decisions. So our ability to choose our fate is not free, but depends on our biological inheritance.

Galton launched a debate that raged throughout the 20th century over nature versus nurture. Are our actions the unfolding effect of our genetics? Or the outcome of what has been imprinted on us by the environment? Impressive evidence accumulated for the importance of each factor. Whether scientists supported one, the other, or a mix of both, they increasingly assumed that our deeds must be determined by something. [Continue reading…]

Scientists map brain’s thesaurus to help decode inner thoughts

UC Berkeley reports: What if a map of the brain could help us decode people’s inner thoughts?

UC Berkeley scientists have taken a step in that direction by building a “semantic atlas” that shows in vivid colors and multiple dimensions how the human brain organizes language. The atlas identifies brain areas that respond to words that have similar meanings.

The findings, published in the journal Nature, are based on a brain imaging study that recorded neural activity while study volunteers listened to stories from the “Moth Radio Hour.” They show that at least one-third of the brain’s cerebral cortex, including areas dedicated to high-level cognition, is involved in language processing.

Notably, the study found that different people share similar language maps: “The similarity in semantic topography across different subjects is really surprising,” said study lead author Alex Huth, a postdoctoral researcher in neuroscience at UC Berkeley. Click here for Huth’s online brain viewer. [Continue reading…]
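
Huth and colleagues' published approach is a voxel-wise encoding model: represent the stimulus by semantic features and fit a regularised linear map from those features to each voxel's response. The sketch below captures that general recipe on synthetic data; the dimensions, the regularisation strength and the train/test split are illustrative assumptions, not the study's actual pipeline:

```python
# Voxel-wise encoding model in miniature: ridge regression from semantic
# stimulus features to each voxel's response. Data are synthetic, and the
# dimensions and regularisation are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_timepoints, n_features, n_voxels = 1000, 50, 200

X = rng.normal(size=(n_timepoints, n_features))    # semantic features over time
W_true = rng.normal(size=(n_features, n_voxels))   # hidden semantic tuning
Y = X @ W_true + rng.normal(scale=2.0, size=(n_timepoints, n_voxels))

# Closed-form ridge fit on the first 800 timepoints: W = (X'X + aI)^-1 X'Y.
alpha = 10.0
Xtr, Ytr, Xte, Yte = X[:800], Y[:800], X[800:], Y[800:]
W = np.linalg.solve(Xtr.T @ Xtr + alpha * np.eye(n_features), Xtr.T @ Ytr)

# Held-out prediction accuracy, per voxel; the fitted columns of W are
# each voxel's "semantic tuning", the raw material of a semantic map.
pred = Xte @ W
r = [np.corrcoef(pred[:, v], Yte[:, v])[0, 1] for v in range(n_voxels)]
print(f"median held-out correlation across voxels: {np.median(r):.2f}")
```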

Exploding the myth of the scientific vs artistic mind

By David Pearson, Anglia Ruskin University

It’s a stereotype, but many of us have made the assumption that scientists are a bit rigid and less artistic than others. Artists, on the other hand, are often seen as being less rational than the rest of us. Sometimes described as the left side of the brain versus the right side – or simply logical thinking versus artistic creativity – the two are often seen as polar opposites.

Neuroscience has already shown that everyone uses both sides of the brain when performing any task. And while certain patterns of brain activity have sometimes been linked to artistic or logical thinking, this doesn’t really explain who is good at what – and why. That’s because the exact interplay of nature and nurture is notoriously difficult to tease out. But if we put the brain aside for a while and just focus on documented ability, is there any evidence to support the logic versus art stereotype?

Psychological research has approached this question by distinguishing between two styles of thinking: convergent and divergent. The emphasis in convergent thinking is on analytical and deductive reasoning, such as that measured in IQ tests. Divergent thinking, however, is more spontaneous and free-flowing. It focuses on novelty and is measured by tasks requiring us to generate multiple solutions for a problem – thinking of innovative uses for familiar objects, for example.

Studies conducted during the 1960s suggested that convergent thinkers were more likely to be good at science subjects at school. Divergent thinking was shown to be more common in the arts and humanities.

However, we are increasingly learning that convergent and divergent thinking styles need not be mutually exclusive. In 2011, researchers assessed 116 final-year UK arts and science undergraduates on measures of convergent and divergent thinking and creative problem solving. The study found no difference in ability between the arts and science groups on any of these measures. Another study reported no significant difference in measures of divergent thinking between arts, natural science and social science undergraduates. Both arts and natural sciences students, however, rated themselves as being more creative than social sciences students did.

[Read more…]

Why our perceptions of an independent reality must be illusions

Amanda Gefter writes: As we go about our daily lives, we tend to assume that our perceptions — sights, sounds, textures, tastes — are an accurate portrayal of the real world. Sure, when we stop and think about it — or when we find ourselves fooled by a perceptual illusion — we realize with a jolt that what we perceive is never the world directly, but rather our brain’s best guess at what that world is like, a kind of internal simulation of an external reality. Still, we bank on the fact that our simulation is a reasonably decent one. If it weren’t, wouldn’t evolution have weeded us out by now? The true reality might be forever beyond our reach, but surely our senses give us at least an inkling of what it’s really like.

Not so, says Donald D. Hoffman, a professor of cognitive science at the University of California, Irvine. Hoffman has spent the past three decades studying perception, artificial intelligence, evolutionary game theory and the brain, and his conclusion is a dramatic one: The world presented to us by our perceptions is nothing like reality. What’s more, he says, we have evolution itself to thank for this magnificent illusion, as it maximizes evolutionary fitness by driving truth to extinction.
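
The claim that evolution drives truth to extinction comes out of evolutionary game theory, and a toy version is easy to run. The simulation below is my own construction, loosely in the spirit of Hoffman's "fitness-beats-truth" argument rather than his actual model: when fitness is a non-monotonic function of a resource's true quantity, an agent that perceives only the fitness payoff reliably outforages one that truthfully perceives quantity and prefers more of it:

```python
# Toy "fitness beats truth" game, loosely in the spirit of Hoffman's
# evolutionary game theory arguments (not his actual model). Fitness
# peaks at an intermediate resource quantity, so perceiving the truth
# about quantity (and preferring more) is a losing strategy.
import random

def payoff(quantity):
    """Fitness is non-monotonic: too little or too much of the resource is bad."""
    return max(0.0, 1.0 - 4.0 * (quantity - 0.5) ** 2)

def forage(strategy, n_trials=100_000):
    total = 0.0
    for _ in range(n_trials):
        a, b = random.random(), random.random()  # true quantities of two resources
        if strategy == "truth":
            choice = max(a, b)                   # sees quantity, takes more
        else:
            choice = a if payoff(a) >= payoff(b) else b  # sees only payoff
        total += payoff(choice)
    return total / n_trials

print("truth-seeing agent  :", round(forage("truth"), 3))    # ≈ 0.67
print("fitness-seeing agent:", round(forage("fitness"), 3))  # ≈ 0.83
```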

Getting at questions about the nature of reality, and disentangling the observer from the observed, is an endeavor that straddles the boundaries of neuroscience and fundamental physics. On one side you’ll find researchers scratching their chins raw trying to understand how a three-pound lump of gray matter obeying nothing more than the ordinary laws of physics can give rise to first-person conscious experience. This is the aptly named “hard problem.” [Continue reading…]

How animals think

Alison Gopnik writes: For 2,000 years, there was an intuitive, elegant, compelling picture of how the world worked. It was called “the ladder of nature.” In the canonical version, God was at the top, followed by angels, who were followed by humans. Then came the animals, starting with noble wild beasts and descending to domestic animals and insects. Human animals followed the scheme, too. Women ranked lower than men, and children were beneath them. The ladder of nature was a scientific picture, but it was also a moral and political one. It was only natural that creatures higher up would have dominion over those lower down.

Darwin’s theory of evolution by natural selection delivered a serious blow to this conception. Natural selection is a blind historical process, stripped of moral hierarchy. A cockroach is just as well adapted to its environment as I am to mine. In fact, the bug may be better adapted — cockroaches have been around a lot longer than humans have, and may well survive after we are gone. But the very word evolution can imply a progression — New Agers talk about becoming “more evolved” — and in the 19th century, it was still common to translate evolutionary ideas into ladder-of-nature terms.

Modern biological science has in principle rejected the ladder of nature. But the intuitive picture is still powerful. In particular, the idea that children and nonhuman animals are lesser beings has been surprisingly persistent. Even scientists often act as if children and animals are defective adult humans, defined by the abilities we have and they don’t. Neuroscientists, for example, sometimes compare brain-damaged adults to children and animals.

We should always have been suspicious of this picture, but now we have no excuse for continuing with it. In the past 30 years, research has explored the distinctive ways in which children as well as animals think, and the discoveries deal the coup de grâce to the ladder of nature. The primatologist Frans de Waal has been at the forefront of the animal research, and its most important public voice. In Are We Smart Enough to Know How Smart Animals Are?, he makes a passionate and convincing case for the sophistication of nonhuman minds. [Continue reading…]

Processing high-level math concepts uses the same neural networks as basic math skills

Scientific American reports: Alan Turing, Albert Einstein, Stephen Hawking, John Nash — these “beautiful” minds never fail to enchant the public, but they also remain somewhat elusive. How do some people progress from being able to perform basic arithmetic to grasping advanced mathematical concepts and thinking at levels of abstraction that baffle the rest of the population? Neuroscience has now begun to pin down whether the brain of a math wiz somehow takes conceptual thinking to another level.

Specifically, scientists have long debated whether the basis of high-level mathematical thought is tied to the brain’s language-processing centers — that thinking at such a level of abstraction requires linguistic representation and an understanding of syntax — or to independent regions associated with number and spatial reasoning. In a study published this week in Proceedings of the National Academy of Sciences, a pair of researchers at the INSERM–CEA Cognitive Neuroimaging Unit in France reported that the brain areas involved in math are different from those engaged in equally complex nonmathematical thinking.

The team used functional magnetic resonance imaging (fMRI) to scan the brains of 15 professional mathematicians and 15 nonmathematicians of the same academic standing. While in the scanner the subjects listened to a series of 72 high-level mathematical statements, divided evenly among algebra, analysis, geometry and topology, as well as 18 high-level nonmathematical (mostly historical) statements. They had four seconds to reflect on each proposition and determine whether it was true, false or meaningless.

The researchers found that in the mathematicians only, listening to math-related statements activated a network involving bilateral intraparietal, dorsal prefrontal, and inferior temporal regions of the brain. This circuitry does not overlap with the areas involved in language processing and semantics, which were activated in both mathematicians and nonmathematicians when they were presented with the nonmathematical statements. “On the contrary,” says study co-author and graduate student Marie Amalric, “our results show that high-level mathematical reflection recycles brain regions associated with an evolutionarily ancient knowledge of number and space.” [Continue reading…]
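
For readers curious what "activated a network" means operationally, the standard workhorse in task fMRI is a voxel-wise regression followed by a condition contrast. The sketch below is a generic illustration on synthetic data; the regressor design, effect sizes and threshold are my assumptions, and a real pipeline would also model the haemodynamic response and correct for multiple comparisons:

```python
# Generic voxel-wise contrast analysis for task fMRI (illustrative only:
# real pipelines convolve regressors with a haemodynamic response and
# correct for multiple comparisons). Synthetic stand-in data.
import numpy as np

rng = np.random.default_rng(2)
n_scans, n_voxels = 360, 500

# Indicator regressors: each scan presents a math or a nonmath statement.
math = rng.integers(0, 2, size=n_scans).astype(float)
X = np.column_stack([math, 1.0 - math])          # [math, nonmath]

# Synthetic data: the first 100 voxels respond more to math statements.
beta_true = np.zeros((2, n_voxels))
beta_true[0, :100] = 1.5
Y = X @ beta_true + rng.normal(size=(n_scans, n_voxels))

# Least-squares fit per voxel, then the math-minus-nonmath contrast.
beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
c = np.array([1.0, -1.0])
contrast = c @ beta

# t-statistic per voxel: contrast divided by its estimated standard error.
resid = Y - X @ beta
sigma2 = (resid ** 2).sum(axis=0) / (n_scans - 2)
var_c = c @ np.linalg.inv(X.T @ X) @ c
t = contrast / np.sqrt(sigma2 * var_c)
print("voxels with t > 5:", int((t > 5).sum()))  # ≈ the 100 math responders
```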
