Category Archives: Neuroscience

Why your brain hates other people — and how to make it think differently

Robert Sapolsky writes: As a kid, I saw the 1968 version of Planet of the Apes. As a future primatologist, I was mesmerized. Years later I discovered an anecdote about its filming: At lunchtime, the people playing chimps and those playing gorillas ate in separate groups.

It’s been said, “There are two kinds of people in the world: those who divide the world into two kinds of people and those who don’t.” In reality, there’s lots more of the former. And it can be vastly consequential when people are divided into Us and Them, ingroup and outgroup, “the people” (i.e., our kind) and the Others.

Humans universally make Us/Them dichotomies along lines of race, ethnicity, gender, language group, religion, age, socioeconomic status, and so on. And it’s not a pretty picture. We do so with remarkable speed and neurobiological efficiency; have complex taxonomies and classifications of ways in which we denigrate Thems; do so with a versatility that ranges from the minutest of microaggressions to bloodbaths of savagery; and regularly decide what is inferior about Them based on pure emotion, followed by primitive rationalizations that we mistake for rationality. Pretty depressing.

But crucially, there is room for optimism. Much of that is grounded in something distinctly human, which is that we all carry multiple Us/Them divisions in our heads. A Them in one case can be an Us in another, and it can only take an instant for that identity to flip. Thus, there is hope that, with science’s help, clannishness and xenophobia can lessen, perhaps even so much so that Hollywood-extra chimps and gorillas can break bread together. [Continue reading…]


Power causes brain damage

Jerry Useem writes: If power were a prescription drug, it would come with a long list of known side effects. It can intoxicate. It can corrupt. It can even make Henry Kissinger believe that he’s sexually magnetic. But can it cause brain damage?

When various lawmakers lit into John Stumpf at a congressional hearing last fall, each seemed to find a fresh way to flay the now-former CEO of Wells Fargo for failing to stop some 5,000 employees from setting up phony accounts for customers. But it was Stumpf’s performance that stood out. Here was a man who had risen to the top of the world’s most valuable bank, yet he seemed utterly unable to read a room. Although he apologized, he didn’t appear chastened or remorseful. Nor did he seem defiant or smug or even insincere. He looked disoriented, like a jet-lagged space traveler just arrived from Planet Stumpf, where deference to him is a natural law and 5,000 a commendably small number. Even the most direct barbs—“You have got to be kidding me” (Sean Duffy of Wisconsin); “I can’t believe some of what I’m hearing here” (Gregory Meeks of New York)—failed to shake him awake.

What was going through Stumpf’s head? New research suggests that the better question may be: What wasn’t going through it?

The historian Henry Adams was being metaphorical, not medical, when he described power as “a sort of tumor that ends by killing the victim’s sympathies.” But that’s not far from where Dacher Keltner, a psychology professor at UC Berkeley, ended up after years of lab and field experiments. Subjects under the influence of power, he found in studies spanning two decades, acted as if they had suffered a traumatic brain injury—becoming more impulsive, less risk-aware, and, crucially, less adept at seeing things from other people’s point of view.

Sukhvinder Obhi, a neuroscientist at McMaster University, in Ontario, recently described something similar. Unlike Keltner, who studies behaviors, Obhi studies brains. And when he put the heads of the powerful and the not-so-powerful under a transcranial-magnetic-stimulation machine, he found that power, in fact, impairs a specific neural process, “mirroring,” that may be a cornerstone of empathy. Which gives a neurological basis to what Keltner has termed the “power paradox”: Once we have power, we lose some of the capacities we needed to gain it in the first place. [Continue reading…]


The thoughts of a spiderweb

Joshua Sokol writes: Millions of years ago, a few spiders abandoned the kind of round webs that the word “spiderweb” calls to mind and started to focus on a new strategy. Before, they would wait for prey to become ensnared in their webs and then walk out to retrieve it. Then they began building horizontal nets to use as a fishing platform. Now their modern descendants, the cobweb spiders, dangle sticky threads below, wait until insects walk by and get snagged, and reel their unlucky victims in.

In 2008, the researcher Hilton Japyassú prompted 12 species of orb spiders collected from all over Brazil to go through this transition again. He waited until the spiders wove an ordinary web. Then he snipped its threads so that the silk drooped to where crickets wandered below. When a cricket got hooked, not all the orb spiders could fully pull it up, as a cobweb spider does. But some could, and all at least began to reel it in with their two front legs.

Their ability to recapitulate the ancient spiders’ innovation got Japyassú, a biologist at the Federal University of Bahia in Brazil, thinking. When the spider was confronted with a problem to solve that it might not have seen before, how did it figure out what to do? “Where is this information?” he said. “Where is it? Is it in her head, or does this information emerge during the interaction with the altered web?”

In February, Japyassú and Kevin Laland, an evolutionary biologist at the University of St Andrews, proposed a bold answer to the question. They argued in a review paper, published in the journal Animal Cognition, that a spider’s web is at least an adjustable part of its sensory apparatus, and at most an extension of the spider’s cognitive system.

This would make the web a model example of extended cognition, an idea first proposed by the philosophers Andy Clark and David Chalmers in 1998 to apply to human thought. In accounts of extended cognition, processes like checking a grocery list or rearranging Scrabble tiles in a tray are close enough to memory-retrieval or problem-solving tasks that happen entirely inside the brain that proponents argue they are actually part of a single, larger, “extended” mind.

Among philosophers of mind, that idea has racked up citations, including supporters and critics. And by its very design, Japyassú’s paper, which aims to export extended cognition as a testable idea to the field of animal behavior, is already stirring up antibodies among scientists. “I got the impression that it was being very careful to check all the boxes for hot topics and controversial topics in animal cognition,” said Alex Jordan, a collective behavioral scientist at the Max Planck Institute in Konstanz, Germany (who nonetheless supports the idea).

While many disagree with the paper’s interpretations, the study shouldn’t be mistaken for a piece of philosophy. Japyassú and Laland propose ways to test their ideas in concrete experiments that involve manipulating the spider’s web — tests that other researchers are excited about. “We can break that machine; we can snap strands; we can reduce the way that animal is able to perceive the system around it,” Jordan said. “And that generates some very direct and testable hypotheses.” [Continue reading…]


The neurology of reaching a destination

Moheb Costandi writes: How do humans and other animals find their way from A to B? This apparently simple question has no easy answer. But after decades of extensive research, a picture of how the brain encodes space and enables us to navigate through it is beginning to emerge. Earlier, neuroscientists had found that the mammalian brain contains at least three different cell types, which cooperate to encode neural representations of an animal’s location and movements.
But that picture has just grown far more complex. New research now points to the existence of two more types of brain cells involved in spatial navigation — and suggests previously unrecognized neural mechanisms underlying the way mammals make their way about the world.

Earlier work, performed in freely moving rodents, revealed that neurons called place cells fire when an animal is in a specific location. Another type — grid cells — activate periodically as an animal moves around. Finally, head direction cells fire when a mouse or rat moves in a particular direction. Together, these cells, which are located in and around a deep brain structure called the hippocampus, appear to encode an animal’s current location within its environment by tracking the distance and direction of its movements.
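The three cell types just described are often summarized as tuning curves: a firing rate that peaks at a preferred location, heading, or spatial period. The sketch below is a toy illustration of that idea; all parameters (field centres, widths, grid spacing, peak rates) are invented for illustration and not fitted to any real recording.

```python
import math

# Toy tuning curves for the three spatial cell types described above.
# All parameters are illustrative, not fitted to any real recording.

def place_cell_rate(pos, centre=(0.5, 0.5), width=0.1, peak=20.0):
    """Gaussian place field: fires maximally when the animal is at `centre`."""
    d2 = (pos[0] - centre[0]) ** 2 + (pos[1] - centre[1]) ** 2
    return peak * math.exp(-d2 / (2 * width ** 2))

def head_direction_rate(theta, preferred=math.pi / 2, kappa=4.0, peak=30.0):
    """Von Mises tuning: fires when heading is near the preferred direction."""
    return peak * math.exp(kappa * (math.cos(theta - preferred) - 1))

def grid_cell_rate(pos, spacing=0.3, peak=15.0):
    """Periodic tuning: three plane waves at 60-degree offsets produce the
    hexagonal firing lattice characteristic of grid cells."""
    angles = (0.0, math.pi / 3, 2 * math.pi / 3)
    s = sum(math.cos(2 * math.pi * (math.cos(a) * pos[0] + math.sin(a) * pos[1]) / spacing)
            for a in angles)
    return peak * max(0.0, s / 3.0)
```

Modelled this way, a place cell fires hardest at its field centre and falls off with distance; a head-direction cell peaks at its preferred heading regardless of position; and a grid cell fires at a lattice of locations rather than at just one.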

This process is fine for simply moving around, but it does not explain exactly how a traveler gets to a specific destination. The question of how the brain encodes the endpoint of a journey has remained unanswered. To investigate this, Ayelet Sarel of the Weizmann Institute of Science in Israel and her colleagues trained three Egyptian fruit bats to fly in complicated paths and then land at a specific location where they could eat and rest. The researchers recorded the activity of a total of 309 hippocampal neurons with a wireless electrode array. About a third of these neurons exhibited the characteristics of place cells, each of them firing only when the bat was in a specific area of the large flight room. But the researchers also identified 58 cells that fired only when the bats were flying directly toward the landing site. [Continue reading…]


Our slow, uncertain brains are still better than computers — here’s why

By Parashkev Nachev, UCL

Automated financial trading machines can make complex decisions in a thousandth of a second. A human being making a choice – however simple – can never be faster than about one-fifth of a second. Our reaction times are not only slow but also remarkably variable, ranging over hundreds of milliseconds.

Is this because our brains are poorly designed, prone to random uncertainty – or “noise” in electronics jargon? Measured in the laboratory, even the neurons of a fly are both fast and precise in their responses to external events, down to a few milliseconds. The sloppiness of our reaction times looks less like an accident than a built-in feature. The brain deliberately procrastinates, even if we ask it to do otherwise.

Massively parallel wetware

Why should this be? Unlike computers, our brains are massively parallel in their organisation, concurrently running many millions of separate processes. They must do this because they are not designed to perform a specific set of actions but to select from a vast repertoire of alternatives that the fundamental unpredictability of our environment offers us. From an evolutionary perspective, it is best to trust nothing and no one, least of all oneself. So before each action the brain must flip through a vast Rolodex of possibilities. It is amazing it can do this at all, let alone in a fraction of a second.

But why the variability? There is hierarchically nothing higher than the brain, so decisions have to arise through peer-to-peer interactions between different groups of neurons. Since there can be only one winner at any one time – our movements would otherwise be chaotic – the mode of resolution is less negotiation than competition: a winner-takes-all race. To ensure the competition is fair, the race must run for a minimum length of time – hence the delay – and the time it takes will depend on the nature and quality of the field of competitors, hence the variability.

Fanciful though this may sound, the distributions of human reaction times, across different tasks, limbs, and people, have been repeatedly shown to fit the “race” model remarkably well. And one part of the brain – the medial frontal cortex – seems to track reaction time tightly, as an area crucial to procrastination ought to. Disrupting the medial frontal cortex should therefore disrupt the race, bringing it to an early close. Rather than slowing us down, disrupting the brain should here speed us up, accelerating behaviour but at the cost of less considered actions.
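The race model described here is simple enough to simulate. The sketch below is a minimal winner-takes-all race with invented parameters (competitor count, rise rates), not the author's actual model: each candidate action climbs toward a shared threshold at a noisy rate, the first arrival wins, and its arrival time is the reaction time.

```python
import random
import statistics

# A minimal winner-takes-all race in the spirit of the model above.
# Parameters (competitor count, rise rates) are illustrative. Each candidate
# action is an accumulator climbing toward a shared threshold at a noisy
# rate; the first arrival wins, and its arrival time is the reaction time.

def race_once(rng, n_competitors=8, threshold=1.0, mean_rate=5.0, sd_rate=1.5):
    rates = [max(1e-6, rng.gauss(mean_rate, sd_rate)) for _ in range(n_competitors)]
    return threshold / max(rates)  # the fastest riser wins

def simulate(trials=5000, seed=42, **kwargs):
    rng = random.Random(seed)
    return [race_once(rng, **kwargs) for _ in range(trials)]

rts = simulate()
mean_rt = statistics.mean(rts)
spread = statistics.stdev(rts)  # variability comes free with the race
```

One property falls out immediately: enlarging or speeding up the field of competitors shortens the winner's time, which is exactly the kind of dependence on "the nature and quality of the field" described above.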

Continue reading


The more you know about a topic, the more likely you are to have false memories about it

By Ciara Greene, University College Dublin

Human memory does not operate like a video tape that can be rewound and rewatched, with every viewing revealing the same events in the same order. In fact, memories are reconstructed every time we recall them. Aspects of the memory can be altered, added or deleted altogether with each new recollection. This can lead to the phenomenon of false memory, where people have clear memories of an event that they never experienced.

False memory is surprisingly common, but a number of factors can increase its frequency. Recent research in my lab shows that being very interested in a topic can make you twice as likely to experience a false memory about that topic.

Previous research has indicated that experts in a few clearly defined fields, such as investments and American football, might be more likely to experience false memory in relation to their areas of expertise. Opinion as to the cause of this effect is divided. Some researchers have suggested that greater knowledge makes a person more likely to incorrectly recognise new information that is similar to previously experienced information. Another interpretation suggests that experts feel that they should know everything about their topic of expertise. According to this account, experts’ sense of accountability for their judgements causes them to “fill in the gaps” in their knowledge with plausible, but false, information.

To further investigate this, we asked 489 participants to rank seven topics from most to least interesting. The topics we used were football, politics, business, technology, film, science and pop music. The participants were then asked if they remembered the events described in four news items about the topic they selected as the most interesting, and four items about the topic selected as least interesting. In each case, three of the events depicted had really happened and one was fictional.

The results showed that being interested in a topic increased the frequency of accurate memories relating to that topic. Critically, it also increased the number of false memories – 25% of people experienced a false memory in relation to an interesting topic, compared with 10% in relation to a less interesting topic. Importantly, our participants were not asked to identify themselves as experts, and did not get to choose which topics they would answer questions about. This means that the increase in false memories is unlikely to be due to a sense of accountability for judgements about a specialist topic.
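The headline comparison (25% versus 10%) can be sanity-checked with a standard two-proportion z-test. The counts below are hypothetical, back-calculated from the published percentages and the sample of 489 participants; the study's own statistical analysis may well have differed.

```python
import math

# Two-proportion z-test on hypothetical counts back-calculated from the
# reported percentages (25% vs 10%) and the sample of 489 participants.
# The study's own analysis may have differed.

def two_proportion_z(x1, n1, x2, n2):
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# ~25% of 489 with a false memory for the interesting topic, ~10% for the other:
z = two_proportion_z(122, 489, 49, 489)
# |z| > 1.96 corresponds to p < 0.05 under the normal approximation
```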

Continue reading


Why neuroscientists need to study the crow


Grigori Guitchounts writes: The animals of neuroscience research are an eclectic bunch, and for good reason. Different model organisms—like zebrafish larvae, C. elegans worms, fruit flies, and mice — give researchers the opportunity to answer specific questions. The first two, for example, have transparent bodies, which let scientists easily peer into their brains; the last two have eminently tweakable genomes, which allow scientists to isolate the effects of specific genes. For cognition studies, researchers have relied largely on primates and, more recently, rats, which I use in my own work. But the time is ripe for this exclusive club of research animals to accept a new, avian member: the corvid family.

Corvids, such as crows, ravens, and magpies, are among the most intelligent birds on the planet — the list of their cognitive achievements goes on and on — yet neuroscientists have not scrutinized their brains for one simple reason: They don’t have a neocortex. The obsession with the neocortex in neuroscience research is not unwarranted; what’s unwarranted is the notion that the neocortex alone is responsible for sophisticated cognition. Because birds lack this structure—the most recently evolved portion of the mammalian brain, crucial to human intelligence—neuroscientists have largely and unfortunately neglected the neural basis of corvid intelligence.

This makes them miss an opportunity for an important insight. Having diverged from mammals more than 300 million years ago, avian brains have had plenty of time to develop along remarkably different lines (instead of a cortex with its six layers of neatly arranged neurons, birds evolved groups of neurons densely packed into clusters called nuclei). So, any computational similarities between corvid and primate brains — which are so different neurally — would indicate the development of common solutions to shared evolutionary problems, like creating and storing memories, or learning from experience. If neuroscientists want to know how brains produce intelligence, looking solely at the neocortex won’t cut it; they must study how corvid brains achieve the same clever behaviors that we see in ourselves and other mammals. [Continue reading…]


Human brain mapped in unprecedented detail

The Human Connectome Project

Nature reports: Think of a spinning globe and the patchwork of countries it depicts: such maps help us to understand where we are, and that nations differ from one another. Now, neuroscientists have charted an equivalent map of the brain’s outermost layer — the cerebral cortex — subdividing each hemisphere’s mountain- and valley-like folds into 180 separate parcels.

Ninety-seven of these areas have never previously been described, despite showing clear differences in structure, function and connectivity from their neighbours. The new brain map is published today in Nature.

Each discrete area on the map contains cells with similar structure, function and connectivity. But these areas differ from each other, just as different countries have well-defined borders and unique cultures, says David Van Essen, a neuroscientist at Washington University Medical School in St Louis, Missouri, who supervised the study.

Neuroscientists have long sought to divide the brain into smaller pieces to better appreciate how it works as a whole. One of the best-known brain maps chops the cerebral cortex into 52 areas based on the arrangement of cells in the tissue. More recently, maps have been constructed using magnetic resonance imaging (MRI) techniques — such as functional MRI, which measures the flow of blood in response to different mental tasks.

Yet until now, most such maps have been based on a single type of measurement. That can provide an incomplete or even misleading view of the brain’s inner workings, says Thomas Yeo, a computational neuroscientist at the National University of Singapore. The new map is based on multiple MRI measurements, which Yeo says “greatly increases confidence that they are producing the best in vivo estimates of cortical areas”. [Continue reading…]
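The multi-measurement approach boils down to giving every cortical location a feature vector (myelin content, connectivity, task response, and so on) and grouping locations whose vectors look alike. The sketch below illustrates only that grouping step, with a tiny pure-Python k-means on synthetic two-measurement "locations"; the study itself used many more measurements and a much richer, semi-automated method.

```python
import random

# Grouping step of a parcellation, sketched as k-means over per-location
# feature vectors. Two synthetic "measurements" per location and two areas;
# the actual study used many measurements and a semi-automated method.

def kmeans(points, k=2, iters=20):
    # spread the initial centres across the data (a real implementation
    # would use something like k-means++ initialisation)
    step = max(1, len(points) // k)
    centres = list(points[::step])[:k]
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            best = min(range(k),
                       key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centres[i])))
            groups[best].append(p)
        centres = [tuple(sum(vals) / len(g) for vals in zip(*g)) if g else centres[i]
                   for i, g in enumerate(groups)]
    return centres, groups

# Two synthetic "areas": feature vectors near (0, 0) and near (1, 1).
rng = random.Random(1)
points = ([(rng.gauss(0.0, 0.1), rng.gauss(0.0, 0.1)) for _ in range(50)]
          + [(rng.gauss(1.0, 0.1), rng.gauss(1.0, 0.1)) for _ in range(50)])
centres, groups = kmeans(points)
```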


Where is language in the brain?

By Gaia Vince, Mosaic

If you read a sentence (such as this one) about kicking a ball, neurons related to the motor function of your leg and foot will be activated in your brain. Similarly, if you talk about cooking garlic, neurons associated with smelling will fire up. Since it is almost impossible to do or think about anything without using language – whether this entails an internal talk-through by your inner voice or following a set of written instructions – language pervades our brains and our lives like no other skill.

For more than a century, it’s been established that our capacity to use language is usually located in the left hemisphere of the brain, specifically in two areas: Broca’s area (associated with speech production and articulation) and Wernicke’s area (associated with comprehension). Damage to either of these, caused by a stroke or other injury, can lead to language and speech problems or aphasia, a loss of language.

In the past decade, however, neurologists have discovered it’s not that simple: language is not restricted to two areas of the brain or even just to one side, and the brain itself can grow when we learn new languages.

Continue reading


Whatever you think, you don’t necessarily know your own mind

Keith Frankish writes: Do you think racial stereotypes are false? Are you sure? I’m not asking if you’re sure whether or not the stereotypes are false, but if you’re sure whether or not you think that they are. That might seem like a strange question. We all know what we think, don’t we?

Most philosophers of mind would agree, holding that we have privileged access to our own thoughts, which is largely immune from error. Some argue that we have a faculty of ‘inner sense’, which monitors the mind just as the outer senses monitor the world. There have been exceptions, however. The mid-20th-century behaviourist philosopher Gilbert Ryle held that we learn about our own minds, not by inner sense, but by observing our own behaviour, and that friends might know our minds better than we do. (Hence the joke: two behaviourists have just had sex and one turns to the other and says: ‘That was great for you, darling. How was it for me?’) And the contemporary philosopher Peter Carruthers proposes a similar view (though for different reasons), arguing that our beliefs about our own thoughts and decisions are the product of self-interpretation and are often mistaken.

Evidence for this comes from experimental work in social psychology. It is well established that people sometimes think they have beliefs that they don’t really have. For example, if offered a choice between several identical items, people tend to choose the one on the right. But when asked why they chose it, they confabulate a reason, saying they thought the item was a nicer colour or better quality. Similarly, if a person performs an action in response to an earlier (and now forgotten) hypnotic suggestion, they will confabulate a reason for performing it. What seems to be happening is that the subjects engage in unconscious self-interpretation. They don’t know the real explanation of their action (a bias towards the right, hypnotic suggestion), so they infer some plausible reason and ascribe it to themselves. They are not aware that they are interpreting, however, and make their reports as if they were directly aware of their reasons. [Continue reading…]


There’s no such thing as free will


Stephen Cave writes: For centuries, philosophers and theologians have almost unanimously held that civilization as we know it depends on a widespread belief in free will — and that losing this belief could be calamitous. Our codes of ethics, for example, assume that we can freely choose between right and wrong. In the Christian tradition, this is known as “moral liberty” — the capacity to discern and pursue the good, instead of merely being compelled by appetites and desires. The great Enlightenment philosopher Immanuel Kant reaffirmed this link between freedom and goodness. If we are not free to choose, he argued, then it would make no sense to say we ought to choose the path of righteousness.

Today, the assumption of free will runs through every aspect of American politics, from welfare provision to criminal law. It permeates the popular culture and underpins the American dream — the belief that anyone can make something of themselves no matter what their start in life. As Barack Obama wrote in The Audacity of Hope, American “values are rooted in a basic optimism about life and a faith in free will.”

So what happens if this faith erodes?

The sciences have grown steadily bolder in their claim that all human behavior can be explained through the clockwork laws of cause and effect. This shift in perception is the continuation of an intellectual revolution that began about 150 years ago, when Charles Darwin first published On the Origin of Species. Shortly after Darwin put forth his theory of evolution, his cousin Sir Francis Galton began to draw out the implications: If we have evolved, then mental faculties like intelligence must be hereditary. But we use those faculties — which some people have to a greater degree than others — to make decisions. So our ability to choose our fate is not free, but depends on our biological inheritance.

Galton launched a debate that raged throughout the 20th century over nature versus nurture. Are our actions the unfolding effect of our genetics? Or the outcome of what has been imprinted on us by the environment? Impressive evidence accumulated for the importance of each factor. Whether scientists supported one, the other, or a mix of both, they increasingly assumed that our deeds must be determined by something. [Continue reading…]


Scientists map brain’s thesaurus to help decode inner thoughts

UC Berkeley reports: What if a map of the brain could help us decode people’s inner thoughts?

UC Berkeley scientists have taken a step in that direction by building a “semantic atlas” that shows in vivid colors and multiple dimensions how the human brain organizes language. The atlas identifies brain areas that respond to words that have similar meanings.

The findings, published in the journal Nature, are based on a brain imaging study that recorded neural activity while study volunteers listened to stories from the “Moth Radio Hour.” They show that at least one-third of the brain’s cerebral cortex, including areas dedicated to high-level cognition, is involved in language processing.

Notably, the study found that different people share similar language maps: “The similarity in semantic topography across different subjects is really surprising,” said study lead author Alex Huth, a postdoctoral researcher in neuroscience at UC Berkeley. Click here for Huth’s online brain viewer. [Continue reading…]
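At the heart of a semantic atlas of this kind is a voxel-wise encoding model: predict each voxel's response from semantic features of the words being heard, then read the fitted weights as that voxel's semantic tuning. The sketch below is a deliberately tiny stand-in for that idea, a two-feature ridge regression on synthetic data, not the study's actual pipeline.

```python
import random

# Tiny stand-in for voxel-wise semantic encoding: predict a voxel's response
# from semantic features of the words heard, then read the fitted weights as
# the voxel's tuning. Two features and a closed-form 2x2 ridge solve keep
# the sketch self-contained; the study's real pipeline was far richer.

def ridge_2feat(X, y, lam=0.1):
    """Closed-form ridge for two features: w = (X'X + lam*I)^(-1) X'y."""
    a = sum(x0 * x0 for x0, _ in X) + lam
    b = sum(x0 * x1 for x0, x1 in X)
    d = sum(x1 * x1 for _, x1 in X) + lam
    c0 = sum(x0 * yi for (x0, _), yi in zip(X, y))
    c1 = sum(x1 * yi for (_, x1), yi in zip(X, y))
    det = a * d - b * b
    return ((d * c0 - b * c1) / det, (a * c1 - b * c0) / det)

# A synthetic "voxel" that responds to feature 0 (say, social words) only:
rng = random.Random(0)
X = [(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(200)]
y = [2.0 * x0 + rng.gauss(0, 0.1) for x0, _ in X]
w = ridge_2feat(X, y)  # w[0] should recover ~2.0, w[1] ~0.0
```

Fitting one such model per voxel, then comparing the weight maps across people, is the general shape of the approach; the strong cross-subject similarity the researchers report is a statement about those fitted maps.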


Exploding the myth of the scientific vs artistic mind

By David Pearson, Anglia Ruskin University

It’s a stereotype, but many of us have made the assumption that scientists are a bit rigid and less artistic than others. Artists, on the other hand, are often seen as being less rational than the rest of us. Sometimes described as the left side of the brain versus the right side – or simply logical thinking versus artistic creativity – the two are often seen as polar opposites.

Neuroscience has already shown that everyone uses both sides of the brain when performing any task. And while certain patterns of brain activity have sometimes been linked to artistic or logical thinking, it doesn’t really explain who is good at what – and why. That’s because the exact interplay of nature and nurture is notoriously difficult to tease out. But if we put the brain aside for a while and just focus on documented ability, is there any evidence to support the logic versus art stereotype?

Psychological research has approached this question by distinguishing between two styles of thinking: convergent and divergent. The emphasis in convergent thinking is on analytical and deductive reasoning, such as that measured in IQ tests. Divergent thinking, however, is more spontaneous and free-flowing. It focuses on novelty and is measured by tasks requiring us to generate multiple solutions for a problem. An example may be thinking of new, innovative uses for familiar objects.

Studies conducted during the 1960s suggested that convergent thinkers were more likely to be good at science subjects at school. Divergent thinking was shown to be more common in the arts and humanities.

However, we are increasingly learning that convergent and divergent thinking styles need not be mutually exclusive. In 2011, researchers assessed 116 final-year UK arts and science undergraduates on measures of convergent and divergent thinking and creative problem solving. The study found no difference in ability between the arts and science groups on any of these measures. Another study reported no significant difference in measures of divergent thinking between arts, natural science and social science undergraduates. Both arts and natural sciences students, however, rated themselves as being more creative than social sciences students did.

Continue reading


Why our perceptions of an independent reality must be illusions


Amanda Gefter writes: As we go about our daily lives, we tend to assume that our perceptions — sights, sounds, textures, tastes — are an accurate portrayal of the real world. Sure, when we stop and think about it — or when we find ourselves fooled by a perceptual illusion — we realize with a jolt that what we perceive is never the world directly, but rather our brain’s best guess at what that world is like, a kind of internal simulation of an external reality. Still, we bank on the fact that our simulation is a reasonably decent one. If it wasn’t, wouldn’t evolution have weeded us out by now? The true reality might be forever beyond our reach, but surely our senses give us at least an inkling of what it’s really like.

Not so, says Donald D. Hoffman, a professor of cognitive science at the University of California, Irvine. Hoffman has spent the past three decades studying perception, artificial intelligence, evolutionary game theory and the brain, and his conclusion is a dramatic one: The world presented to us by our perceptions is nothing like reality. What’s more, he says, we have evolution itself to thank for this magnificent illusion, as it maximizes evolutionary fitness by driving truth to extinction.
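Hoffman's argument has a game-theoretic core that is easy to sketch: when fitness is a non-monotonic function of some true quantity, an organism that perceives payoffs outcompetes one that perceives the quantity itself. The toy simulation below illustrates this with invented parameters; it is a sketch of the style of argument, not Hoffman's published model.

```python
import random

# Toy version of the evolutionary-game argument: two foragers choose between
# two resource patches. Payoff is a peaked (non-monotonic) function of
# resource quantity, so seeing the true quantity is unhelpful. All
# parameters are invented for illustration.

def payoff(quantity):
    """Best at intermediate quantity; too little or too much is bad."""
    return max(0.0, 1.0 - 2.0 * abs(quantity - 0.5))

def average_payoff(perceiver, trials=10000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        q1, q2 = rng.random(), rng.random()   # true quantities in two patches
        if perceiver == "truth":
            pick = q1 if q1 > q2 else q2      # sees, and prefers, more resource
        else:
            pick = q1 if payoff(q1) > payoff(q2) else q2  # sees only payoff
        total += payoff(pick)
    return total / trials

truth_score = average_payoff("truth")      # converges near 0.5
fitness_score = average_payoff("fitness")  # converges near 2/3
```

The payoff-perceiver wins by construction here; the force of the full argument is that under natural selection this advantage compounds, so perceptual systems tuned to fitness rather than truth are the ones that survive.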

Getting at questions about the nature of reality, and disentangling the observer from the observed, is an endeavor that straddles the boundaries of neuroscience and fundamental physics. On one side you’ll find researchers scratching their chins raw trying to understand how a three-pound lump of gray matter obeying nothing more than the ordinary laws of physics can give rise to first-person conscious experience. This is the aptly named “hard problem.” [Continue reading…]


How animals think

Alison Gopnik writes: For 2,000 years, there was an intuitive, elegant, compelling picture of how the world worked. It was called “the ladder of nature.” In the canonical version, God was at the top, followed by angels, who were followed by humans. Then came the animals, starting with noble wild beasts and descending to domestic animals and insects. Human animals followed the scheme, too. Women ranked lower than men, and children were beneath them. The ladder of nature was a scientific picture, but it was also a moral and political one. It was only natural that creatures higher up would have dominion over those lower down.

Darwin’s theory of evolution by natural selection delivered a serious blow to this conception. Natural selection is a blind historical process, stripped of moral hierarchy. A cockroach is just as well adapted to its environment as I am to mine. In fact, the bug may be better adapted — cockroaches have been around a lot longer than humans have, and may well survive after we are gone. But the very word evolution can imply a progression — New Agers talk about becoming “more evolved” — and in the 19th century, it was still common to translate evolutionary ideas into ladder-of-nature terms.

Modern biological science has in principle rejected the ladder of nature. But the intuitive picture is still powerful. In particular, the idea that children and nonhuman animals are lesser beings has been surprisingly persistent. Even scientists often act as if children and animals are defective adult humans, defined by the abilities we have and they don’t. Neuroscientists, for example, sometimes compare brain-damaged adults to children and animals.

We always should have been suspicious of this picture, but now we have no excuse for continuing with it. In the past 30 years, research has explored the distinctive ways in which children as well as animals think, and the discoveries deal the coup de grâce to the ladder of nature. The primatologist Frans de Waal has been at the forefront of the animal research, and its most important public voice. In Are We Smart Enough to Know How Smart Animals Are?, he makes a passionate and convincing case for the sophistication of nonhuman minds. [Continue reading…]


Processing high-level math concepts uses the same neural networks as basic math skills

Scientific American reports: Alan Turing, Albert Einstein, Stephen Hawking, John Nash — these “beautiful” minds never fail to enchant the public, but they also remain somewhat elusive. How do some people progress from being able to perform basic arithmetic to grasping advanced mathematical concepts and thinking at levels of abstraction that baffle the rest of the population? Neuroscience has now begun to pin down whether the brain of a math whiz somehow takes conceptual thinking to another level.

Specifically, scientists have long debated whether the basis of high-level mathematical thought is tied to the brain’s language-processing centers — that thinking at such a level of abstraction requires linguistic representation and an understanding of syntax — or to independent regions associated with number and spatial reasoning. In a study published this week in Proceedings of the National Academy of Sciences, a pair of researchers at the INSERM–CEA Cognitive Neuroimaging Unit in France reported that the brain areas involved in math are different from those engaged in equally complex nonmathematical thinking.

The team used functional magnetic resonance imaging (fMRI) to scan the brains of 15 professional mathematicians and 15 nonmathematicians of the same academic standing. While in the scanner the subjects listened to a series of 72 high-level mathematical statements, divided evenly among algebra, analysis, geometry and topology, as well as 18 high-level nonmathematical (mostly historical) statements. They had four seconds to reflect on each proposition and determine whether it was true, false or meaningless.

The researchers found that in the mathematicians only, listening to math-related statements activated a network involving bilateral intraparietal, dorsal prefrontal, and inferior temporal regions of the brain. This circuitry is usually not associated with areas involved in language processing and semantics, which were activated in both mathematicians and nonmathematicians when they were presented with the nonmathematical statements. “On the contrary,” says study co-author and graduate student Marie Amalric, “our results show that high-level mathematical reflection recycles brain regions associated with an evolutionarily ancient knowledge of number and space.” [Continue reading…]


Brain scans reveal how LSD affects consciousness

Researchers from Imperial College London, working with the Beckley Foundation, have for the first time visualised the effects of LSD on the brain: In a series of experiments, scientists have gained a glimpse into how the psychedelic compound affects brain activity. The team administered LSD (Lysergic acid diethylamide) to 20 healthy volunteers in a specialist research centre and used various leading-edge and complementary brain scanning techniques to visualise how LSD alters the way the brain works.

The findings, published in Proceedings of the National Academy of Sciences (PNAS), reveal what happens in the brain when people experience the complex visual hallucinations that are often associated with the LSD state. They also shed light on the brain changes that underlie the profound altered state of consciousness the drug can produce.

A major finding of the research is the discovery of what happens in the brain when people experience complex dreamlike hallucinations under LSD. Under normal conditions, information from our eyes is processed in a part of the brain at the back of the head called the visual cortex. However, when the volunteers took LSD, many additional brain areas – not just the visual cortex – contributed to visual processing.

Dr Robin Carhart-Harris, from the Department of Medicine at Imperial, who led the research, explained: “We observed brain changes under LSD that suggested our volunteers were ‘seeing with their eyes shut’ – albeit they were seeing things from their imagination rather than from the outside world. We saw that many more areas of the brain than normal were contributing to visual processing under LSD – even though the volunteers’ eyes were closed. Furthermore, the size of this effect correlated with volunteers’ ratings of complex, dreamlike visions.”

The study also revealed what happens in the brain when people report a fundamental change in the quality of their consciousness under LSD.

Dr Carhart-Harris explained: “Normally our brain consists of independent networks that perform separate specialised functions, such as vision, movement and hearing – as well as more complex things like attention. However, under LSD the separateness of these networks breaks down and instead you see a more integrated or unified brain.

“Our results suggest that this effect underlies the profound altered state of consciousness that people often describe during an LSD experience. It is also related to what people sometimes call ‘ego-dissolution’, which means the normal sense of self is broken down and replaced by a sense of reconnection with themselves, others and the natural world. This experience is sometimes framed in a religious or spiritual way – and seems to be associated with improvements in well-being after the drug’s effects have subsided.” [Continue reading…]

Amanda Feilding, executive director of the Beckley Foundation, in an address she will deliver to the Royal Society tomorrow, says: I think Albert Hofmann would have been delighted to have his “Problem child” celebrated at the Royal Society, as in his long lifetime the academic establishment never recognised his great contribution. But for the taboo surrounding this field, he would surely have won the Nobel Prize. That was the beginning of the modern psychedelic age, which has fundamentally changed society.

After the discovery of the effects of LSD, there was a burst of excitement in the medical and therapeutic worlds – over 1000 experimental and clinical studies were undertaken. Then, in the early 60s, LSD escaped from the labs and began to spread into the world at large. Fuelled by its transformational insights, a cultural evolution took place, whose effects are still felt today. It sparked a wave of interest in Eastern mysticism, healthy living, nurturing the environment, individual freedoms and new music and art among many other changes. Then the establishment panicked and turned to prohibition, partly motivated by American youth becoming disenchanted with fighting a war in far-off Vietnam.

Aghast at the global devastation caused by the war on drugs, I set up the Beckley Foundation in 1998. With the advent of brain imaging technology, I realised that one could correlate the subjective experience of altered states of consciousness, brought about by psychedelic substances, with empirical findings. I realised that only through the very best science investigating how psychedelics work in the brain could one overcome the misplaced taboo which had transformed them from the food of the gods to the work of the devil. [Continue reading…]

Just to be clear, as valuable as this research is, it is an exercise in map-making. The map should never be confused with the territory.
