Keith Frankish writes: Do you think racial stereotypes are false? Are you sure? I’m not asking if you’re sure whether or not the stereotypes are false, but if you’re sure whether or not you think that they are. That might seem like a strange question. We all know what we think, don’t we?
Most philosophers of mind would agree, holding that we have privileged access to our own thoughts, which is largely immune from error. Some argue that we have a faculty of ‘inner sense’, which monitors the mind just as the outer senses monitor the world. There have been exceptions, however. The mid-20th-century behaviourist philosopher Gilbert Ryle held that we learn about our own minds, not by inner sense, but by observing our own behaviour, and that friends might know our minds better than we do. (Hence the joke: two behaviourists have just had sex and one turns to the other and says: ‘That was great for you, darling. How was it for me?’) And the contemporary philosopher Peter Carruthers proposes a similar view (though for different reasons), arguing that our beliefs about our own thoughts and decisions are the product of self-interpretation and are often mistaken.
Evidence for this comes from experimental work in social psychology. It is well established that people sometimes think they have beliefs that they don’t really have. For example, if offered a choice between several identical items, people tend to choose the one on the right. But when asked why they chose it, they confabulate a reason, saying they thought the item was a nicer colour or better quality. Similarly, if a person performs an action in response to an earlier (and now forgotten) hypnotic suggestion, they will confabulate a reason for performing it. What seems to be happening is that the subjects engage in unconscious self-interpretation. They don’t know the real explanation of their action (a bias towards the right, hypnotic suggestion), so they infer some plausible reason and ascribe it to themselves. They are not aware that they are interpreting, however, and make their reports as if they were directly aware of their reasons. [Continue reading…]
Stephen Cave writes: For centuries, philosophers and theologians have almost unanimously held that civilization as we know it depends on a widespread belief in free will — and that losing this belief could be calamitous. Our codes of ethics, for example, assume that we can freely choose between right and wrong. In the Christian tradition, this is known as “moral liberty” — the capacity to discern and pursue the good, instead of merely being compelled by appetites and desires. The great Enlightenment philosopher Immanuel Kant reaffirmed this link between freedom and goodness. If we are not free to choose, he argued, then it would make no sense to say we ought to choose the path of righteousness.
Today, the assumption of free will runs through every aspect of American politics, from welfare provision to criminal law. It permeates the popular culture and underpins the American dream — the belief that anyone can make something of themselves no matter what their start in life. As Barack Obama wrote in The Audacity of Hope, American “values are rooted in a basic optimism about life and a faith in free will.”
So what happens if this faith erodes?
The sciences have grown steadily bolder in their claim that all human behavior can be explained through the clockwork laws of cause and effect. This shift in perception is the continuation of an intellectual revolution that began about 150 years ago, when Charles Darwin first published On the Origin of Species. Shortly after Darwin put forth his theory of evolution, his cousin Sir Francis Galton began to draw out the implications: If we have evolved, then mental faculties like intelligence must be hereditary. But we use those faculties — which some people have to a greater degree than others — to make decisions. So our ability to choose our fate is not free, but depends on our biological inheritance.
Galton launched a debate that raged throughout the 20th century over nature versus nurture. Are our actions the unfolding effect of our genetics? Or the outcome of what has been imprinted on us by the environment? Impressive evidence accumulated for the importance of each factor. Whether scientists supported one, the other, or a mix of both, they increasingly assumed that our deeds must be determined by something. [Continue reading…]
UC Berkeley reports: What if a map of the brain could help us decode people’s inner thoughts?
UC Berkeley scientists have taken a step in that direction by building a “semantic atlas” that shows in vivid colors and multiple dimensions how the human brain organizes language. The atlas identifies brain areas that respond to words that have similar meanings.
The findings, published in the journal Nature, are based on a brain imaging study that recorded neural activity while study volunteers listened to stories from the “Moth Radio Hour.” They show that at least one-third of the brain’s cerebral cortex, including areas dedicated to high-level cognition, is involved in language processing.
Notably, the study found that different people share similar language maps: “The similarity in semantic topography across different subjects is really surprising,” said study lead author Alex Huth, a postdoctoral researcher in neuroscience at UC Berkeley. [Continue reading…]
It’s a stereotype, but many of us have made the assumption that scientists are a bit rigid and less artistic than others. Artists, on the other hand, are often seen as being less rational than the rest of us. Sometimes described as the left side of the brain versus the right side – or simply logical thinking versus artistic creativity – the two are often seen as polar opposites.
Neuroscience has already shown that everyone uses both sides of the brain when performing any task. And while certain patterns of brain activity have sometimes been linked to artistic or logical thinking, those patterns don’t really explain who is good at what – and why. That’s because the exact interplay of nature and nurture is notoriously difficult to tease out. But if we put the brain aside for a while and just focus on documented ability, is there any evidence to support the logic versus art stereotype?
Psychological research has approached this question by distinguishing between two styles of thinking: convergent and divergent. The emphasis in convergent thinking is on analytical and deductive reasoning, such as that measured in IQ tests. Divergent thinking, however, is more spontaneous and free-flowing. It focuses on novelty and is measured by tasks requiring us to generate multiple solutions for a problem. An example may be thinking of new, innovative uses for familiar objects.
Studies conducted during the 1960s suggested that convergent thinkers were more likely to be good at science subjects at school. Divergent thinking was shown to be more common in the arts and humanities.
However, we are increasingly learning that convergent and divergent thinking styles need not be mutually exclusive. In 2011, researchers assessed 116 final-year UK arts and science undergraduates on measures of convergent and divergent thinking and creative problem solving. The study found no difference in ability between the arts and science groups on any of these measures. Another study reported no significant difference in measures of divergent thinking between arts, natural science and social science undergraduates. Both arts and natural sciences students, however, rated themselves as being more creative than social sciences students did.
Amanda Gefter writes: As we go about our daily lives, we tend to assume that our perceptions — sights, sounds, textures, tastes — are an accurate portrayal of the real world. Sure, when we stop and think about it — or when we find ourselves fooled by a perceptual illusion — we realize with a jolt that what we perceive is never the world directly, but rather our brain’s best guess at what that world is like, a kind of internal simulation of an external reality. Still, we bank on the fact that our simulation is a reasonably decent one. If it wasn’t, wouldn’t evolution have weeded us out by now? The true reality might be forever beyond our reach, but surely our senses give us at least an inkling of what it’s really like.
Not so, says Donald D. Hoffman, a professor of cognitive science at the University of California, Irvine. Hoffman has spent the past three decades studying perception, artificial intelligence, evolutionary game theory and the brain, and his conclusion is a dramatic one: The world presented to us by our perceptions is nothing like reality. What’s more, he says, we have evolution itself to thank for this magnificent illusion, as it maximizes evolutionary fitness by driving truth to extinction.
Getting at questions about the nature of reality, and disentangling the observer from the observed, is an endeavor that straddles the boundaries of neuroscience and fundamental physics. On one side you’ll find researchers scratching their chins raw trying to understand how a three-pound lump of gray matter obeying nothing more than the ordinary laws of physics can give rise to first-person conscious experience. This is the aptly named “hard problem.” [Continue reading…]
Alison Gopnik writes: For 2,000 years, there was an intuitive, elegant, compelling picture of how the world worked. It was called “the ladder of nature.” In the canonical version, God was at the top, followed by angels, who were followed by humans. Then came the animals, starting with noble wild beasts and descending to domestic animals and insects. Human animals followed the scheme, too. Women ranked lower than men, and children were beneath them. The ladder of nature was a scientific picture, but it was also a moral and political one. It was only natural that creatures higher up would have dominion over those lower down.
Darwin’s theory of evolution by natural selection delivered a serious blow to this conception. Natural selection is a blind historical process, stripped of moral hierarchy. A cockroach is just as well adapted to its environment as I am to mine. In fact, the bug may be better adapted — cockroaches have been around a lot longer than humans have, and may well survive after we are gone. But the very word evolution can imply a progression — New Agers talk about becoming “more evolved” — and in the 19th century, it was still common to translate evolutionary ideas into ladder-of-nature terms.
Modern biological science has in principle rejected the ladder of nature. But the intuitive picture is still powerful. In particular, the idea that children and nonhuman animals are lesser beings has been surprisingly persistent. Even scientists often act as if children and animals are defective adult humans, defined by the abilities we have and they don’t. Neuroscientists, for example, sometimes compare brain-damaged adults to children and animals.
We always should have been suspicious of this picture, but now we have no excuse for continuing with it. In the past 30 years, research has explored the distinctive ways in which children as well as animals think, and the discoveries deal the coup de grâce to the ladder of nature. The primatologist Frans de Waal has been at the forefront of the animal research, and its most important public voice. In Are We Smart Enough to Know How Smart Animals Are?, he makes a passionate and convincing case for the sophistication of nonhuman minds. [Continue reading…]
Scientific American reports: Alan Turing, Albert Einstein, Stephen Hawking, John Nash — these “beautiful” minds never fail to enchant the public, but they also remain somewhat elusive. How do some people progress from being able to perform basic arithmetic to grasping advanced mathematical concepts and thinking at levels of abstraction that baffle the rest of the population? Neuroscience has now begun to pin down whether the brain of a math whiz somehow takes conceptual thinking to another level.
Specifically, scientists have long debated whether the basis of high-level mathematical thought is tied to the brain’s language-processing centers — that thinking at such a level of abstraction requires linguistic representation and an understanding of syntax — or to independent regions associated with number and spatial reasoning. In a study published this week in Proceedings of the National Academy of Sciences, a pair of researchers at the INSERM–CEA Cognitive Neuroimaging Unit in France reported that the brain areas involved in math are different from those engaged in equally complex nonmathematical thinking.
The team used functional magnetic resonance imaging (fMRI) to scan the brains of 15 professional mathematicians and 15 nonmathematicians of the same academic standing. While in the scanner the subjects listened to a series of 72 high-level mathematical statements, divided evenly among algebra, analysis, geometry and topology, as well as 18 high-level nonmathematical (mostly historical) statements. They had four seconds to reflect on each proposition and determine whether it was true, false or meaningless.
The researchers found that in the mathematicians only, listening to math-related statements activated a network involving bilateral intraparietal, dorsal prefrontal, and inferior temporal regions of the brain. This circuitry lies outside the brain’s language-processing and semantic areas, which were activated in both mathematicians and nonmathematicians when they were presented with the nonmathematical statements. “On the contrary,” says study co-author and graduate student Marie Amalric, “our results show that high-level mathematical reflection recycles brain regions associated with an evolutionarily ancient knowledge of number and space.” [Continue reading…]
Researchers from Imperial College London, working with the Beckley Foundation, have for the first time visualised the effects of LSD on the brain: In a series of experiments, scientists have gained a glimpse into how the psychedelic compound affects brain activity. The team administered LSD (lysergic acid diethylamide) to 20 healthy volunteers in a specialist research centre and used various leading-edge and complementary brain scanning techniques to visualise how LSD alters the way the brain works.
The findings, published in Proceedings of the National Academy of Sciences (PNAS), reveal what happens in the brain when people experience the complex visual hallucinations that are often associated with the LSD state. They also shed light on the brain changes that underlie the profound altered state of consciousness the drug can produce.
A major finding of the research is the discovery of what happens in the brain when people experience complex dreamlike hallucinations under LSD. Under normal conditions, information from our eyes is processed in a part of the brain at the back of the head called the visual cortex. However, when the volunteers took LSD, many additional brain areas – not just the visual cortex – contributed to visual processing.
Dr Robin Carhart-Harris, from the Department of Medicine at Imperial, who led the research, explained: “We observed brain changes under LSD that suggested our volunteers were ‘seeing with their eyes shut’ – albeit they were seeing things from their imagination rather than from the outside world. We saw that many more areas of the brain than normal were contributing to visual processing under LSD – even though the volunteers’ eyes were closed. Furthermore, the size of this effect correlated with volunteers’ ratings of complex, dreamlike visions.”
The study also revealed what happens in the brain when people report a fundamental change in the quality of their consciousness under LSD.
Dr Carhart-Harris explained: “Normally our brain consists of independent networks that perform separate specialised functions, such as vision, movement and hearing – as well as more complex things like attention. However, under LSD the separateness of these networks breaks down and instead you see a more integrated or unified brain.
“Our results suggest that this effect underlies the profound altered state of consciousness that people often describe during an LSD experience. It is also related to what people sometimes call ‘ego-dissolution’, which means the normal sense of self is broken down and replaced by a sense of reconnection with themselves, others and the natural world. This experience is sometimes framed in a religious or spiritual way – and seems to be associated with improvements in well-being after the drug’s effects have subsided.” [Continue reading…]
Amanda Feilding, executive director of the Beckley Foundation, in an address she will deliver to the Royal Society tomorrow, says: I think Albert Hofmann would have been delighted to have his “problem child” celebrated at the Royal Society, as in his long lifetime the academic establishment never recognised his great contribution. But for the taboo surrounding this field, he would surely have won the Nobel Prize. His discovery of LSD was the beginning of the modern psychedelic age, which has fundamentally changed society.
After the discovery of the effects of LSD, there was a burst of excitement in the medical and therapeutic worlds – over 1000 experimental and clinical studies were undertaken. Then, in the early 60s, LSD escaped from the labs and began to spread into the world at large. Fuelled by its transformational insights, a cultural evolution took place, whose effects are still felt today. It sparked a wave of interest in Eastern mysticism, healthy living, nurturing the environment, individual freedoms and new music and art among many other changes. Then the establishment panicked and turned to prohibition, partly motivated by American youth becoming disenchanted with fighting a war in far-off Vietnam.
Aghast at the global devastation caused by the war on drugs, I set up the Beckley Foundation in 1998. With the advent of brain imaging technology, I realised that one could correlate the subjective experience of altered states of consciousness, brought about by psychedelic substances, with empirical findings. I realised that only through the very best science investigating how psychedelics work in the brain could one overcome the misplaced taboo which had transformed them from the food of the gods to the work of the devil. [Continue reading…]
Just to be clear, as valuable as this research is, it is an exercise in map-making. The map should never be confused with the territory.
Stephen T Asma writes: After you spend time with wild animals in the primal ecosystem where our big brains first grew, you have to chuckle a bit at the reigning view of the mind as a computer. Most cognitive scientists, from the logician Alan Turing to the psychologist James Lloyd McClelland, have been narrowly focused on linguistic thought, ignoring the whole embodied organism. They see the mind as a Boolean algebra binary system of 1 or 0, ‘on’ or ‘off’. This has been methodologically useful, and certainly productive for the artificial intelligence we use in our digital technology, but it merely mimics the biological mind. Computer ‘intelligence’ might be impressive, but it is an impersonation of biological intelligence. The ‘wet’ biological mind is embodied in the squishy, organic machinery of our emotional systems — where action-patterns are triggered when chemical cascades cross volumetric tipping points.
Neuroscience has begun to correct the computational model by showing how our rational, linguistic mind depends on the ancient limbic brain, where emotions hold sway and social skills dominate. In fact, the cognitive mind works only when emotions preferentially tilt our deliberations. The neuroscientist Antonio Damasio worked with patients who had damage in the communication system between the cognitive and emotional brain. The subjects could compute all the informational aspects of a decision in detail, but they couldn’t actually commit to anything. Without clear limbic values (that is, feelings), Damasio’s patients couldn’t decide their own social calendars, prioritise jobs at work, or even make decisions in their own best interest. Our rational mind is truly embodied, and without this emotional embodiment we have no preferences. In order for our minds to go beyond syntax to semantics, we need feelings. And our ancestral minds were rich in feelings before they were adept in computations.
Our neocortex mushroomed to its current size less than one million years ago. That’s a very recent development when we remember that the human clade or group broke off from the great apes in Africa 7 million years ago. That future-looking, tool-wielding, symbol-juggling cortex grew on top of the limbic system. Older still is the reptile brain — the storehouse of innate motivational instincts such as pain-avoidance, exploration, hunger, lust, aggression and so on. Walking around (very carefully) on the Serengeti is like visiting the nursery of our own mind. [Continue reading…]
Most people, however strongly they might hold to what they regard as a scientific view of life — that we are biological organisms, products of evolution, not destined for a supernatural afterlife — nevertheless most likely have a sense of identity that does not easily accommodate the idea that our thoughts and feelings are influenced by bacteria. Indeed, such an idea might sound delusional.
Yet this is what is becoming increasingly clear: the body is not the abode of an elusive self; nor can human experience be reduced to the aggregation of cascades of action potentials producing a neural symphony; rather, this seemingly unitary being is in fact a community in which what we are and what lives inside our body cannot be separated.
Science magazine reports: The 22 men took the same pill for four weeks. When interviewed, they said they felt less daily stress and their memories were sharper. The brain benefits were subtle, but the results, reported at last year’s annual meeting of the Society for Neuroscience, got attention. That’s because the pills were not a precise chemical formula synthesized by the pharmaceutical industry.
The capsules were brimming with bacteria.
In the ultimate PR turnaround, once-dreaded bacteria are being welcomed as health heroes. People gobble them up in probiotic yogurts, swallow pills packed with billions of bugs and recoil from hand sanitizers. Helping us nurture the microbial gardens in and on our bodies has become big business, judging by grocery store shelves.
These bacteria may be doing more than just keeping our bodies healthy: They may be changing our minds. Recent studies have begun turning up tantalizing hints about how the bacteria living in the gut can alter the way the brain works. These findings raise a question with profound implications for mental health: Can we soothe our brains by cultivating our bacteria?
By tinkering with the gut’s bacterial residents, scientists have changed the behavior of lab animals and small numbers of people. Microbial meddling has turned anxious mice bold and shy mice social. Rats inoculated with bacteria from depressed people develop signs of depression themselves. And small studies of people suggest that eating specific kinds of bacteria may change brain activity and ease anxiety. Because gut bacteria can make the very chemicals that brain cells use to communicate, the idea makes a certain amount of sense.
Though preliminary, such results suggest that the right bacteria in your gut could brighten mood and perhaps even combat pernicious mental disorders including anxiety and depression. The wrong microbes, however, might lead in a darker direction.
This perspective might sound a little too much like our minds are being controlled by our bacterial overlords. But consider this: Microbes have been with us since even before we were humans. Human and bacterial cells evolved together, like a pair of entwined trees, growing and adapting into a (mostly) harmonious ecosystem.
Our microbes (known collectively as the microbiome) are “so innate in who we are,” says gastroenterologist Kirsten Tillisch of UCLA. It’s easy to imagine that “they’re controlling us, or we’re controlling them.” But it’s becoming increasingly clear that no one is in charge. Instead, “it’s a conversation that our bodies are having with our microbiome,” Tillisch says. [Continue reading…]
Daniel A Gross writes: Before Josh McDermott was a neuroscientist, he was a club DJ in Boston and Minneapolis. He saw first-hand how music could unite people in sound, rhythm, and emotion. “One of the reasons it was so fun to DJ is that, by playing different pieces of music, you can transform the vibe in a roomful of people,” he says.
With his club days behind him, McDermott now ventures into the effects of sound and music in his lab at the Massachusetts Institute of Technology, where he is an assistant professor in the Department of Brain and Cognitive Sciences. In 2015, he and a post-doctoral colleague, Sam Norman-Haignere, and Nancy Kanwisher, a professor of cognitive neuroscience at MIT, made news by locating a neural pathway activated by music and music alone. McDermott and his colleagues played a total of 165 commonly heard natural sounds to ten subjects willing to be rolled into an fMRI machine to listen to the piped-in sounds. The sounds included a man speaking, a songbird, a car horn, a flushing toilet, and a dog barking. None sparked the same population of neurons as music.
Their discovery that certain neurons have “music selectivity” stirs questions about the role of music in human life. Why do our brains contain music-selective neurons? Could some evolutionary purpose have led to neurons devoted to music? McDermott says the study can’t answer such questions. But he is excited by the fact that it shows music has a unique biological effect. “We presume those neurons are doing something in relation to the analysis of music that allows you to extract structure, following melodies or rhythms, or maybe extract emotion,” he says. [Continue reading…]
Emily Singer writes: Our brains have an extraordinary ability to monitor time. A driver can judge just how much time is left to run a yellow light; a dancer can keep a beat down to the millisecond. But exactly how the brain tracks time is still a mystery. Researchers have defined the brain areas involved in movement, memory, color vision and other functions, but not the ones that monitor time. Indeed, our neural timekeeper has proved so elusive that most scientists assume this mechanism is distributed throughout the brain, with different regions using different monitors to keep track of time according to their needs.
Over the last few years, a handful of researchers have compiled growing evidence that the same cells that monitor an individual’s location in space also mark the passage of time. This suggests that two brain regions — the hippocampus and the entorhinal cortex, both famous for their role in memory and navigation — can also act as a sort of timer.
In research published in November, Howard Eichenbaum, a neuroscientist at Boston University, and collaborators showed that cells in rats that form the brain’s internal GPS system, known as grid cells, are more malleable than had been anticipated. Typically these cells act like a dead-reckoning system, with certain neurons firing when an animal is in a specific place. (The researchers who discovered this shared the Nobel Prize in 2014.) Eichenbaum found that when an animal is kept in place — such as when it runs on a treadmill — the cells keep track of both distance and time. The work suggests that the brain’s sense of space and time are intertwined. [Continue reading…]
The University of Southern California reports: Everyone has at least a few non-negotiable values. These are the things that, no matter what the circumstance, you’d never compromise for any reason – such as “I’d never hurt a child,” or “I’m against the death penalty.”
Real-time brain scans show that when people read stories that deal with these core, protected values, the “default mode network” in their brains activates.
This network was once thought of as just the brain’s autopilot, since it has been shown to be active when you’re not engaged by anything in the outside world – but studies like this one suggest that it’s actually working to find meaning in the narratives.
“The brain is devoting a huge amount of energy to whatever that network is doing. We need to understand why,” said Jonas Kaplan of the USC Dornsife Brain and Creativity Institute. Kaplan was the lead author of the study, which was published on Jan. 7 in the journal Cerebral Cortex.
Kaplan thinks that what matters is not just that the brain is presented with a moral quandary, but that the quandary is presented in a narrative format.
“Stories help us to organize information in a unique way,” he said. [Continue reading…]
Science News reports: Six-month-old babies can spot subtle differences between two monkey faces easy as pie. But 9-month-olds — and adults — are blind to the differences. In a 2002 study of facial recognition, scientists pitted 30 6-month-old babies against 30 9-month-olds and 11 adults. First, the groups got familiar with a series of monkey and human faces that flashed on a screen. Then new faces showed up, interspersed with the already familiar ones. The idea was that the babies would spend more time looking at new faces than at ones they had already seen.
When viewing human faces, all of the observers, babies and adults alike, did indeed spend more time looking at the new people, showing that they could easily pick out familiar human faces. But when it came to recognizing monkey faces, the youngsters blew the competition out of the water. Six-month-old babies recognized familiar monkey faces and stared at the newcomers longer. But both adults and 9-month-old babies were flummoxed, and looked at the new and familiar monkey faces for about the same amount of time.
Babies’ superior visual skills aren’t limited to faces, either. Three- to 4-month-old babies can see differences in lighting that are undetectable to adults. This ephemeral superskill evaporates just months later, scientists reported in December in Current Biology. To test babies’ visual acuity, researchers led by Jiale Yang of Chuo University in Tokyo first generated a series of 3-D pictures of snails. The shiny snails were made to look as though light was hitting them from different places. Like adults, 5- to 6-month-old babies couldn’t spot the lighting differences. But younger babies could, the team found. [Continue reading…]
Science News reports: It’s happened to all of us at one time or another: You’re walking through a crowd, and suddenly a face seems incredibly familiar — so much so that you do a double-take. Who is that? How do you know them? You have no idea, but something about their face nags at you. You know you’ve seen it before.
The reason you know that face is in part because of your perirhinal cortex. This is an area of the brain that helps us to determine familiarity, or whether we have seen an object before. A new study of brain cells in this area finds that firing these neurons at one frequency makes the brain treat novel images as old hat. But firing these same neurons at another frequency can make the old new again.
“Novelty and familiarity are both really important,” says study coauthor Rebecca Burwell, a neuroscientist at Brown University in Providence, R.I. “They are important for learning and memory and decision making.” Finding a cache of food and knowing it is new could be useful for an animal’s future. So is recognizing a familiar place where the pickings were good in the past.
But knowing that something is familiar is not quite the same thing as knowing what that thing is. “You’re in a crowd and you see a familiar face, and there’s a feeling,” Burwell explains. “You can’t identify them, you don’t know where you met them, but there’s a sense of familiarity.” It’s different from recalling where you met the person, or even who the person is. This is a sense at the base of memory. And while scientists knew the perirhinal cortex was involved in this sense of familiarity, how that feeling of new or old was coded in the brain wasn’t fully understood. [Continue reading…]