Category Archives: Attention to the Unseen
Neuroscience vs philosophy: Taking aim at free will
The leading science journal, Nature, reports: The experiment helped to change John-Dylan Haynes’s outlook on life. In 2007, Haynes, a neuroscientist at the Bernstein Center for Computational Neuroscience in Berlin, put people into a brain scanner in which a display screen flashed a succession of random letters. He told them to press a button with either their right or left index fingers whenever they felt the urge, and to remember the letter that was showing on the screen when they made the decision. The experiment used functional magnetic resonance imaging (fMRI) to reveal brain activity in real time as the volunteers chose to use their right or left hands. The results were quite a surprise.
“The first thought we had was ‘we have to check if this is real’,” says Haynes. “We came up with more sanity checks than I’ve ever seen in any other study before.”
The conscious decision to push the button was made about a second before the actual act, but the team discovered that a pattern of brain activity seemed to predict that decision by as many as seven seconds. Long before the subjects were even aware of making a choice, it seems, their brains had already decided.
As humans, we like to think that our decisions are under our conscious control — that we have free will. Philosophers have debated that concept for centuries, and now Haynes and other experimental neuroscientists are raising a new challenge. They argue that consciousness of a decision may be a mere biochemical afterthought, with no influence whatsoever on a person’s actions. According to this logic, they say, free will is an illusion. “We feel we choose, but we don’t,” says Patrick Haggard, a neuroscientist at University College London.
You may have thought you decided whether to have tea or coffee this morning, for example, but the decision may have been made long before you were aware of it. For Haynes, this is unsettling. “I’ll be very honest, I find it very difficult to deal with this,” he says. “How can I call a will ‘mine’ if I don’t even know when it occurred and what it has decided to do?”
Philosophers aren’t convinced that brain scans can demolish free will so easily. Some have questioned the neuroscientists’ results and interpretations, arguing that the researchers have not quite grasped the concept that they say they are debunking. Many more don’t engage with scientists at all. “Neuroscientists and philosophers talk past each other,” says Walter Glannon, a philosopher at the University of Calgary in Canada, who has interests in neuroscience, ethics and free will.
There are some signs that this is beginning to change. This month, a raft of projects will get under way as part of Big Questions in Free Will, a four-year, US$4.4-million programme funded by the John Templeton Foundation in West Conshohocken, Pennsylvania, which supports research bridging theology, philosophy and natural science. Some say that, with refined experiments, neuroscience could help researchers to identify the physical processes underlying conscious intention and to better understand the brain activity that precedes it. And if unconscious brain activity could be found to predict decisions perfectly, the work really could rattle the notion of free will. “It’s possible that what are now correlations could at some point become causal connections between brain mechanisms and behaviours,” says Glannon. “If that were the case, then it would threaten free will, on any definition by any philosopher.” [Continue reading…]
Anesthesia may leave patients conscious — and finally show consciousness in the brain
Vaughan Bell writes: During surgery, a patient awakes but is unable to move. She sees people dressed in green who talk in strange slowed-down voices. There seem to be tombstones nearby and she assumes she is at her own funeral. Slipping back into oblivion, she awakes later in her hospital bed, troubled by her frightening experiences.
These are genuine memories from a patient who regained awareness during an operation. Her experiences are clearly a distorted version of reality but crucially, none of the medical team was able to tell she was conscious.
This is because medical tests for consciousness are based on your behavior. Essentially, someone talks to you or prods you, and if you don’t respond, you’re assumed to be out cold. Consciousness, however, is not defined as a behavioral response but as a mental experience. If I were completely paralyzed, I could still be conscious and I could still experience the world, even if I were unable to communicate this to anyone else.
This is obviously a pressing medical problem. Doctors don’t want people to regain awareness during surgery because the experiences may be frightening and even traumatic. But on a purely scientific level, these fine-grained alterations in our awareness may help us understand the neural basis of consciousness. If we could understand how these drugs alter the brain and could see when people flicker into consciousness, we could perhaps understand what circuits are important for consciousness itself. Unfortunately, surgical anesthesia is not an ideal way of testing this: several drugs are often used at once, and some can affect memory, so a patient could become conscious during surgery but not remember it afterwards, which makes reliable retrospective comparisons between brain function and awareness difficult.
An attempt to solve this problem was behind an attention-grabbing new study, led by Valdas Noreika from the University of Turku in Finland, that investigated the extent to which common surgical anesthetics can leave us behaviorally unresponsive but subjectively conscious. [Continue reading…]
Why you don’t really have free will
Professor Jerry A. Coyne, from the Department of Ecology and Evolution at The University of Chicago, writes: Perhaps you’ve chosen to read this essay after scanning other articles on this website. Or, if you’re in a hotel, maybe you’ve decided what to order for breakfast, or what clothes you’ll wear today.
You haven’t. You may feel like you’ve made choices, but in reality your decision to read this piece, and whether to have eggs or pancakes, was determined long before you were aware of it — perhaps even before you woke up today. And your “will” had no part in that decision. So it is with all of our other choices: not one of them results from a free and conscious decision on our part. There is no freedom of choice, no free will. And those New Year’s resolutions you made? You had no choice about making them, and you’ll have no choice about whether you keep them.
The debate about free will, long the purview of philosophers alone, has been given new life by scientists, especially neuroscientists studying how the brain works. And what they’re finding supports the idea that free will is a complete illusion.
The issue of whether we have free will is not an arcane academic debate about philosophy, but a critical question whose answer affects us in many ways: how we assign moral responsibility, how we punish criminals, how we feel about our religion, and, most important, how we see ourselves — as autonomous or automatons. [Continue reading…]
Avian mathematicians and musicians
Discovery News reports: Pigeons may be ubiquitous, but they’re also brainy, according to a new study that found these birds are on par with primates when it comes to numerical competence.
The study, published in the latest issue of the journal Science, discovered that pigeons can discriminate between different amounts of number-like objects, order pairs, and learn abstract mathematical rules. Aside from humans, only rhesus monkeys have exhibited equivalent skills.
“It would be fair to say that, even among birds, pigeons are not thought to be the sharpest crayon in the box,” lead author Damian Scarf told Discovery News. “I think that this ability may be widespread among birds. There is already clear evidence that it is widespread among primates.”
The neural pathways that allow pigeons to do math might be connected to the ones that enable the cockatoos below to dance. Both skills hinge on the ability to conceptualize uniform units — the most abstract representations of space and time.
Turning war into ‘peace’ by deleting and replacing memories
Imagine soldiers who couldn’t be traumatized; who could engage in the worst imaginable brutality and not only remember nothing, but remember something else, completely benign. That might just sound like dystopian science fiction, but ongoing research is laying the foundations to turn this into reality.
Alison Winter, author of Memory: Fragments of a Modern History, from which the following is adapted, writes:
The first speculative steps are now being taken in an attempt to develop techniques of what is being called “therapeutic forgetting.” Military veterans suffering from PTSD are currently serving as subjects in research projects on using propranolol to mitigate the effects of wartime trauma. Some veterans’ advocates criticize the project because they see it as a “metaphor” for how the “administration, Defense Department, and Veterans Affairs officials, not to mention many Americans, are approaching the problem of war trauma during the Iraq experience.”
The argument is that terrible combat experiences are “part of a soldier’s life” and are “embedded in our national psyche, too,” and that these treatments reflect an illegitimate wish to forget the pain suffered by war veterans. Tara McKelvey, who researched veterans’ attitudes to the research project, quoted one veteran as disapproving of the project on the grounds that “problems have to be dealt with.” This comment came from a veteran who spends time “helping other veterans deal with their ghosts, and he gives talks to high school and college students about war.” McKelvey’s informant felt that the definition of who he was “comes from remembering the pain and dealing with it — not from trying to forget it.” The assumption here is that treating the pain of war pharmacologically is equivalent to minimizing, discounting, disrespecting and ultimately setting aside altogether the sacrifices made by veterans, and by society itself. People who objected to the possibility of altering emotional memories with drugs were concerned that this amounted to avoiding one’s true problems instead of “dealing” with them. An artificial record of the individual past would by the same token contribute to a skewed collective memory of the costs of war.
In addition to the work with veterans, there have been pilot studies with civilians in emergency rooms. In 2002, psychiatrist Roger Pitman of Harvard took a group of 31 volunteers from the emergency rooms at Massachusetts General Hospital, all people who had suffered some traumatic event, and for 10 days treated some with a placebo and the rest with propranolol [a beta blocker]. Those who received propranolol later had no stressful physical response to reminders of the original trauma, while almost half of the others did. Should those E.R. patients have been worried about the possible legal implications of taking the drug? Could one claim to be as good a witness once one’s memory had been altered by propranolol? And in a civil suit, could the defense argue that less harm had been done, since the plaintiff had avoided much of the emotional damage that an undrugged victim would have suffered? Attorneys did indeed ask about the implications for witness testimony, damages, and more generally, a devaluation of harm to victims of crime. One legal scholar framed this as a choice between protecting memory “authenticity” (a category he used with some skepticism) and “freedom of memory.” Protecting “authenticity” could not be done without sacrificing our freedom to control our own minds, including our acts of recall.
The anxiety provoked by the idea of “memory dampening” is so intriguing that even the President’s Council on Bioethics, convened by President George W. Bush in his first term, thought the issue important enough to reflect on it alongside discussions of cloning and stem-cell research. Editing memories could “disconnect people from reality or their true selves,” the council warned. While it did not give a definition of “selfhood,” it did give examples of how such techniques could warp us by “falsifying our perception and understanding of the world.” The potential technique “risks making shameful acts seem less shameful, or terrible acts less terrible, than they really are.”
Meanwhile, David DiSalvo notes ten brain science studies from 2011 including this:
Brain Implant Enables Memories to be Recorded and Played Back
Neural prosthetics had a big year in 2011, and no development in this area was bigger than an implant designed to record and replay memories.
Researchers had a group of rats with the implant perform a simple memory task: get a drink of water by hitting one lever in a cage, then, after a distraction, hitting another. They had to remember which lever they’d already pushed to know which one to push the second time. As the rats did this memory task, electrodes in the implants recorded signals between two areas of their brains involved in storing new information in long-term memory.
The researchers then gave the rats a drug that kept those brain areas from communicating. The rats still knew they had to press one lever then the other to get water, but couldn’t remember which lever they’d already pressed. When researchers played back the neural signals they’d recorded earlier via the implants, the rats again remembered which lever they had hit, and pressed the other one. When researchers played back the signals in rats not on the drug (thus amplifying their normal memory) the rats made fewer mistakes and remembered which lever they’d pressed even longer.
The bottom line: This is ground-level research demonstrating that neural signals involved in memory can be recorded and replayed. Progress from rats to humans will take many years, but even knowing that it’s plausible is remarkable.
Honeybee democracy
Joseph Castro reports: Honeybees choose new nest sites by essentially head-butting each other into a consensus, a new study shows.
When scout bees find a new potential home, they do a waggle dance to broadcast to other scout bees where the nest is and how suitable it is for the swarm. The nest with the most support in the end becomes the swarm’s new home.
But new research shows another layer of complexity to the decision-making process: The bees deliver “stop signals” via head butts to scouts favoring a different site. With enough head butts, a scout bee will stop its dance, decreasing the apparent support for that particular nest.
This process of excitation (waggle dances) and inhibition (head butts) in the bee swarm parallels how a complex brain makes decisions using neurons, the researchers say.
“Other studies have suggested that there could be a close relationship between collective decision-making in a swarm of bees and the brain,” said Iain Couzin, an evolutionary biologist at Princeton University, who was not involved in the study.
“But this [study] takes it to a new level by showing that a fundamental process that’s very important in human decision-making is similarly important to honeybee decision-making.”
When honeybees outgrow their hive, several thousand workers leave the nest with their mother queen to establish a new colony. A few hundred of the oldest, most experienced bees, called scout bees, fly out to find that new nest.
“They then run a popularity contest with a dance party,” said Thomas Seeley, a biologist at Cornell University and lead author of the new study. When a scout bee finds a potential nest site, it advertises the site with a waggle dance, which points other scouts to the nest’s location. The bees carefully adjust how long they dance based on the quality of the site.
“We thought it was just a race to see which group of scout bees could attract a threshold number of bees,” Seeley told LiveScience.
But in 2009, Seeley learned that there might be more to the story. He discovered that a bee could produce a stop-dancing signal by butting its head against a dancer and making a soft beep sound with a flight muscle. An accumulation of these head butts would eventually cause the bee to stop dancing. Seeley observed that the colony used these stop signals to reduce the number of bees recruited to forage from a perilous food source, but he wondered if the bees also used the head butts during nest hunting.
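The dynamic the researchers describe lends itself to a toy simulation. The sketch below is purely illustrative and is not Seeley’s model: the site qualities, stop-signal strength and quorum threshold are invented numbers. What it preserves is the structure reported above: dancers recruit uncommitted scouts in proportion to a site’s quality (excitation), head butts from the rival camp silence dancers (cross-inhibition), and the contest ends when one camp reaches a quorum.

```python
import random

def simulate_swarm(quality_a=0.7, quality_b=0.5, n_scouts=100,
                   stop_signal_rate=0.3, quorum=0.8,
                   max_steps=10000, seed=1):
    """Toy cross-inhibition model of nest-site choice.

    All parameters are illustrative, not measurements from the study.
    """
    rng = random.Random(seed)
    committed = {"A": 0, "B": 0}   # scouts currently dancing for each site
    uncommitted = n_scouts

    for _ in range(max_steps):
        # Excitation: dancers recruit uncommitted scouts in proportion
        # to their own numbers and the advertised site's quality.
        for site, quality in (("A", quality_a), ("B", quality_b)):
            recruits = sum(rng.random() < quality * committed[site] / n_scouts
                           for _ in range(uncommitted))
            committed[site] += recruits
            uncommitted -= recruits

        # Cross-inhibition: head butts ("stop signals") from the rival
        # camp knock dancers back into the uncommitted pool.
        for site, rival in (("A", "B"), ("B", "A")):
            silenced = sum(rng.random() < stop_signal_rate * committed[rival] / n_scouts
                           for _ in range(committed[site]))
            committed[site] -= silenced
            uncommitted += silenced

        # A trickle of independent discovery, so dancing can start at all.
        if uncommitted and rng.random() < 0.1:
            site = "A" if rng.random() < quality_a / (quality_a + quality_b) else "B"
            committed[site] += 1
            uncommitted -= 1

        # Quorum: the swarm "decides" once one camp is large enough.
        for site in ("A", "B"):
            if committed[site] >= quorum * n_scouts:
                return site, dict(committed)

    return None, dict(committed)   # no consensus within the time limit

if __name__ == "__main__":
    winner, tally = simulate_swarm()
    print("Swarm chose site", winner, tally)
```

The design mirrors mutual-inhibition accumulator models of decision-making, in which two pools of neurons excite themselves and suppress each other until one crosses a threshold, which is the parallel Couzin and Seeley point to.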
Thomas Seeley talks about Honeybee Democracy:
The Internet hasn’t changed our concept of truth as much as some theorists claim
Evgeny Morozov reviews Too Big to Know by David Weinberger: Weinberger argues that on the Internet facts are born “linked,” pointing to other facts and opinions. With time, other entities start linking to them, creating digital traces that can be used to scrutinize and even revise original facts.
On paper, facts look firm and reliable; online, they are always in flux. Furthermore, the Internet, unlike your local library, is infinite. Librarians choose which books to acquire; books that don’t make the cut become invisible. Not so with search engines. What they filter out doesn’t disappear — it stays in the background. New filters, Weinberger claims, don’t “filter out” but “filter forward.”
This triumph of the “networked” and the “hyperlinked” unsettles everything: facts (those who think that Barack Obama was born in Kenya also have facts), books (they are unable to contain “linked” and infinite knowledge) and even knowledge itself (it’s too obsessed with theories and consensus-seeking). Thus, “knowledge has become a network with the characteristics — for better and for worse — of the Net.”
This is an ambitious thesis. It’s also not original. “The Postmodern Condition: A Report on Knowledge,” a famous 1979 book by the French philosopher Jean-François Lyotard, makes a similar claim about computerization. “Along with the hegemony of computers comes a certain logic, and therefore a certain set of prescriptions determining which statements are accepted as ‘knowledge statements,’” wrote Lyotard. Weinberger doesn’t mention Lyotard by name but claims that “the Internet showed us that the postmodernists were right.”
Too bad, then, that his argument is riddled with familiar postmodernist fallacies, the chief of which is his lack of discipline in using loaded terms like “knowledge.” This term means different things in philosophy and information science; the truth of a proposition matters in the former but not necessarily in the latter. Likewise, sociologists of knowledge trace the social life of facts, often by studying how and why people come to regard certain claims as “knowledge.” The truth of such claims is often irrelevant.
For epistemologists, however, to say that “S knows that p,” three conditions must be met: p must be true; S must believe that p; and S must be justified in believing that p. One can’t “know” that “Barack Obama was born in Kenya” because it’s untrue. On the other hand, to “know” that “Barack Obama was born in Hawaii,” one needs to have justification. A copy of his birth certificate would do. The hyperlink nirvana has not rid us of the justification requirement. The Internet may have altered the context in which justification is obtained — one can now link to Obama’s birth certificate — but it hasn’t changed what counts as “knowledge.”
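In the schematic shorthand epistemologists use (a standard gloss, not notation from Weinberger or Morozov), the tripartite analysis reads:

\[ K_S(p) \iff p \;\wedge\; B_S(p) \;\wedge\; J_S(p) \]

where \(K_S(p)\) stands for “S knows that p”, \(B_S(p)\) for “S believes that p”, and \(J_S(p)\) for “S is justified in believing that p”. The Kenya claim fails the first conjunct because it is false; a lucky, unsupported guess about Hawaii would fail the third.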
Happy New Year?
Tali Sharot, author of The Optimism Bias: Why we’re wired to look on the bright side (this book is not available in the U.S. yet), writes: We like to think of ourselves as rational creatures. We watch our backs, weigh the odds, pack an umbrella. But both neuroscience and social science suggest that we are more optimistic than realistic. On average, we expect things to turn out better than they wind up being. People hugely underestimate their chances of getting divorced, losing their job or being diagnosed with cancer; expect their children to be extraordinarily gifted; envision themselves achieving more than their peers; and overestimate their likely life span (sometimes by 20 years or more).
The belief that the future will be much better than the past and present is known as the optimism bias. It abides in every race, region and socioeconomic bracket. Schoolchildren playing when-I-grow-up are rampant optimists, but so are grown-ups: a 2005 study found that adults over 60 are just as likely to see the glass half full as young adults.
You might expect optimism to erode under the tide of news about violent conflicts, high unemployment, tornadoes and floods and all the threats and failures that shape human life. Collectively we can grow pessimistic – about the direction of our country or the ability of our leaders to improve education and reduce crime. But private optimism, about our personal future, remains incredibly resilient. A survey conducted in 2007 found that while 70% thought families in general were less successful than in their parents’ day, 76% of respondents were optimistic about the future of their own family.
Overly positive assumptions can lead to disastrous miscalculations – make us less likely to get health checkups, apply sunscreen or open a savings account, and more likely to bet the farm on a bad investment. But the bias also protects and inspires us: it keeps us moving forward rather than to the nearest high-rise ledge. Without optimism, our ancestors might never have ventured far from their tribes and we might all be cave dwellers, still huddled together and dreaming of light and heat.
To make progress, we need to be able to imagine alternative realities – better ones – and we need to believe that we can achieve them. Such faith helps motivate us to pursue our goals. Optimists in general work longer hours and tend to earn more. Economists at Duke University found that optimists even save more. And although they are not less likely to divorce, they are more likely to remarry – an act that is, as Samuel Johnson wrote, the triumph of hope over experience.
Even if that better future is often an illusion, optimism has clear benefits in the present. Hope keeps our minds at ease, lowers stress and improves physical health. Researchers studying heart-disease patients found that optimists were more likely than non-optimistic patients to take vitamins, eat low-fat diets and exercise, thereby reducing their overall coronary risk. A study of cancer patients revealed that pessimistic patients under 60 were more likely to die within eight months than non-pessimistic patients of the same initial health status and age.
In fact, a growing body of scientific evidence points to the conclusion that optimism may be hardwired by evolution into the human brain. The science of optimism, once scorned as an intellectually suspect province of pep rallies and smiley faces, is opening a new window on the workings of human consciousness. What it shows could fuel a revolution in psychology, as the field comes to grips with accumulating evidence that our brains aren’t just stamped by the past. They are constantly being shaped by the future.
Hardwired for hope?
I would have liked to tell you that my work on optimism grew out of a keen interest in the positive side of human nature. The reality is that I stumbled onto the brain’s innate optimism by accident. After living through 9/11 in New York City, I had set out to investigate people’s memories of the terrorist attacks. I was intrigued by the fact that people felt their memories were as accurate as a videotape, while often they were filled with errors. A survey conducted around the country showed that 11 months after the attacks, individuals’ recollections of their experience that day were consistent with their initial accounts (given in September 2001) only 63% of the time. They were also poor at remembering details of the event, such as the names of the airline carriers. Where did these mistakes in memory come from?
Scientists who study memory proposed an intriguing answer: memories are susceptible to inaccuracies partly because the neural system responsible for remembering episodes from our past might not have evolved for memory alone. Rather, the core function of the memory system could in fact be to imagine the future – to enable us to prepare for what has yet to come. The system is not designed to perfectly replay past events, the researchers claimed. It is designed to flexibly construct future scenarios in our minds. As a result, memory also ends up being a reconstructive process, and occasionally, details are deleted and others inserted. [Continue reading…]
One world on Christmas Day
The best Christian slogan I know comes from the charity, Christian Aid: We believe in life before death.
Keep that in mind when gazing into the life-sustaining sky that from below looks so vast, yet from above is revealed to be wafer thin — all that stands between us and a lifeless void.
Sufjan Stevens — Michigan
Beissoul and Sophie — Lithuania
A murmuration of starlings — Ireland
Murmuration from Sophie Windsor Clive on Vimeo.
Wade Davis: Dreams from endangered cultures
Antonio Damasio: The quest to understand consciousness
A new career for Mike Tyson?
Noosphere — the sphere of human thought
Video — Noosphere by Tatiana Plakhova
Music — Singtree by Solar Quest
Chaotic order
A Thanksgiving message
As Brother David Steindl-Rast says, whether one is religious or secular, it’s hard to argue against gratefulness.
How much gratefulness we feel has little to do with whether life seems abundant or filled with hardship. On the contrary, it hinges on the degree to which we are prey to the delusion that we are self-made, or instead have discovered that life is a process in which we endlessly stumble into the unknown.
Let’s never forget what a wondrous planet we live on — a place where staggering beauty can suddenly sweep up from the horizon.
Murmuration from Sophie Windsor Clive on Vimeo.
Earth from space
My summer at an Indian call center
Andrew Marantz writes:
Indian BPOs [business process outsourcing jobs] work with firms from dozens of countries, but most call-center jobs involve talking to Americans. New hires must be fluent in English, but many have never spoken to a foreigner. So to earn their headsets, they must complete classroom training lasting from one week to three months. First comes voice training, an attempt to “neutralize” pronunciation and diction by eliminating the round vowels of Indian English. Speaking Hindi on company premises is often a fireable offense.
Next is “culture training,” in which trainees memorize colloquialisms and state capitals, study clips of Seinfeld and photos of Walmarts, and eat in cafeterias serving paneer burgers and pizza topped with lamb pepperoni. Trainers aim to impart something they call “international culture”—which is, of course, no culture at all, but a garbled hybrid of Indian and Western signifiers designed to be recognizable to everyone and familiar to no one. The result is a comically botched translation—a multibillion dollar game of telephone. “The most marketable skill in India today,” the Guardian wrote in 2003, “is the ability to abandon your identity and slip into someone else’s.”