Category Archives: Neuroscience

Stress: The roots of resilience

Nature reports: On a chilly January night in 1986, Elizabeth Ebaugh carried a bag of groceries across the quiet car park of a shopping plaza in the suburbs of Washington DC. She got into her car and tossed the bag onto the empty passenger seat. But as she tried to close the door, she found it blocked by a slight, unkempt man with a big knife. He forced her to slide over and took her place behind the wheel.

The man drove aimlessly along country roads, ranting about his girlfriend’s infidelity and the time he had spent in jail. Ebaugh, a psychotherapist who was 30 years old at the time, used her training to try to calm the man and negotiate her freedom. But after several hours and a few stops, he took her to a motel, watched a pornographic film and raped her. Then he forced her back into the car.

She pleaded with him to let her go, and he said that he would. So when he stopped on a bridge at around 2 a.m. and told her to get out, she thought she was free. Then he motioned for her to jump. “That’s the time where my system, I think, just lost it,” Ebaugh recalls. Succumbing to the terror and exhaustion of the night, she fainted.

Ebaugh awoke in freefall. The man had thrown her, limp and handcuffed, off the bridge four storeys above a river reservoir. When she hit the frigid water, she turned onto her back and started kicking. “At that point, there was no part of me that thought I wasn’t going to make it,” she says.

Few people will experience psychological and physical abuse as terrible as the abuse Ebaugh endured that night. But extreme stress is not unusual. In the United States, an estimated 50–60% of people will experience a traumatic event at some point in their lives, whether through military combat, assault, a serious car accident or a natural disaster. Acute stress triggers an intense physiological response and cements an association in the brain’s circuits between the event and fear. If this association lingers for more than a month, as it does for about 8% of trauma victims, it is considered to be post-traumatic stress disorder (PTSD). The three main criteria for diagnosis are recurring and frightening memories, avoidance of any potential triggers for such memories and a heightened state of arousal.

Ebaugh experienced these symptoms in the months after her attack and was diagnosed with PTSD. But with the help of friends, psychologists and spiritual practices, she recovered. After about five years, she no longer met the criteria for the disorder. She opened her own private practice, married and had a son.

About two-thirds of people diagnosed with PTSD eventually recover. “The vast majority of people actually do OK in the face of horrendous stresses and traumas,” says Robert Ursano, director of the Center for the Study of Traumatic Stress at the Uniformed Services University of the Health Sciences in Bethesda, Maryland. Ursano and other researchers want to know what underlies people’s mental strength. “How does one understand the resilience of the human spirit?” he asks.

Since the 1970s, scientists have learned that several psychosocial factors — such as strong social networks, recalling and confronting fears and an optimistic outlook — help people to recover. But today, scientists in the field are searching for the biological factors involved. Some have found specific genetic variants in humans and in animals that influence an individual’s odds of developing PTSD. Other groups are investigating how the body and brain change during the recovery process and why psychological interventions do not always work. The hope is that this research might lead to therapies that enhance resilience. [Continue reading…]

Note that while this report focuses on advances in the biological understanding of trauma and resilience, Ebaugh’s recovery did not result from conventional medical treatment.

Ebaugh, who now specializes in therapy for trauma victims, agrees that drug-based treatments could aid in recovery. But some people may find relief elsewhere. Religious practices — especially those that emphasize altruism, community and having a purpose in life — have been found to help trauma victims to overcome PTSD. Ebaugh says that yoga, meditation, natural remedies and acupuncture worked for her.


Does self-awareness require a complex brain?

Ferris Jabr writes: The computer, smartphone or other electronic device on which you are reading this article has a rudimentary brain—kind of. It has highly organized electrical circuits that store information and behave in specific, predictable ways, just like the interconnected cells in your brain. On the most fundamental level, electrical circuits and neurons are made of the same stuff—atoms and their constituent elementary particles—but whereas the human brain is conscious, manmade gadgets do not know they exist. Consciousness, most scientists argue, is not a universal property of all matter in the universe. Rather, consciousness is restricted to a subset of animals with relatively complex brains. The more scientists study animal behavior and brain anatomy, however, the more universal consciousness seems to be. A brain as complex as the human brain is definitely not necessary for consciousness. On July 7 this year, a group of neuroscientists convening at Cambridge University signed a document officially declaring that non-human animals, “including all mammals and birds, and many other creatures, including octopuses” are conscious.

Humans are more than just conscious—they are also self-aware. Scientists differ on the difference between consciousness and self-awareness, but here is one common explanation: Consciousness is awareness of one’s body and one’s environment; self-awareness is recognition of that consciousness—not only understanding that one exists, but further understanding that one is aware of one’s existence. Another way of thinking about it: To be conscious is to think; to be self-aware is to realize that you are a thinking being and to think about your thoughts. Presumably, human infants are conscious—they perceive and respond to people and things around them—but they are not yet self-aware. In their first years of life, infants develop a sense of self, learn to recognize themselves in the mirror and to distinguish their own point of view from other people’s perspectives.

Numerous neuroimaging studies have suggested that thinking about ourselves, recognizing images of ourselves and reflecting on our thoughts and feelings—that is, different forms of self-awareness—all involve the cerebral cortex, the outermost, intricately wrinkled part of the brain. The fact that humans have a particularly large and wrinkly cerebral cortex relative to body size supposedly explains why we seem to be more self-aware than most other animals.

One would expect, then, that a man missing huge portions of his cerebral cortex would lose at least some of his self-awareness. Patient R, also known as Roger, defies that expectation. Roger is a 57-year-old man who suffered extensive brain damage in 1980 after a severe bout of herpes simplex encephalitis—inflammation of the brain caused by the herpes virus. The disease destroyed most of Roger’s insular cortex, anterior cingulate cortex (ACC), and medial prefrontal cortex (mPFC), all brain regions thought to be essential for self-awareness. About 10 percent of his insula remains and only one percent of his ACC.

Roger cannot remember much of what happened to him between 1970 and 1980 and he has great difficulty forming new memories. He cannot taste or smell either. But he still knows who he is—he has a sense of self. [Continue reading…]


Rediscovering LSD

Tim Doody writes: At 9:30 in the morning, an architect and three senior scientists—two from Stanford, the other from Hewlett-Packard—donned eyeshades and earphones, sank into comfy couches, and waited for their government-approved dose of LSD to kick in. From across the suite and with no small amount of anticipation, Dr. James Fadiman spun the knobs of an impeccable sound system and unleashed Beethoven’s “Symphony No. 6 in F Major, Op. 68.” Then he stood by, ready to ease any concerns or discomfort.

For this particular experiment, the couched volunteers had each brought along three highly technical problems from their respective fields that they’d been unable to solve for at least several months. In approximately two hours, when the LSD became fully active, they were going to remove the eyeshades and earphones, and attempt to find some solutions. Fadiman and his team would monitor their efforts, insights, and output to determine if a relatively low dose of acid—100 micrograms to be exact—enhanced their creativity.

It was the summer of ’66. And the morning was beginning like many others at the International Foundation for Advanced Study, an inconspicuously named, privately funded facility dedicated to psychedelic drug research, which was located, even less conspicuously, on the second floor of a shopping plaza in Menlo Park, Calif. However, this particular morning wasn’t going to go like so many others had during the preceding five years, when researchers at IFAS (pronounced “if-as”) had legally dispensed LSD. Though Fadiman can’t recall the exact date, this was the day, for him at least, that the music died. Or, perhaps more accurately for all parties involved in his creativity study, it was the day before.

At approximately 10 a.m., a courier delivered an express letter to the receptionist, who in turn quickly relayed it to Fadiman and the other researchers. They were to stop administering LSD, by order of the U.S. Food and Drug Administration. Effective immediately. Dozens of other private and university-affiliated institutions had received similar letters that day.

That research centers once were permitted to explore the further frontiers of consciousness seems surprising to those of us who came of age when a strongly enforced psychedelic prohibition was the norm. They seem not unlike the last generation of children’s playgrounds, mostly eradicated during the ’90s, that were higher and riskier than today’s soft-plastic labyrinths. (Interestingly, a growing number of child psychologists now defend these playgrounds, saying they provided kids with both thrills and profound life lessons that simply can’t be had close to the ground.)

When the FDA’s edict arrived, Fadiman was 27 years old, IFAS’s youngest researcher. He’d been a true believer in the gospel of psychedelics since 1961, when his old Harvard professor Richard Alpert (now Ram Dass) dosed him with psilocybin, the magic in the mushroom, at a Paris café. That day, his narrow, self-absorbed thinking had fallen away like old skin. People would live more harmoniously, he’d thought, if they could access this cosmic consciousness. Then and there he’d decided his calling would be to provide such access to others. He migrated to California (naturally) and teamed up with psychiatrists and seekers to explore how and if psychedelics in general—and LSD in particular—could safely augment psychotherapy, addiction treatment, creative endeavors, and spiritual growth. At Stanford University, he investigated this subject at length through a dissertation—which, of course, the government ban had just dead-ended.

Couldn’t they comprehend what was at stake? Fadiman was devastated and more than a little indignant. However, even if he’d wanted to resist the FDA’s moratorium on ideological grounds, practical matters made compliance impossible: Four people who’d never been on acid before were about to peak.

“I think we opened this tomorrow,” he said to his colleagues.

And so one orchestra after the next wove increasingly visual melodies around the men on the couch. Then shortly before noon, as arranged, they emerged from their cocoons and got to work.

* * *

Over the course of the preceding year, IFAS researchers had dosed a total of 22 other men for the creativity study, including a theoretical mathematician, an electronics engineer, a furniture designer, and a commercial artist. By including only those whose jobs involved the hard sciences (the lack of a single female participant says much about mid-century career options for women), they sought to examine the effects of LSD on both visionary and analytical thinking. Such a group offered an additional bonus: Anything they produced during the study would be subsequently scrutinized by departmental chairs, zoning boards, review panels, corporate clients, and the like, thus providing a real-world, unbiased yardstick for their results.

In surveys administered shortly after their LSD-enhanced creativity sessions, the study volunteers, some of the best and brightest in their fields, sounded like tripped-out neopagans at a backwoods gathering. Their minds, they said, had blossomed and contracted with the universe. They’d beheld irregular but clean geometrical patterns glistening into infinity, felt a rightness before solutions manifested, and even shapeshifted into relevant formulas, concepts, and raw materials.

But here’s the clincher. After their 5HT2A neural receptors simmered down, they remained firm: LSD absolutely had helped them solve their complex, seemingly intractable problems. And the establishment agreed. The 26 men unleashed a slew of widely embraced innovations shortly after their LSD experiences, including a mathematical theorem for NOR gate circuits, a conceptual model of a photon, a linear electron accelerator beam-steering device, a new design for the vibratory microtome, a technical improvement of the magnetic tape recorder, blueprints for a private residency and an arts-and-crafts shopping plaza, and a space probe experiment designed to measure solar properties. Fadiman and his colleagues published these jaw-dropping results and closed shop. [Continue reading…]


The amygdala made me do it?

James Atlas writes: Why are we thinking so much about thinking these days? Near the top of best-seller lists around the country, you’ll find Jonah Lehrer’s “Imagine: How Creativity Works,” followed by Charles Duhigg’s book “The Power of Habit: Why We Do What We Do in Life and Business,” and somewhere in the middle, where it’s held its ground for several months, Daniel Kahneman’s “Thinking, Fast and Slow.” Recently arrived is “Subliminal: How Your Unconscious Mind Rules Your Behavior,” by Leonard Mlodinow.

It’s the invasion of the Can’t-Help-Yourself books.

Unlike most pop self-help books, these are about life as we know it — the one you can change, but only a little, and with a ton of work. Professor Kahneman, who won the Nobel Prize in economic science a decade ago, has synthesized a lifetime’s research in neurobiology, economics and psychology. “Thinking, Fast and Slow” goes to the heart of the matter: How aware are we of the invisible forces of brain chemistry, social cues and temperament that determine how we think and act? Has the concept of free will gone out the window?

These books possess a unifying theme: The choices we make in day-to-day life are prompted by impulses lodged deep within the nervous system. Not only are we not masters of our fate; we are captives of biological determinism. Once we enter the portals of the strange neuronal world known as the brain, we discover that — to put the matter plainly — we have no idea what we’re doing. [Continue reading…]


The trust molecule

Paul J. Zak writes: Could a single molecule — one chemical substance — lie at the very center of our moral lives?

Research that I have done over the past decade suggests that a chemical messenger called oxytocin accounts for why some people give freely of themselves and others are coldhearted louts, why some people cheat and steal and others you can trust with your life, why some husbands are more faithful than others, and why women tend to be nicer and more generous than men. In our blood and in the brain, oxytocin appears to be the chemical elixir that creates bonds of trust not just in our intimate relationships but also in our business dealings, in politics and in society at large.

Known primarily as a female reproductive hormone, oxytocin controls contractions during labor, which is where many women encounter it as Pitocin, the synthetic version that doctors inject in expectant mothers to induce delivery. Oxytocin is also responsible for the calm, focused attention that mothers lavish on their babies while breast-feeding. And it is abundant, too, on wedding nights (we hope) because it helps to create the warm glow that both women and men feel during sex, a massage or even a hug.

Since 2001, my colleagues and I have conducted a number of experiments showing that when someone’s level of oxytocin goes up, he or she responds more generously and caringly, even with complete strangers. As a benchmark for measuring behavior, we relied on the willingness of our subjects to share real money with others in real time. To measure the increase in oxytocin, we took their blood and analyzed it. Money comes in conveniently measurable units, which meant that we were able to quantify the increase in generosity by the amount someone was willing to share. We were then able to correlate these numbers with the increase in oxytocin found in the blood.

Later, to be certain that what we were seeing was true cause and effect, we sprayed synthetic oxytocin into our subjects’ nasal passages — a way to get it directly into their brains. Our conclusion: We could turn the behavioral response on and off like a garden hose. (Don’t try this at home: Oxytocin inhalers aren’t available to consumers in the U.S.)

More strikingly, we found that you don’t need to shoot a chemical up someone’s nose, or have sex with them, or even give them a hug in order to create the surge in oxytocin that leads to more generous behavior. To trigger this “moral molecule,” all you have to do is give someone a sign of trust. When one person extends himself to another in a trusting way—by, say, giving money — the person being trusted experiences a surge in oxytocin that makes her less likely to hold back and less likely to cheat. Which is another way of saying that the feeling of being trusted makes a person more… trustworthy. Which, over time, makes other people more inclined to trust, which in turn…

If you detect the makings of an endless loop that can feed back onto itself, creating what might be called a virtuous circle — and ultimately a more virtuous society — you are getting the idea. [Continue reading…]


Near death, explained

Mario Beauregard writes: In 1991, Atlanta-based singer and songwriter Pam Reynolds felt extremely dizzy, lost her ability to speak, and had difficulty moving her body. A CAT scan showed that she had a giant artery aneurysm—a grossly swollen blood vessel in the wall of her basilar artery, close to the brain stem. If it burst, which could happen at any moment, it would kill her. But the standard surgery to drain and repair it might kill her too.

With no other options, Pam turned to a last, desperate measure offered by neurosurgeon Robert Spetzler at the Barrow Neurological Institute in Phoenix, Arizona. Dr. Spetzler was a specialist and pioneer in hypothermic cardiac arrest—a daring surgical procedure nicknamed “Operation Standstill.” Spetzler would bring Pam’s body down to a temperature so low that she was essentially dead. Her brain would not function, but it would be able to survive longer without oxygen at this temperature. The low temperature would also soften the swollen blood vessels, allowing them to be operated on with less risk of bursting. When the procedure was complete, the surgical team would bring her back to a normal temperature before irreversible damage set in.

Essentially, Pam agreed to die in order to save her life—and in the process had what is perhaps the most famous case of independent corroboration of out-of-body experience (OBE) perceptions on record. This case is especially important because cardiologist Michael Sabom was able to obtain verification from medical personnel regarding crucial details of the surgical intervention that Pam reported. Here’s what happened.

Pam was brought into the operating room at 7:15 a.m., she was given general anesthesia, and she quickly lost conscious awareness. At this point, Spetzler and his team of more than 20 physicians, nurses, and technicians went to work. They lubricated Pam’s eyes to prevent drying, and taped them shut. They attached EEG electrodes to monitor the electrical activity of her cerebral cortex. They inserted small, molded speakers into her ears and secured them with gauze and tape. The speakers would emit repeated 100-decibel clicks—approximately the noise produced by a speeding express train—eliminating outside sounds and measuring the activity of her brainstem.

At 8:40 a.m., the tray of surgical instruments was uncovered, and Robert Spetzler began cutting through Pam’s skull with a special surgical saw that produced a noise similar to a dental drill. At this moment, Pam later said, she felt herself “pop” out of her body and hover above it, watching as doctors worked on her body.

Although she no longer had use of her eyes and ears, she described her observations in terms of her senses and perceptions. “I thought the way they had my head shaved was very peculiar,” she said. “I expected them to take all of the hair, but they did not.” She also described the Midas Rex bone saw (“The saw thing that I hated the sound of looked like an electric toothbrush and it had a dent in it … ”) and the dental-drill sound it made with considerable accuracy.

Meanwhile, Spetzler was removing the outermost membrane of Pamela’s brain, cutting it open with scissors. At about the same time, a female cardiac surgeon was attempting to locate the femoral artery in Pam’s right groin. Remarkably, Pam later claimed to remember a female voice saying, “We have a problem. Her arteries are too small.” And then a male voice: “Try the other side.” Medical records confirm this conversation, yet Pam could not have heard them.

The cardiac surgeon was right—Pam’s blood vessels were indeed too small to accept the abundant blood flow requested by the cardiopulmonary bypass machine, so at 10:50 a.m., a tube was inserted into Pam’s left femoral artery and connected to the cardiopulmonary bypass machine. The warm blood circulated from the artery into the cylinders of the bypass machine, where it was cooled down before being returned to her body. Her body temperature began to fall, and at 11:05 a.m. Pam’s heart stopped. Her EEG brain waves flattened into total silence. A few minutes later, her brain stem became totally unresponsive, and her body temperature fell to a sepulchral 60 degrees Fahrenheit. At 11:25 a.m., the team tilted up the head of the operating table, turned off the bypass machine, and drained the blood from her body. Pamela Reynolds was clinically dead. [Continue reading…]


How psychedelic drugs can help people face death

Lauren Slater writes: Pam Sakuda was 55 when she found out she was dying. Shortly after having a tumor removed from her colon, she heard the doctor’s dreaded words: Stage 4; metastatic. Sakuda was given 6 to 14 months to live. Determined to slow her disease’s insidious course, she ran several miles every day, even during her grueling treatment regimens. By nature upbeat, articulate and dignified, Sakuda — who died in November 2006, outlasting everyone’s expectations by living for four years — was alarmed when anxiety and depression came to claim her after she passed the 14-month mark, her days darkening as she grew closer to her biological demise. Norbert Litzinger, Sakuda’s husband, explained it this way: “When you pass your own death sentence by, you start to wonder: When? When? It got to the point where we couldn’t make even the most mundane plans, because we didn’t know if Pam would still be alive at that time — a concert, dinner with friends; would she still be here for that?” The uncertainty came to claim the couple’s life completely, their anxiety building as they waited for the final day.

As her fears intensified, Sakuda learned of a study being conducted by Charles Grob, a psychiatrist and researcher at Harbor-U.C.L.A. Medical Center who was administering psilocybin — an active component of magic mushrooms — to end-stage cancer patients to see if it could reduce their fear of death. Twenty-two months before she died, Sakuda became one of Grob’s 12 subjects. When the research was completed in 2008 — (and published in the Archives of General Psychiatry last year) — the results showed that administering psilocybin to terminally ill subjects could be done safely while reducing the subjects’ anxiety and depression about their impending deaths.

Grob’s interest in the power of psychedelics to mitigate mortality’s sting is not just the obsession of one lone researcher. Dr. John Halpern, head of the Laboratory for Integrative Psychiatry at McLean Hospital in Belmont Mass., a psychiatric training hospital for Harvard Medical School, used MDMA — also known as ecstasy — in an effort to ease end-of-life anxieties in two patients with Stage 4 cancer. And there are two ongoing studies using psilocybin with terminal patients, one at New York University’s medical school, led by Stephen Ross, and another at Johns Hopkins Bayview Medical Center, where Roland Griffiths has administered psilocybin to 22 cancer patients and is aiming for a sample size of 44. “This research is in its very early stages,” Grob told me earlier this month, “but we’re getting consistently good results.”

Grob and his colleagues are part of a resurgence of scientific interest in the healing power of psychedelics. Michael Mithoefer, for instance, has shown that MDMA is an effective treatment for severe P.T.S.D. Halpern has examined case studies of people with cluster headaches who took LSD and reported their symptoms greatly diminished. And psychedelics have been recently examined as treatment for alcoholism and other addictions.

Despite the promise of these investigations, Grob and other end-of-life researchers are careful about the image they cultivate, distancing themselves as much as possible from the 1960s, when psychedelics were embraced by many and used in a host of controversial studies, most famously the psilocybin project run by Timothy Leary. Grob described the rampant drug use that characterized the ’60s as “out of control” and said of his and others’ current research, “We are trying to stay under the radar. We want to be anti-Leary.” Halpern agreed. “We are serious sober scientists,” he told me. [Continue reading…]

In the following interview, Pam Sakuda talks about her experience prior to taking psilocybin, the effect of her psychedelic experience and its enduring benefit.

Sakuda observes:

The beauty of being able to expand your consciousness, change the way you’re feeling about things… — I don’t think the drug is the cause of these things; I think it is a catalyst that allows you to release your own thoughts and feelings from some place that you’ve bound them very tightly. And so it allows you to open your own mind and consciousness and to release other feelings, explore other ways you might feel about these things.

This idea that a psychedelic or mind-expanding drug functions as a trigger — unlike other psychoactive drugs such as alcohol or opiates — opening, as Aldous Huxley put it, “the doors of perception,” has been attested to by millions of people. Yet for those who have not passed through these doors such expressions cannot really mean much. And this raises the question: is contemporary research being conducted by scientists so sober that they barely understand what their subjects are experiencing?

At least in the case of Charles Grob, the answer is no. Here he describes his own experience after taking Ayahuasca with the UDV in Brazil:


Can you make yourself smarter?

Dan Hurley writes: Early on a drab afternoon in January, a dozen third graders from the working-class suburb of Chicago Heights, Ill., burst into the Mac Lab on the ground floor of Washington-McKinley School in a blur of blue pants, blue vests and white shirts. Minutes later, they were hunkered down in front of the Apple computers lining the room’s perimeter, hoping to do what was, until recently, considered impossible: increase their intelligence through training.

“Can somebody raise their hand,” asked Kate Wulfson, the instructor, “and explain to me how you get points?”

On each of the children’s monitors, there was a cartoon image of a haunted house, with bats and a crescent moon in a midnight blue sky. Every few seconds, a black cat appeared in one of the house’s five windows, then vanished. The exercise was divided into levels. On Level 1, the children earned a point by remembering which window the cat was just in. Easy. But the game is progressive: the cats keep coming, and the kids have to keep watching and remembering.

“And here’s where it gets confusing,” Wulfson continued. “If you get to Level 2, you have to remember where the cat was two windows ago. The time before last. For Level 3, you have to remember where it was three times ago. Level 4 is four times ago. That’s hard. You have to keep track. O.K., ready? Once we start, anyone who talks loses a star.”

So began 10 minutes of a remarkably demanding concentration game. At Level 2, even adults find the task somewhat taxing. Almost no one gets past Level 3 without training. But most people who stick with the game do get better with practice. This isn’t surprising: practice improves performance on almost every task humans engage in, whether it’s learning to read or playing horseshoes.
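The game Wulfson describes is a version of the classic n-back task: at each trial, the player must recall which window the cat occupied n trials earlier. The scoring rule can be sketched in a few lines of Python (the function and trial format here are illustrative, not taken from the study’s actual software):

```python
from collections import deque

def nback_score(trials, n):
    """Score an n-back run.

    Each trial is a (window, answer) pair: the window the cat appears
    in now, and the player's guess for where it was n trials ago.
    One point per correct recall; trials before n stimuli have been
    seen cannot be scored.
    """
    history = deque(maxlen=n)  # sliding window of the last n stimuli
    score = 0
    for window, answer in trials:
        if len(history) == n and answer == history[0]:
            score += 1  # correctly named the window from n trials back
        history.append(window)
    return score

# Level 1 (n=1): the cat appears in windows 3, 1, 4; after the first
# trial the player answers with the previous window each time.
trials = [(3, None), (1, 3), (4, 1)]
print(nback_score(trials, 1))  # → 2
```

The deque with `maxlen=n` is what makes higher levels hard for humans: the player must continuously update a buffer of the last n positions while discarding older ones, which is exactly the working-memory load the article goes on to discuss.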

What is surprising is what else it improved. In a 2008 study, Susanne Jaeggi and Martin Buschkuehl, now of the University of Maryland, found that young adults who practiced a stripped-down, less cartoonish version of the game also showed improvement in a fundamental cognitive ability known as “fluid” intelligence: the capacity to solve novel problems, to learn, to reason, to see connections and to get to the bottom of things. The implication was that playing the game literally makes people smarter.

Psychologists have long regarded intelligence as coming in two flavors: crystallized intelligence, the treasure trove of stored-up information and how-to knowledge (the sort of thing tested on “Jeopardy!” or put to use when you ride a bicycle); and fluid intelligence. Crystallized intelligence grows as you age; fluid intelligence has long been known to peak in early adulthood, around college age, and then to decline gradually. And unlike physical conditioning, which can transform 98-pound weaklings into hunks, fluid intelligence has always been considered impervious to training.

That, after all, is the premise of I.Q. tests, or at least the portion that measures fluid intelligence: we can test you now and predict all sorts of things in the future, because fluid intelligence supposedly sets in early and is fairly immutable. While parents, teachers and others play an essential role in establishing an environment in which a child’s intellect can grow, even Tiger Mothers generally expect only higher grades will come from their children’s diligence — not better brains.

How, then, could watching black cats in a haunted house possibly increase something as profound as fluid intelligence? Because the deceptively simple game, it turns out, targets the most elemental of cognitive skills: “working” memory. What long-term memory is to crystallized intelligence, working memory is to fluid intelligence. Working memory is more than just the ability to remember a telephone number long enough to dial it; it’s the capacity to manipulate the information you’re holding in your head — to add or subtract those numbers, place them in reverse order or sort them from high to low. Understanding a metaphor or an analogy is equally dependent on working memory; you can’t follow even a simple statement like “See Jane run” if you can’t put together how “see” and “Jane” connect with “run.” Without it, you can’t make sense of anything.

Over the past three decades, theorists and researchers alike have made significant headway in understanding how working memory functions. They have developed a variety of sensitive tests to measure it and determine its relationship to fluid intelligence. Then, in 2008, Jaeggi turned one of these tests of working memory into a training task for building it up, in the same way that push-ups can be used both as a measure of physical fitness and as a strength-building task. “We see attention and working memory as the cardiovascular function of the brain,” Jaeggi says. “If you train your attention and working memory, you increase your basic cognitive skills that help you for many different complex tasks.”
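The test Jaeggi adapted is an “n-back” task: items stream past, and the trainee must flag any item that matches the one shown n steps earlier. Her 2008 study used a harder “dual” variant with simultaneous visual and auditory streams; the minimal single-stream sketch below is illustrative only, with hypothetical function names, and is not her actual experimental code.

```python
def n_back_targets(stream, n):
    """Return the indices at which the current item matches the item
    presented n steps earlier -- the moments a trainee should respond."""
    return [i for i in range(n, len(stream)) if stream[i] == stream[i - n]]

def score_responses(stream, n, responses):
    """Compare a trainee's responses (a set of indices) against the true
    targets; return (hits, false_alarms)."""
    targets = set(n_back_targets(stream, n))
    hits = len(targets & responses)          # correct detections
    false_alarms = len(responses - targets)  # responses with no match
    return hits, false_alarms

# Example: with n = 2, positions 2 ('A' matches 'A') and 3 ('B' matches 'B')
# are targets in the stream A B A B C C.
stream = list("ABABCC")
print(n_back_targets(stream, 2))
```

Raising n forces the trainee to hold and continually update more items at once, which is what makes the task a workout for working memory rather than a simple recognition test.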

Jaeggi’s study has been widely influential. Since its publication, others have achieved results similar to Jaeggi’s not only in elementary-school children but also in preschoolers, college students and the elderly. The training tasks generally require only 15 to 25 minutes of work per day, five days a week, and have been found to improve scores on tests of fluid intelligence in as little as four weeks. Follow-up studies linking that improvement to real-world gains in schooling and job performance are just getting under way. But already, people with disorders including attention-deficit hyperactivity disorder (A.D.H.D.) and traumatic brain injury have seen benefits from training. Gains can persist for up to eight months after treatment. [Continue reading…]


There are no images

Tim Parks writes: “There are no images.” This was the first time I noticed Riccardo Manzotti. It was a conference on art and neuroscience. Someone had spoken about the images we keep in our minds. Manzotti seemed agitated. The girl sitting next to me explained that he built robots, was a genius. “There are no images and no representations in our minds,” he insisted. “Our visual experience of the world is a continuum between see-er and seen united in a shared process of seeing.”

I was curious, if only because, as a novelist, I’d always supposed I was dealing in images, imagery. This stuff might have implications. So we had a beer together.

Manzotti has a degree in engineering and another in philosophy. He teaches in the psychology department at IULM University, Milan. The move from engineering to philosophy was prompted by conceptual problems he’d run into when first seeking to build robots. What does it mean that a subject sees an object? “People say the robot stores images of the world through its video camera. It doesn’t, it stores digital data. It has no images.”

Manzotti is what they call a radical externalist: for him consciousness is not safely confined within a brain whose neurons select and store information received from a separate world, appropriating, segmenting, and manipulating various forms of input. Instead, he offers a model he calls Spread Mind: consciousness is a process shared between various otherwise distinct processes which, for convenience’s sake, we have separated out and stabilized in the words subject and object. Language, or at least our modern language, thus encourages a false account of experience.

His favorite example is the rainbow. For the rainbow experience to happen we need sunshine, raindrops, and a spectator. It is not that the sun and the raindrops cease to exist if there is no one there to see them. Manzotti is not a Bishop Berkeley. But unless someone is present at a particular point no colored arch can appear. The rainbow is hence a process requiring various elements, one of which happens to be an instrument of sense perception. It doesn’t exist whole and separate in the world nor does it exist as an acquired image in the head separated from what is perceived (the view held by the “internalists” who account for the majority of neuroscientists); rather, consciousness is spread between sunlight, raindrops, and visual cortex, creating a unique, transitory new whole, the rainbow experience. Or again: the viewer doesn’t see the world; he is part of a world process. [Continue reading…]


Christof Koch on free will, the singularity, and the quest to crack consciousness

John Horgan talks to Christof Koch about his latest book, Consciousness: Confessions of a Romantic Reductionist.

Horgan: You seem to have written your latest book in an attempt to achieve catharsis. Did it work?

Koch: Yes, it did help me resolve a long-brewing conflict between my Catholic upbringing and faith on the one hand and my scientific view of the world on the other. And writing the book also helped me deal with a more personal crisis.

Horgan: Your late friend and colleague Francis Crick once told me that free will was an illusion. Do you share this pessimistic view?

Koch: Well, Francis was right in that the standard conception of free will, which has the soul hovering above the brain and making it “freely” decide this way or that, is an illusion. It simply does not work at the conceptual or empirical level. However, more subtle readings of free will remain, as I discuss in my book. Yet we are all less free than we like to believe. What remains, though, is that I am the principal actor in my life, so I had better take responsibility for my actions.

Horgan: Do you think consciousness will ever be really, totally, explained? Could the “mysterians” [who propose that consciousness is not scientifically solvable] turn out to be right?

Koch: There is no law that states that all phenomena will have an explanation that humans can apprehend or understand. But my gut feeling—based on the past several centuries of progressively more successful explanations of the natural world—is that there will be better and better answers to the puzzle of our existence. We are not condemned to wander forever in some sort of epistemological fog. We will know. We will understand consciousness.

Horgan: Can you tell my readers, briefly, what Integrated Information Theory is and why you think it may be the key to consciousness?

Koch: The Integrated Information Theory of consciousness of Giulio Tononi is a general and quantitative way to approach the problem of consciousness. Ultimately, science needs to explain why some systems—a healthy and awake human brain, for example—give rise to conscious sensations, to experience, while other biological networks—the immune system, for example—do not. We also need to answer questions about consciousness in severely injured brain patients, in new-born babies, in a fetus, in dogs and cats, frogs, bees and flies and in artificial creatures, in iPhones and the internet. And only an information-theoretical account of consciousness is rich and powerful enough to be able to answer those sorts of questions in a meaningful and empirically accessible manner.

This is a clip from an interview — the whole interview is worth watching.


The neurological power of metaphor

Annie Murphy Paul writes: Amid the squawks and pings of our digital devices, the old-fashioned virtues of reading novels can seem faded, even futile. But new support for the value of fiction is arriving from an unexpected quarter: neuroscience.

Brain scans are revealing what happens in our heads when we read a detailed description, an evocative metaphor or an emotional exchange between characters. Stories, this research is showing, stimulate the brain and even change how we act in life.

Researchers have long known that the “classical” language regions, like Broca’s area and Wernicke’s area, are involved in how the brain interprets written words. What scientists have come to realize in the last few years is that narratives activate many other parts of our brains as well, suggesting why the experience of reading can feel so alive. Words like “lavender,” “cinnamon” and “soap,” for example, elicit a response not only from the language-processing areas of our brains, but also those devoted to dealing with smells.

In a 2006 study published in the journal NeuroImage, researchers in Spain asked participants to read words with strong odor associations, along with neutral words, while their brains were being scanned by a functional magnetic resonance imaging (fMRI) machine. When subjects looked at the Spanish words for “perfume” and “coffee,” their primary olfactory cortex lit up; when they saw the words that mean “chair” and “key,” this region remained dark. The way the brain handles metaphors has also received extensive study; some scientists have contended that figures of speech like “a rough day” are so familiar that they are treated simply as words and no more. Last month, however, a team of researchers from Emory University reported in Brain & Language that when subjects in their laboratory read a metaphor involving texture, the sensory cortex, responsible for perceiving texture through touch, became active. Metaphors like “The singer had a velvet voice” and “He had leathery hands” roused the sensory cortex, while phrases matched for meaning, like “The singer had a pleasing voice” and “He had strong hands,” did not.


The bilingual will inherit the earth

Yudhijit Bhattacharjee writes: Speaking two languages rather than just one has obvious practical benefits in an increasingly globalized world. But in recent years, scientists have begun to show that the advantages of bilingualism are even more fundamental than being able to converse with a wider range of people. Being bilingual, it turns out, makes you smarter. It can have a profound effect on your brain, improving cognitive skills not related to language and even shielding against dementia in old age.

This view of bilingualism is remarkably different from the understanding of bilingualism through much of the 20th century. Researchers, educators and policy makers long considered a second language to be an interference, cognitively speaking, that hindered a child’s academic and intellectual development.

They were not wrong about the interference: there is ample evidence that in a bilingual’s brain both language systems are active even when he is using only one language, thus creating situations in which one system obstructs the other. But this interference, researchers are finding out, isn’t so much a handicap as a blessing in disguise. It forces the brain to resolve internal conflict, giving the mind a workout that strengthens its cognitive muscles.


Remembering and forgetting

Jenny Diski reviews Memory: Fragments of a Modern History by Alison Winter: I was in my late thirties before it struck me that there was something odd about the tableau I have in my mind of a familiar living-room, armchair, my father in it, silvery hair, moustache, brown suede lace-ups, and me, aged six or so, sitting on his knee. The layout is correct – I have been back to the block of flats and sat in the living-room of the flat next door, with the same floor plan. Door in the right place; chair I’m sure accurate, a burgundy moquette; patterned carpet; windows looking out onto the brick wall of the offices opposite. My father looks like my father in pictures I have of him. I look like … well, actually I don’t have any pictures of me at that age. But I’m sure I looked pretty much like the memory I can call up at will. It’s not particularly interesting as a memory. Nothing special is happening. It could be a painting, or a photograph, except that I shift about as a child does sitting on her father’s knee. Here’s the thing, though: I can see the entire picture. I can, you may have noticed, see myself. My observation point is from the top of the wall opposite where we are sitting, just below the ceiling, looking down across the room towards me and my father in the chair. I can see me clearly, but what I can’t do is position myself on my father’s knee and become a part of the picture, even though I am in it. I can’t in other words look out at the room from my place on the chair. How can that be a memory? And if it isn’t, what is it? When I think about my childhood, that is invariably one of the first ‘memories’ to spring up, ready and waiting: an untraumatic, slightly-moving picture. It never crossed my mind to notice the anomalous point of view until I was middle-aged. Before then it went without saying that it was a ‘real’ memory. Afterwards, it became an indicator of how false recollection can be.

Memory has always been a worry to us. The thing we feel sure makes us ourselves (no memory, no me) is also something we know to be treacherous, overaccommodating, fugitive: delightfully and fearfully unreliable. We’re stuck inside our own heads with our recollections (or old photos and now videos that have become memories) and there is no way, except sometimes by trusting to the probably unreliable memories of other people, to be absolutely sure that we know what we think we know, or are who we think we are. That anxiety about the accuracy of our grasp of our past selves accounts for the way many other alarming aspects of being alive have become attached to the subject of memory; the theme changes and goes through cycles over time (law, war, politics, medicine, family, sexuality), but always serves to remind us to worry about the consequences of never being quite sure of what we and others remember. People have thrown all the expertise they can find or invent at the problem. We have asked shamans, clairvoyants, hypnotists, historians, scientists, surgeons, law-makers, artists and writers, social psychologists and psychoanalysts to investigate the truth, the facts, the interpretations, so as to reassure us about the mechanism and reliability of remembering, but, as Alison Winter’s deft study of 20th-century memory controversies concludes, we haven’t come close to a definitive answer.

Yet, alongside our anxiety about the trustworthiness of remembering, there is an opposite pull, which is quite as powerful, towards the commonsense feeling that we can all know and trust our own memories; that we know our own minds. Memories when they rise feel reliable. Whatever scientists or other experts do in the laboratory, library or consulting room, individuals, including the experts themselves when off duty, proceed in their everyday lives as if their personal memories are a valid basis for action and interaction, just as physicists continue to walk on apparently solid floors while knowing that they are largely made up of empty space. We would be mad not to. Underlying the compelling feeling that we are our memories is a further common-sense assumption that our entire lives are accurately retained somewhere in the brain ‘bank’ as laid-down memories of our experience, and that we retrieve our lives and selves from an ever expanding stockpile of recollections. Or we can’t, and then that feeling that it’s on the tip of our tongue, or there but just out of range, still encourages us to think that everything we have known or done is in us somewhere, if only our digging equipment were sharper. It’s considered a fault not with recording, but with playback. I was in no doubt about that as a small child. I had a small deep-red memory stone lodged in my left temple, and when I was asked a question at school it moved slowly and steadily from one side of my forehead around to the other. Before it was at the midway point, I tried for the answer, knowing it was in my mind, available to me; but once the stone passed the centre line between my eyes, I stopped worrying about it: I knew I didn’t know the answer, it simply wasn’t ‘there’. I supposed it was how everyone knew what they did and didn’t know. Looking back, it was an efficient filing and retrieval machine that unhandily had vanished by the time I reached secondary school. 
I recall the memory stone with some nostalgia; these days it’s the inefficiency of my mind-machine that exercises me. [Continue reading…]
