Wired: It’s a question that’s perplexed philosophers for centuries and scientists for decades: Where does consciousness come from? We know it exists, at least in ourselves. But how it arises from chemistry and electricity in our brains is an unsolved mystery.
Neuroscientist Christof Koch, chief scientific officer at the Allen Institute for Brain Science, thinks he might know the answer. According to Koch, consciousness arises within any sufficiently complex, information-processing system. All animals, from humans on down to earthworms, are conscious; even the internet could be. That’s just the way the universe works.
“The electric charge of an electron doesn’t arise out of more elemental properties. It simply has a charge,” says Koch. “Likewise, I argue that we live in a universe of space, time, mass, energy, and consciousness arising out of complex systems.”
What Koch proposes is a scientifically refined version of an ancient philosophical doctrine called panpsychism — and, coming from someone else, it might sound more like spirituality than science. But Koch has devoted the last three decades to studying the neurological basis of consciousness. His work at the Allen Institute now puts him at the forefront of the BRAIN Initiative, the massive new effort to understand how brains work, which will begin next year.
Koch’s insights have been detailed in dozens of scientific articles and a series of books, including last year’s Consciousness: Confessions of a Romantic Reductionist. WIRED talked to Koch about his understanding of this age-old question. [Continue reading...]
The Washington Post reports: While we are asleep, our bodies may be resting, but our brains are busy taking out the trash.
A new study has found that the cleanup system in the brain, responsible for flushing out toxic waste products that cells produce with daily use, goes into overdrive in mice that are asleep. The cells even shrink in size to make for easier cleaning of the spaces around them.
Scientists say this nightly self-clean by the brain provides a compelling biological reason for the restorative power of sleep.
“Sleep puts the brain in another state where we clean out all the byproducts of activity during the daytime,” said study author and University of Rochester neurosurgeon Maiken Nedergaard. Those byproducts include beta-amyloid protein, clumps of which form plaques found in the brains of Alzheimer’s patients.
Staying up all night could prevent the brain from getting rid of these toxins as efficiently, and explain why sleep deprivation has such strong and immediate consequences. Too little sleep causes mental fog, crankiness, and increased risks of migraine and seizure. Rats deprived of all sleep die within weeks.
Although as essential and universal to the animal kingdom as air and water, sleep is a riddle that has baffled scientists and philosophers for centuries. Drifting off into a reduced consciousness seems evolutionarily foolish, particularly for those creatures in danger of getting eaten or attacked. [Continue reading...]
Steven Shapin writes: In the movie “Groundhog Day,” the TV weatherman Phil Connors finds himself living the same day again and again. This has its advantages, as he has hundreds of chances to get things right. He can learn to speak French, to sculpt ice, to play jazz piano, and to become the kind of person with whom his beautiful colleague Rita might fall in love. But it’s a torment, too. An awful solitude flows from the fact that he’s the only one in Punxsutawney, Pennsylvania, who knows that something has gone terribly wrong with time. Nobody else seems to have any memory of all the previous iterations of the day. What is a new day for Rita is another of the same for Phil. Their realities are different—what passes between them in Phil’s world leaves no trace in hers—as are their senses of selfhood: Phil knows Rita as she cannot know him, because he knows her day after day after day, while she knows him only today. Time, reality, and identity are each curated by memory, but Phil’s and Rita’s memories work differently. From Phil’s point of view, she, and everyone else in Punxsutawney, is suffering from amnesia.
Amnesia comes in distinct varieties. In “retrograde amnesia,” a movie staple, victims are unable to retrieve some or all of their past knowledge — Who am I? Why does this woman say that she’s my wife? — but they can accumulate memories for everything that they experience after the onset of the condition. In the less cinematically attractive “anterograde amnesia,” memory of the past is more or less intact, but those who suffer from it can’t lay down new memories; every person encountered every day is met for the first time. In extremely unfortunate cases, retrograde and anterograde amnesia can occur in the same individual, who is then said to suffer from “transient global amnesia,” a condition that is, thankfully, temporary. Amnesias vary in their duration, scope, and originating events: brain injury, stroke, tumors, epilepsy, electroconvulsive therapy, and psychological trauma are common causes, while drug and alcohol use, malnutrition, and chemotherapy may play a part.
There isn’t a lot that modern medicine can do for amnesiacs. If cerebral bleeding or clots are involved, these may be treated, and occupational and cognitive therapy can help in some cases. Usually, either the condition goes away or amnesiacs learn to live with it as best they can — unless the notion of learning is itself compromised, along with what it means to have a life. Then, a few select amnesiacs disappear from systems of medical treatment and reappear as star players in neuroscience and cognitive psychology.
No star ever shone more brightly in these areas than Henry Gustave Molaison, a patient who, for more than half a century, until his death, in 2008, was known only as H.M., and who is now the subject of a book, “Permanent Present Tense” (Basic), by Suzanne Corkin, the neuroscientist most intimately involved in his case. [Continue reading...]
Michael Hanlon writes: The question of how the brain produces the feeling of subjective experience, the so-called ‘hard problem’, is a conundrum so intractable that one scientist I know refuses even to discuss it at the dinner table. Another, the British psychologist Stuart Sutherland, declared in 1989 that ‘nothing worth reading has been written on it’. For long periods, it is as if science gives up on the subject in disgust. But the hard problem is back in the news, and a growing number of scientists believe that they have consciousness, if not licked, then at least in their sights.
A triple barrage of neuroscientific, computational and evolutionary artillery promises to reduce the hard problem to a pile of rubble. Today’s consciousness jockeys talk of p‑zombies and Global Workspace Theory, mirror neurones, ego tunnels, and attention schemata. They bow before that deus ex machina of brain science, the functional magnetic resonance imaging (fMRI) machine. Their work is frequently very impressive and it explains a lot. All the same, it is reasonable to doubt whether it can ever hope to land a blow on the hard problem.
For example, fMRI scanners have shown how people’s brains ‘light up’ when they read certain words or see certain pictures. Scientists in California and elsewhere have used clever algorithms to interpret these brain patterns and recover information about the original stimulus — even to the point of being able to reconstruct pictures that the test subject was looking at. This ‘electronic telepathy’ has been hailed as the ultimate death of privacy (which it might be) and as a window on the conscious mind (which it is not).
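The "clever algorithms" mentioned above are, at their core, pattern classifiers trained on brain activity. The article doesn't describe the California teams' actual methods, but the general idea can be sketched with a toy ridge-regression decoder on synthetic "voxel" data (all numbers here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "fMRI" data: 200 trials x 50 voxels, two stimulus classes
# (say, "face" vs "house"), each evoking its own noisy voxel pattern.
n_trials, n_voxels = 200, 50
labels = rng.integers(0, 2, n_trials)              # 0 = face, 1 = house
templates = np.vstack([rng.normal(0, 1, n_voxels),  # class-specific patterns
                       rng.normal(0, 1, n_voxels)])
X = templates[labels] + rng.normal(0, 2.0, (n_trials, n_voxels))

# Ridge-regularised least squares: w = (X'X + aI)^-1 X'y, targets in {-1,+1}
train, test = slice(0, 150), slice(150, None)
alpha = 10.0
A = X[train].T @ X[train] + alpha * np.eye(n_voxels)
w = np.linalg.solve(A, X[train].T @ (labels[train] * 2 - 1))

# Decode held-out trials: the sign of the projection picks the class.
pred = (X[test] @ w > 0).astype(int)
accuracy = (pred == labels[test]).mean()
print(f"decoding accuracy on held-out trials: {accuracy:.2f}")
```

Even this crude linear decoder classifies held-out trials far above chance, which is why such methods can recover what category of stimulus a subject is viewing without saying anything about what the experience feels like.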
The problem is that, even if we know what someone is thinking about, or what they are likely to do, we still don’t know what it’s like to be that person. Hemodynamic changes in your prefrontal cortex might tell me that you are looking at a painting of sunflowers, but then, if I thwacked your shin with a hammer, your screams would tell me you were in pain. Neither lets me know what pain or sunflowers feel like for you, or how those feelings come about. In fact, they don’t even tell us whether you really have feelings at all. One can imagine a creature behaving exactly like a human — walking, talking, running away from danger, mating and telling jokes — with absolutely no internal mental life. Such a creature would be, in the philosophical jargon, a zombie. (Zombies, in their various incarnations, feature a great deal in consciousness arguments.)
Why might an animal need to have experiences (‘qualia’, as they are called by some) rather than merely responses? In this magazine, the American psychologist David Barash summarised some of the current theories. One possibility, he says, is that consciousness evolved to let us overcome the ‘tyranny of pain’. Primitive organisms might be slaves to their immediate wants, but humans have the capacity to reflect on the significance of their sensations, and therefore to make their decisions with a degree of circumspection. This is all very well, except that there is presumably no pain in the non-conscious world to start with, so it is hard to see how the need to avoid it could have propelled consciousness into existence.
Despite such obstacles, the idea is taking root that consciousness isn’t really mysterious at all; complicated, yes, and far from fully understood, but in the end just another biological process that, with a bit more prodding and poking, will soon go the way of DNA, evolution, the circulation of blood, and the biochemistry of photosynthesis. [Continue reading...]
Michael Graziano writes: Scientific talks can get a little dry, so I try to mix it up. I take out my giant hairy orangutan puppet, do some ventriloquism and quickly become entangled in an argument. I’ll be explaining my theory about how the brain — a biological machine — generates consciousness. Kevin, the orangutan, starts heckling me. ‘Yeah, well, I don’t have a brain. But I’m still conscious. What does that do to your theory?’
Kevin is the perfect introduction. Intellectually, nobody is fooled: we all know that there’s nothing inside. But everyone in the audience experiences an illusion of sentience emanating from his hairy head. The effect is automatic: being social animals, we project awareness onto the puppet. Indeed, part of the fun of ventriloquism is experiencing the illusion while knowing, on an intellectual level, that it isn’t real.
Many thinkers have approached consciousness from a first-person vantage point, the kind of philosophical perspective according to which other people’s minds seem essentially unknowable. And yet, as Kevin shows, we spend a lot of mental energy attributing consciousness to other things. We can’t help it, and the fact that we can’t help it ought to tell us something about what consciousness is and what it might be used for. If we evolved to recognise it in others – and to mistakenly attribute it to puppets, characters in stories, and cartoons on a screen — then, despite appearances, it really can’t be sealed up within the privacy of our own heads.
Lately, the problem of consciousness has begun to catch on in neuroscience. How does a brain generate consciousness? In the computer age, it is not hard to imagine how a computing machine might construct, store and spit out the information that ‘I am alive, I am a person, I have memories, the wind is cold, the grass is green,’ and so on. But how does a brain become aware of those propositions? The philosopher David Chalmers has claimed that the first question, how a brain computes information about itself and the surrounding world, is the ‘easy’ problem of consciousness. The second question, how a brain becomes aware of all that computed stuff, is the ‘hard’ problem.
I believe that the easy and the hard problems have gotten switched around. The sheer scale and complexity of the brain’s vast computations makes the easy problem monumentally hard to figure out. How the brain attributes the property of awareness to itself is, by contrast, much easier. If nothing else, it would appear to be a more limited set of computations. In my laboratory at Princeton University, we are working on a specific theory of awareness and its basis in the brain. Our theory explains both the apparent awareness that we can attribute to Kevin and the direct, first-person perspective that we have on our own experience. And the easiest way to introduce it is to travel about half a billion years back in time. [Continue reading...]
The Washington Post reports: It’s called a near-death experience, but the emphasis is on “near.” The heart stops, you feel yourself float up and out of your body. You glide toward the entrance of a tunnel, and a searing bright light envelops your field of vision.
It could be the afterlife, as many people who have come close to dying have asserted. But a new study says it might well be a show created by the brain, which is still very much alive. When the heart stops, neurons in the brain appeared to communicate at an even higher level than normal, perhaps setting off the last picture show, packed with special effects.
“A lot of people believed that what they saw was heaven,” said lead researcher and neurologist Jimo Borjigin. “Science hadn’t given them a convincing alternative.”
Scientists from the University of Michigan recorded electroencephalogram (EEG) signals in nine anesthetized rats after inducing cardiac arrest. Within the first 30 seconds after the heart had stopped, all the mammals displayed a surge of highly synchronized brain activity that had features associated with consciousness and visual activation. The burst of electrical patterns even exceeded levels seen during a normal, awake state.
In other words, they may have been having the rodent version of a near-death experience. [Continue reading...]
Raymond Tallis writes: The grip of neuroscience on the academic and popular imagination is extraordinary. In recent decades, brain scientists have burst out of the laboratory into the public forum. They are everywhere, analysing and explaining every aspect of our humanity, mobilising their expertise to instruct economists, criminologists, educationists, theologians, literary critics, social scientists and even politicians, and in some cases predicting a neuro-savvy utopia in which mankind, blessed with complete self-understanding, will be able to create a truly rational and harmonious future.
So the smile-worthy prediction, reported in the Huffington Post, by Kathleen Taylor, Oxford scientist and author of The Brain Supremacy, that Muslim fundamentalism “may be categorised as mental illness and cured by science” as a result of advances in neuroscience is not especially eccentric. It does, however, make you wonder why the pronouncements of neuroscientists command such a quantity of air-time and even credence.
It would be a mistake to assume their authority is based on revelatory discoveries, comparable to those made in leading-edge physics, which have translated so spectacularly into the evolving gadgetry of daily life. There is no shortage of data pouring out of neuroscience departments. Research papers on the brain run into millions. The key to their influence, however, is the exciting technologies the studies employ, notably various scans used to reveal the activity of the waking, living brain.
The jewel in the neuroscientific crown is functional magnetic resonance imaging (fMRI), justly described by Matt Crawford as “a fast-acting solvent of the critical faculties”. It seems that pretty well any assertion placed next to an fMRI scan will attract credulous attention. Behind this is something that goes deeper than uncritical technophilia. It is the belief that you are your brain, and brain activity is identical with your consciousness, so that peering into the intracranial darkness is the best way of advancing our knowledge of humankind.
Alas, this is based on a simple error. As someone who worked for many years, as a clinician and scientist, with people who had had strokes or suffered from epilepsy, I was acutely aware of the extent to which living an ordinary life was dependent on having a brain in some kind of working order. It did not follow from this that everyday living is being a brain in some kind of working order. The brain is a necessary condition for ordinary consciousness, but not a sufficient condition.
You don’t have to be a Cartesian dualist to accept that we are more than our brains. It’s enough to acknowledge that our consciousness is not tucked away in a particular space, but is irreducibly relational. [Continue reading...]
The New York Times reports: The vagaries of human memory are notorious. A friend insists you were at your 15th class reunion when you know it was your 10th. You distinctly remember that another friend was at your wedding, until she reminds you that you didn’t invite her. Or, more seriously, an eyewitness misidentifies the perpetrator of a terrible crime.
Not only are false, or mistaken, memories common in normal life, but researchers have found it relatively easy to generate false memories of words and images in human subjects. But exactly what goes on in the brain when mistaken memories are formed has remained mysterious.
Now scientists at the Riken-M.I.T. Center for Neural Circuit Genetics at the Massachusetts Institute of Technology say they have created a false memory in a mouse, providing detailed clues to how such memories may form in human brains.
Steve Ramirez, Xu Liu and other scientists, led by Susumu Tonegawa, reported Thursday in the journal Science that they caused mice to remember being shocked in one location, when in reality the electric shock was delivered in a completely different location.
The finding, said Dr. Tonegawa, a Nobel laureate for his work in immunology, and founder of the Picower Institute for Learning and Memory, of which the center is a part, is yet another cautionary reminder of how unreliable memory can be in mice and humans. It adds to evidence he and others first presented last year in the journal Nature that the physical trace of a specific memory can be identified in a group of brain cells as it forms, and activated later by stimulating those same cells.
Although mice are not people, the basic mechanisms of memory formation in mammals are evolutionarily ancient, said Edvard I. Moser, a neuroscientist at the Norwegian University of Science and Technology, who studies spatial memory and navigation and was not part of Dr. Tonegawa’s team.
At this level of brain activity, he said, “the difference between a mouse and a human is quite small.” [Continue reading...]
UCLA Newsroom: UCLA researchers now have the first evidence that bacteria ingested in food can affect brain function in humans. In an early proof-of-concept study of healthy women, they found that women who regularly consumed beneficial bacteria known as probiotics through yogurt showed altered brain function, both while in a resting state and in response to an emotion-recognition task.
The study, conducted by scientists with the Gail and Gerald Oppenheimer Family Center for Neurobiology of Stress, part of the UCLA Division of Digestive Diseases, and the Ahmanson–Lovelace Brain Mapping Center at UCLA, appears in the current online edition of the peer-reviewed journal Gastroenterology.
The discovery that changing the bacterial environment, or microbiota, in the gut can affect the brain carries significant implications for future research that could point the way toward dietary or drug interventions to improve brain function, the researchers said.
“Many of us have a container of yogurt in our refrigerator that we may eat for enjoyment, for calcium or because we think it might help our health in other ways,” said Dr. Kirsten Tillisch, an associate professor of medicine in the digestive diseases division at UCLA’s David Geffen School of Medicine and lead author of the study. “Our findings indicate that some of the contents of yogurt may actually change the way our brain responds to the environment. When we consider the implications of this work, the old sayings ‘you are what you eat’ and ‘gut feelings’ take on new meaning.”
Researchers have known that the brain sends signals to the gut, which is why stress and other emotions can contribute to gastrointestinal symptoms. This study shows what has been suspected but until now had been proved only in animal studies: that signals travel the opposite way as well.
“Time and time again, we hear from patients that they never felt depressed or anxious until they started experiencing problems with their gut,” Tillisch said. “Our study shows that the gut–brain connection is a two-way street.” [Continue reading...]
David Robson writes: Some 36-year-olds choose to collect vintage wine, vinyl records or sports memorabilia. For Richard Simcott, it is languages. His itch to learn has led him to study more than 30 foreign tongues – and he’s not ready to give up.
During our conversation in a London restaurant, he reels off sentences in Spanish, Turkish and Icelandic as easily as I can name the pizza and pasta on our menu. He has learned Dutch on the streets of Rotterdam, Czech in Prague and Polish during a house share with some architects. At home, he talks to his wife in fluent Macedonian.
What’s remarkable about Simcott isn’t just the number and diversity of languages he has mastered. It’s his age. Long before grey hairs appear and waistlines expand, the mind’s cogs are meant to seize up, making it difficult to pick up any new skill, be it a language, the flute, or archery. Even if Simcott had primed his mind for new languages while at school, he should have faced a steep decline in his abilities as the years went by – yet he still devours unfamiliar grammars and strange vocabularies to a high level. “My linguistic landscape is always changing,” he says. “If you’re school-aged, or middle-aged – I don’t think there’s a big difference.”
A decade ago, few neuroscientists would have agreed that adults can rival the learning talents of children. But we needn’t be so defeatist. The mature brain, it turns out, is more supple than anyone thought. “The idea that there’s a critical period for learning in childhood is overrated,” says Gary Marcus, a psychologist at New York University. What’s more, we now understand the best techniques to accelerate knowledge and skill acquisition in adults, so can perhaps unveil a few tricks of the trade of super-learners like Simcott. Whatever you want to learn, it’s never too late to charge those grey cells.
The idea that the mind fossilises as it ages is culturally entrenched. The phrase “an old dog will learn no tricks” is recorded in an 18th century book of proverbs and is probably hundreds of years older.
When researchers finally began to investigate the adult brain’s malleability in the 1960s, their results appeared to agree with the saying. Most insights came indirectly from studies of perception, which suggested that an individual’s visual abilities were capped at a young age. For example, restricting young animals’ vision for a few weeks after birth means they will never manage to see normally. The same is true for people born with cataracts or a lazy eye – repair too late, and the brain fails to use the eye properly for life. “For a very long time, it seemed that those constraints were set in stone after that critical period,” says Daphne Bavelier at the University of Rochester, New York.
These are extreme circumstances, of course, but the evidence suggested that the same neural fossilisation would stifle other kinds of learning. Many of the studies looked at language development – particularly in families of immigrants. While the children picked up new tongues with ease, their parents were still stuttering broken sentences. But if there really were a hard critical period for foreign language learning, it should affect everyone equally, and Simcott’s ability to master a host of languages should be as impossible as a dog playing the piano. [Continue reading...]
The Atlantic: About 40 years ago, researchers first began to suspect that we have neurons in our brains called “place cells.” They’re responsible for helping us (rats and humans alike) find our way in the world, navigating the environment with some internal sense of where we are, how far we’ve come, and how to find our way back home. All of this sounds like the work of maps. But our brains do impressively sophisticated mapping work, too, and in ways we never actively notice.
Every time you walk out your front door and past the mailbox, for instance, a neuron in your hippocampus fires as you move through that exact location – next to the mailbox – with a real-world precision down to as little as 30 centimeters. When you come home from work and pass the same spot at night, the neuron fires again, just as it will the next morning. “Each neuron cares for one place,” says Mayank Mehta, a neurophysicist at UCLA. “And it doesn’t care for any other place in the world.”
This is why these neurons are called “place cells.” And, in constantly shuffling patterns, they generate our cognitive maps of the world. Exactly how they do this, though, has remained a bit of an enigma. The latest research from Mehta and his colleagues, published this month in the online edition of the journal Science, provides more clues. It now appears as if all of the sensory cues around us – the smell of a pizzeria, the feel of a sidewalk, the sound of a passing bus – are much more integral to how our brains map our movement through space than scientists previously believed. [Continue reading...]
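The standard computational picture of a place cell is a "tuning curve": firing rate peaks when the animal is at the cell's preferred location and falls off with distance. A minimal sketch of that idea, with all numbers (field width, peak rate, positions) chosen purely for illustration:

```python
import numpy as np

# Toy place cell: firing rate as a Gaussian tuning curve over 1-D position,
# peaking at the cell's preferred place and falling off within ~30 cm.
def place_cell_rate(position_cm, preferred_cm, width_cm=30.0, peak_hz=20.0):
    """Mean firing rate (Hz) of a hypothetical place cell at a position."""
    return peak_hz * np.exp(-0.5 * ((position_cm - preferred_cm) / width_cm) ** 2)

# A cell whose place field is centred on "the mailbox" at 100 cm:
positions = np.array([0.0, 70.0, 100.0, 130.0, 300.0])
rates = place_cell_rate(positions, preferred_cm=100.0)

for pos, rate in zip(positions, rates):
    print(f"position {pos:6.1f} cm -> {rate:5.2f} Hz")
```

The cell fires vigorously at its one preferred spot and is nearly silent everywhere else, which is the sense in which "each neuron cares for one place"; a population of such cells, each with a different preferred location, tiles the environment into a cognitive map.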
UNC School of Medicine: As the human body fine-tunes its neurological wiring, nerve cells often must fix a faulty connection by amputating an axon — the “business end” of the neuron that sends electrical impulses to tissues or other neurons. It is a dance with death, however, because the molecular poison the neuron deploys to sever an axon could, if uncontained, kill the entire cell.
Researchers from the University of North Carolina School of Medicine have uncovered some surprising insights about the process of axon amputation, or “pruning,” in a study published May 21 in the journal Nature Communications. Axon pruning has mystified scientists curious to know how a neuron can unleash a self-destruct mechanism within its axon, but keep it from spreading to the rest of the cell. The researchers’ findings could offer clues about the processes underlying some neurological disorders.
“Aberrant axon pruning is thought to underlie some of the causes for neurodevelopmental disorders, such as schizophrenia and autism,” said Mohanish Deshmukh, PhD, professor of cell biology and physiology at UNC and the study’s senior author. “This study sheds light on some of the mechanisms by which neurons are able to regulate axon pruning.”
Axon pruning is part of normal development and plays a key role in learning and memory. Another important process, apoptosis — the purposeful death of an entire cell — is also crucial because it allows the body to cull broken or incorrectly placed neurons. But both processes have been linked with disease when improperly regulated. [Continue reading...]
RIKEN: In our interaction with our environment we constantly refer to past experiences stored as memories to guide behavioral decisions. But how memories are formed, stored and then retrieved to assist decision-making remains a mystery. By observing whole-brain activity in live zebrafish, researchers from the RIKEN Brain Science Institute have visualized for the first time how information stored as long-term memory in the cerebral cortex is processed to guide behavioral choices.
The study, published today in the journal Neuron, was carried out by Dr. Tazu Aoki and Dr. Hitoshi Okamoto from the Laboratory for Developmental Gene Regulation, a pioneer in the study of how the brain controls behavior in zebrafish.
The mammalian brain is too large to observe the whole neural circuit in action. But using a technique called calcium imaging, Aoki et al. were able to visualize for the first time the activity of the whole zebrafish brain during memory retrieval.
Calcium imaging takes advantage of the fact that calcium ions enter neurons upon neural activation. By introducing a calcium sensitive fluorescent substance in the neural tissue, it becomes possible to trace the calcium influx in neurons and thus visualize neural activity.
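A standard way to quantify such a fluorescence signal is "ΔF/F": the change in fluorescence relative to a resting baseline. The sketch below is a generic illustration of that measure on simulated data, not the RIKEN team's actual analysis pipeline; the timing, amplitude, and decay constant are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated fluorescence trace: 10 s sampled at 10 Hz, resting level + noise.
t = np.arange(0, 10, 0.1)
baseline = 100.0                                # arbitrary fluorescence units
trace = baseline + rng.normal(0, 1.0, t.size)

# A burst of neural activity at t = 5 s: calcium influx brightens the
# indicator, then the signal decays exponentially as calcium is cleared.
active = t >= 5.0
trace[active] += 40.0 * np.exp(-(t[active] - 5.0) / 1.5)

# dF/F: deviation from the pre-stimulus baseline, normalised by that baseline.
f0 = trace[t < 5.0].mean()
dff = (trace - f0) / f0

print(f"peak dF/F: {dff.max():.2f}")
```

A 40% rise over baseline yields a peak ΔF/F of roughly 0.4; plotting such traces for every neuron at once is, in essence, how whole-brain activity movies like the zebrafish recordings are built.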
The researchers trained transgenic zebrafish expressing a calcium sensitive protein to avoid a mild electric shock using a red LED as cue. By observing the zebrafish brain activity upon presentation of the red LED they were able to visualize the process of remembering the learned avoidance behavior.
They observed spot-like neural activity in the dorsal part of the fish telencephalon, which corresponds to the human cortex, upon presentation of the red LED 24 hours after the training session. No activity was observed when the cue was presented 30 minutes after training.
In another experiment, Aoki et al. show that if this region of the brain is removed, the fish are able to learn the avoidance behavior, remember it short-term, but cannot form any long-term memory of it.
“This indicates that short-term and long-term memories are formed and stored in different parts of the brain. We think that short-term memories must be transferred to the cortical region to be consolidated into long-term memories,” explains Dr. Aoki.
The team then tested whether memories for the best behavioral choices can be modified by new learning. The fish were trained to learn two opposite avoidance behaviors, each associated with a different LED color, blue or red, as a cue. They found that presentation of the different cues led to the activation of different groups of neurons in the telencephalon, indicating that different behavioral programs are stored and retrieved by different populations of neurons.
“Using calcium imaging on zebrafish, we were able to visualize an on-going process of memory consolidation for the first time. This approach opens new avenues for research into memory using zebrafish as model organism,” concludes Dr. Okamoto.
UC Berkeley: Whether we’re listening to Bach or the blues, our brains are wired to make music-color connections depending on how the melodies make us feel, according to new research from the University of California, Berkeley. For instance, Mozart’s jaunty Flute Concerto No. 1 in G major is most often associated with bright yellow and orange, whereas his dour Requiem in D minor is more likely to be linked to dark, bluish gray.
Moreover, people in both the United States and Mexico linked the same pieces of classical orchestral music with the same colors. This suggests that humans share a common emotional palette – when it comes to music and color – that appears to be intuitive and can cross cultural barriers, UC Berkeley researchers said.
“The results were remarkably strong and consistent across individuals and cultures and clearly pointed to the powerful role that emotions play in how the human brain maps from hearing music to seeing colors,” said UC Berkeley vision scientist Stephen Palmer, lead author of a paper published this week in the journal Proceedings of the National Academy of Sciences. [Continue reading...]