Category Archives: Neuroscience

How the brain creates personality: A new theory

Stephen M. Kosslyn and G. Wayne Miller write: It is possible to examine any object — including a brain — at different levels. Take the example of a building. If we want to know whether it will have enough space for a family of five, we focus on the architectural level; if we want to know how easily it could catch fire, we focus on the materials level; and if we want to engineer a product for a brick manufacturer, we focus on molecular structure.

Similarly, if we want to know how the brain gives rise to thoughts, feelings, and behaviors, we want to focus on the bigger picture of how its structure allows it to store and process information — the architecture, as it were. To understand the brain at this level, we don’t have to know everything about the individual connections among brain cells or about any other biochemical process. We use a relatively high level of analysis, akin to architecture in buildings, to characterize relatively large parts of the brain.

To explain the Theory of Cognitive Modes, which specifies general ways of thinking that underlie how a person approaches the world and interacts with other people, we need to provide you with a lot of information. We want you to understand where this theory came from — that we didn’t just pull it out of a hat or make it up out of whole cloth. But there’s no need to lose the forest for the trees: there are only three key points that you will really need to keep in mind.

First, the top parts and the bottom parts of the brain have different functions. The top brain formulates and executes plans (which often involve deciding where to move objects or how to move the body in space), whereas the bottom brain classifies and interprets incoming information about the world. The two parts always work together; most important, the top brain uses information from the bottom brain to formulate its plans (and to reformulate them as they unfold over time).

Second, according to the theory, people vary in the degree to which they tend to rely on each of the two brain systems for functions that are optional (i.e., not dictated by the immediate situation): Some people tend to rely heavily on both brain systems, some rely heavily on the bottom brain system but not the top, some rely heavily on the top but not the bottom, and some don’t rely heavily on either system.

Third, these four scenarios define four basic cognitive modes — general ways of thinking that underlie how a person approaches the world and interacts with other people. According to the Theory of Cognitive Modes, each of us has a particular dominant cognitive mode, which affects how we respond to situations we encounter and how we relate to others. The possible modes are: Mover Mode, Perceiver Mode, Stimulator Mode, and Adaptor Mode. [Continue reading…]
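
The four modes follow mechanically from the theory’s two yes/no factors: heavy reliance (or not) on the top system, and heavy reliance (or not) on the bottom system. As a minimal sketch, here is that 2×2 structure in Python; the mode names and their pairings restate the excerpt above, while the function itself is purely illustrative:

```python
# Toy restatement of the Theory of Cognitive Modes' 2x2 structure:
# each mode is a combination of heavy/light reliance on the top
# (planning) and bottom (classifying/interpreting) brain systems.

MODES = {
    (True, True): "Mover Mode",        # relies heavily on both systems
    (False, True): "Perceiver Mode",   # bottom but not top
    (True, False): "Stimulator Mode",  # top but not bottom
    (False, False): "Adaptor Mode",    # neither system heavily
}

def cognitive_mode(relies_on_top: bool, relies_on_bottom: bool) -> str:
    """Return the dominant cognitive mode for a pattern of reliance."""
    return MODES[(relies_on_top, relies_on_bottom)]

for top in (True, False):
    for bottom in (True, False):
        print(f"top={top!s:<5} bottom={bottom!s:<5} -> "
              f"{cognitive_mode(top, bottom)}")
```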

How memory speaks

Jerome Groopman writes: I began writing these words on what appeared to be an unremarkable Sunday morning. Shortly before sunrise, the bedroom still dim, I awoke and quietly made my way to the kitchen, careful not to disturb my still-sleeping wife. The dark-roast coffee was retrieved from its place in the pantry, four scoops then placed in a filter. While the coffee was brewing, I picked up The New York Times at the door. Scanning the front page, my eyes rested on an article mentioning Svoboda, the far-right Ukrainian political party (svoboda means, I remembered, “freedom”).

I prepared an egg-white omelette and toasted two slices of multigrain bread. After a few sips of coffee, fragments of the night’s dream came to mind: I am rushing to take my final examination in college chemistry, but as I enter the amphitheater where the test is given, no one is there. Am I early? Or in the wrong room? The dream was not new to me. It often occurs before I embark on a project, whether it’s an experiment in the laboratory, a drug to be tested in the clinic, or an article to write on memory.

The start of that Sunday morning seems quite mundane. But when we reflect on the manifold manifestations of memory, the mundane becomes marvelous. Memory is operative not only in recalling the meaning of svoboda, knowing who was sleeping with me in bed, and registering my dream as recurrent, but also in rote tasks: navigating the still-dark bedroom, scooping the coffee, using a knife and fork to eat breakfast. Simple activities of life, hardly noticed, reveal memory as a map, clock, and mirror, vital to our sense of place, time, and person.

This role of memory in virtually every activity of our day is put in sharp focus when it is lost. Su Meck, in I Forgot to Remember, pieces together a fascinating tale of life after suffering head trauma as a young mother. A ceiling fan fell and struck her head:

You might wonder how it feels to wake up one morning and not know who you are. I don’t know. The accident didn’t just wipe out all my memories; it hindered me from making new ones for quite some time. I awoke each day to a house full of strangers…. And this wasn’t just a few days. It was weeks before I recognized my boys when they toddled into the room, months before I knew my own telephone number, years before I was able to find my way home from anywhere. I have no more memory of those first several years after the accident than my own kids have of their first years of life.

A computed tomography (CT) scan of Meck’s brain showed swelling over the right frontal area. But neurologists were at a loss to explain the genesis of her amnesia. Memory does not exist in a single site or region of the central nervous system. There are estimated to be 10 to 100 billion neurons in the human brain, each neuron making about one thousand connections to other neurons at the junctions termed synapses. Learning, and then storing what we learn through life, involve intricate changes in the nature and number of these trillions of neuronal connections. But memory is made not only via alterations at the synaptic level. It also involves regional remodeling of parts of our cortex. Our brain is constantly changing in its elaborate circuitry and, to some degree, configuration. [Continue reading…]
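
The numbers in that paragraph are worth multiplying out, since they are where the “trillions of neuronal connections” come from. A back-of-the-envelope sketch using only the estimates quoted above (10 to 100 billion neurons, roughly one thousand synapses each):

```python
# Back-of-the-envelope scale of the brain's wiring, using the
# article's estimates: 10-100 billion neurons, ~1,000 synapses each.

SYNAPSES_PER_NEURON = 1_000  # rough average cited in the excerpt

for neurons in (10e9, 100e9):  # lower and upper estimates
    synapses = neurons * SYNAPSES_PER_NEURON
    print(f"{neurons:.0e} neurons -> {synapses:.0e} synapses")

# Prints 1e+13 and 1e+14: tens to hundreds of trillions of
# connections, hence "trillions of neuronal connections".
```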

How music hijacks our perception of time

Jonathan Berger writes: One evening, some 40 years ago, I got lost in time. I was at a performance of Schubert’s String Quintet in C major. During the second movement I had the unnerving feeling that time was literally grinding to a halt. The sensation was powerful, visceral, overwhelming. It was a life-changing moment, or, as it felt at the time, a life-changing eon.

It has been my goal ever since to compose music that usurps the perceived flow of time and commandeers the sense of how time passes. Although I’ve learned to manipulate subjective time, I still stand in awe of Schubert’s unparalleled power. Nearly two centuries ago, the composer anticipated the neurological underpinnings of time perception that science has underscored in the past few decades.

The human brain, we have learned, adjusts and recalibrates temporal perception. Our ability to encode and decode sequential information, to integrate and segregate simultaneous signals, is fundamental to human survival. It allows us to find our place in, and navigate, our physical world. But music also demonstrates that time perception is inherently subjective—and an integral part of our lives. “For the time element in music is single,” wrote Thomas Mann in his novel, The Magic Mountain. “Into a section of mortal time music pours itself, thereby inexpressibly enhancing and ennobling what it fills.” [Continue reading…]

Searching for the elephant’s genius inside the largest brain on land

Ferris Jabr writes: Many years ago, while wandering through Amboseli National Park in Kenya, an elephant matriarch named Echo came upon the bones of her former companion Emily. Echo and her family slowed down and began to inspect the remains. They stroked Emily’s skull with their trunks, investigating every crevice; they touched her skeleton gingerly with their padded hind feet; they carried around her tusks. Elephants consistently react this way to other dead elephants, but do not show much interest in deceased rhinos, buffalo or other species. Sometimes elephants will even cover their dead with soil and leaves.

What is going through an elephant’s mind in these moments? We cannot explain their behavior as an instinctual and immediate reaction to a dying or recently perished compatriot. Rather, they seem to understand—even years and years after a friend or relative’s death—that an irreversible change has taken place, that, here on the ground, is an elephant who used to be alive, but no longer is. In other words, elephants grieve.

Such grief is but one of many indications that elephants are exceptionally intelligent, social and empathic creatures. After decades of observing wild elephants—and a series of carefully controlled experiments in the last eight years—scientists now agree that elephants form lifelong kinships, talk to one another with a large vocabulary of rumbles and trumpets, and make group decisions; elephants play, mimic their parents, and cooperate to solve problems; they use tools, console one another when distressed, and probably have a sense of self (see: The Science Is In: Elephants Are Even Smarter Than We Realized).

All this intellect must emerge, in one way or another, from the elephant brain—the largest of any land animal, three times as big as the human brain, with individual neurons that seem to be three to five times the size of human brain cells. [Continue reading…]

Too much to remember?

Benedict Carey writes: People of a certain age (and we know who we are) don’t spend much leisure time reviewing the research into cognitive performance and aging. The story is grim, for one thing: Memory’s speed and accuracy begin to slip around age 25 and keep on slipping.

The story is familiar, too, for anyone who is over 50 and, having finally learned to live fully in the moment, discovers it’s a senior moment. The finding that the brain slows with age is one of the strongest in all of psychology.

Over the years, some scientists have questioned this dotage curve. But these challenges have had an ornery-old-person slant: that the tests were biased toward the young, for example. Or that older people have learned not to care about clearly trivial things, like memory tests. Or that an older mind must organize information differently from one attached to some 22-year-old who records his every Ultimate Frisbee move on Instagram.

Now comes a new kind of challenge to the evidence of a cognitive decline, from a decidedly digital quarter: data mining, based on theories of information processing. In a paper published in Topics in Cognitive Science, a team of linguistic researchers from the University of Tübingen in Germany used advanced learning models to search enormous databases of words and phrases.

Since educated older people generally know more words than younger people, simply by virtue of having been around longer, the experiment simulates what an older brain has to do to retrieve a word. And when the researchers incorporated that difference into the models, the aging “deficits” largely disappeared.

“What shocked me, to be honest, is that for the first half of the time we were doing this project, I totally bought into the idea of age-related cognitive decline in healthy adults,” the lead author, Michael Ramscar, said by email. But the simulations, he added, “fit so well to human data that it slowly forced me to entertain this idea that I didn’t need to invoke decline at all.” [Continue reading…]
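
The logic of Ramscar’s argument can be conveyed with a toy model. The sketch below is not the Tübingen team’s method (they trained large-scale learning models on text corpora); it only illustrates the core claim: if retrieval time grows with the number of words a cue must discriminate among, a system that knows more words responds more slowly even though its machinery is unchanged. All numbers are invented for illustration:

```python
import math

# Toy model: response time grows with the number of candidates a
# retrieval cue must discriminate among (a Hick's-law-style cost).
# Not the Tuebingen group's model; base_ms and cost_ms (the
# "hardware") are held constant across ages.

def retrieval_time_ms(vocab_size: int, base_ms: float = 300.0,
                      cost_ms: float = 30.0) -> float:
    """Simulated time to retrieve one word from a lexicon."""
    return base_ms + cost_ms * math.log2(vocab_size)

young_vocab = 20_000   # hypothetical vocabulary of a 20-year-old
older_vocab = 45_000   # hypothetical vocabulary of a 70-year-old

for label, size in [("young", young_vocab), ("older", older_vocab)]:
    print(f"{label}: {size:,} words -> {retrieval_time_ms(size):.0f} ms")

# The "older" system is slower despite identical parameters: an
# apparent deficit produced purely by knowing more words.
```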

Why we find it difficult to face the future

Alisa Opar writes: The British philosopher Derek Parfit espoused a severely reductionist view of personal identity in his seminal book, Reasons and Persons: It does not exist, at least not in the way we usually consider it. We humans, Parfit argued, are not a consistent identity moving through time, but a chain of successive selves, each tangentially linked to, and yet distinct from, the previous and subsequent ones. The boy who begins to smoke despite knowing that he may suffer from the habit decades later should not be judged harshly: “This boy does not identify with his future self,” Parfit wrote. “His attitude towards this future self is in some ways like his attitude to other people.”

Parfit’s view was controversial even among philosophers. But psychologists are beginning to understand that it may accurately describe our attitudes towards our own decision-making: It turns out that we see our future selves as strangers. Though we will inevitably share their fates, the people we will become in a decade, quarter century, or more, are unknown to us. This impedes our ability to make good choices on their—which of course is our own—behalf. That bright, shiny New Year’s resolution? If you feel perfectly justified in breaking it, it may be because it feels like it was a promise someone else made.

“It’s kind of a weird notion,” says Hal Hershfield, an assistant professor at New York University’s Stern School of Business. “On a psychological and emotional level we really consider that future self as if it’s another person.”

Using fMRI, Hershfield and colleagues studied brain activity changes when people imagine their future and consider their present. They homed in on two areas of the brain called the medial prefrontal cortex and the rostral anterior cingulate cortex, which are more active when a subject thinks about himself than when he thinks of someone else. They found these same areas were more strongly activated when subjects thought of themselves today than of themselves in the future. Their future self “felt” like somebody else. In fact, their neural activity when they described themselves in a decade was similar to that when they described Matt Damon or Natalie Portman. [Continue reading…]

Understanding the psychology shaping negotiations with Iran

“The only way for interaction with Iran is dialogue on an equal footing, confidence-building and mutual respect as well as reducing antagonism and aggression,” Iranian President Hassan Rouhani said in a speech after taking the oath of office last August.

“If you want the right response, don’t speak with Iran in the language of sanctions, speak in the language of respect.”

In the following article, Nicholas Wright and Karim Sadjadpour describe how an understanding of neuroscience — or lack of it — may determine the outcome of negotiations with Iran.

The whole piece is worth reading, but keep this in mind: every single insight that gets attributed to neuroscience has been clearly established without the need to conduct a single brain scan. Indeed, everything that is here being attributed to the “exquisite neural machinery” of the brain can be understood by studying the workings of the human mind and how thought shapes behavior.

It is important to draw a sharp distinction between examining the mind and observing the workings of the brain, because the latter is totally dependent on the output of intermediary electronic scanning devices, whereas minds can study themselves and each other directly and through shared language.

One of the insidious effects of neuroscience is that it promotes a view that understanding the ways brains work has greater intrinsic value than understanding how minds work. What the negotiations with Iran demonstrate, however, is that the exact opposite is true.

To the extent that negotiations are able to advance through the development of trust, this will have nothing to do with anyone’s confidence about what is happening inside anyone’s brain. On the contrary, it will depend on a meeting of minds and mutual understanding. No one will need to understand what is happening in their own or anyone else’s insular cortex; what will most likely make or break the talks is whether the Iranians believe they are being treated fairly. The determination of fairness does not depend on the presence or absence of a particular configuration of neural activity but rather on an assessment of reality.

Treat us as equals, Iran’s president said — and Iranian presidents have been making the same appeal for almost 15 years!

Nicholas Wright and Karim Sadjadpour write: “Imagine being told that you cannot do what everyone else is doing,” appealed Iranian Foreign Minister Javad Zarif in a somber YouTube message defending the country’s nuclear program in November. “Would you back down? Would you relent? Or would you stand your ground?”

While only 14 nations, including Iran, enrich uranium (i.e., “what everyone else is doing”), Zarif’s message raises a question at the heart of ongoing talks to implement a final nuclear settlement with Tehran: Why has the Iranian government subjected its population to the most onerous sanctions regime in contemporary history in order to do this? Indeed, it’s estimated that Iran’s antiquated nuclear program needs one year to enrich as much uranium as Europe’s top facility produces in five hours.

To many, the answer is obvious: Iran is seeking a nuclear weapons capability (which it has arguably already attained), if not nuclear weapons. Yet the numerous frameworks used to explain Iranian motivations—including geopolitics, ideology, nationalism, domestic politics, and threat perception—lead analysts to different conclusions. Does Iran want nuclear weapons to dominate the Middle East, or does it simply want the option to defend itself from hostile opponents both near and far? While there’s no single explanation for Tehran’s actions, if there is a common thread that connects these frameworks and may help illuminate Iranian thinking, it is the brain.

Although neuroscience can’t be divorced from culture, history, and geography, there is no Orientalism of the brain: The fundamental biology of social motivations is the same in Tokyo, Tehran, and Tennessee. It anticipates, for instance, how the mind’s natural instinct to reject perceived unfairness can impede similarly innate desires for accommodation, and how fairness can lead to tragedy. It tells us that genuinely conciliatory gestures are more likely and natural than many believe, and how to make our own conciliatory gestures more effective.

Distilled to their essence, nations are led by and composed of humans, and the success of social animals like humans rests on our ability to control the balance between cooperation and self-interest. The following four lessons from neuroscience may help us understand the obstacles that were surmounted to reach an interim nuclear deal with Iran, and the enormous challenges that still must be overcome in order to reach a comprehensive agreement. [Continue reading…]

How we feel at home

Moheb Costandi writes: Home is more than a place on a map. It evokes a particular set of feelings, and a sense of safety and belonging. Location, memories, and emotions are intertwined within those walls. Over the past few decades, this sentiment has gained solid scientific grounding. And earlier this year, researchers identified some of the cells that help encode our multifaceted homes in the human brain.

In the early 1970s, neuroscientist John O’Keefe of University College London and his colleagues began to uncover the brain mechanisms responsible for navigating space. They implanted electrodes in a part of the brain called the hippocampus in rats and monitored the electrical activity of neurons there. As the animals moved around an enclosure, specific neurons fired in response to particular locations. These neurons, which came to be known as place cells, each had a unique “place field” in which it fired: For example, neuron A might be active when the rat was in the far right corner, near the edge of the enclosure, while neuron B fired when the rat was in the opposite corner.

Since then, further experiments have shown that the hippocampus and its neighboring regions contain at least two other types of brain cells involved in navigation. Grid cells fire at regular spatial intervals as an animal traverses a space, and head direction cells fire when the animal faces a certain direction. Together, place cells, grid cells, and head direction cells form the brain’s GPS, mapping the space around an animal and its location within it.

Neuroscientists assumed that humans, too, navigate their surroundings using these three types of cells. But solid evidence of these cell types came only recently, when a research team implanted electrodes into the brains of epilepsy patients being evaluated before surgery. They measured the activity of neurons in the hippocampus while the patients navigated a computer-generated environment, and found that some of the cells fired at regular intervals, as grid cells in rodents do. The authors of the study, published last August, concluded that the mechanisms of spatial navigation in rodents and humans are likely the same. [Continue reading…]
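
To make the division of labor concrete, here is a deliberately crude sketch of two of the three cell types described above (grid cells are omitted for brevity). It illustrates the definitions, not anyone’s recordings, and every threshold and coordinate is invented:

```python
import math

# Crude caricature of a place cell and a head direction cell.
# A place cell fires when the animal is inside that cell's "place
# field"; a head direction cell fires when the animal faces near the
# cell's preferred heading. All parameters are invented.

def place_cell_fires(pos, field_center, field_radius=0.2):
    """Fire iff the animal is inside this cell's place field."""
    return math.dist(pos, field_center) < field_radius

def head_direction_cell_fires(heading_deg, preferred_deg, tol_deg=15.0):
    """Fire iff the animal faces near this cell's preferred direction."""
    diff = abs((heading_deg - preferred_deg + 180) % 360 - 180)
    return diff < tol_deg

# Neuron A's field sits in one corner of a unit-square enclosure and
# neuron B's in the opposite corner, echoing the rat example above.
neuron_a_field, neuron_b_field = (1.0, 1.0), (0.0, 0.0)
rat_position, rat_heading = (0.9, 0.95), 92.0

print("A fires:", place_cell_fires(rat_position, neuron_a_field))  # True
print("B fires:", place_cell_fires(rat_position, neuron_b_field))  # False
print("HD cell (preferred 90 deg) fires:",
      head_direction_cell_fires(rat_heading, 90.0))                # True
```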

A theory of how networks become conscious

Wired: It’s a question that’s perplexed philosophers for centuries and scientists for decades: Where does consciousness come from? We know it exists, at least in ourselves. But how it arises from chemistry and electricity in our brains is an unsolved mystery.

Neuroscientist Christof Koch, chief scientific officer at the Allen Institute for Brain Science, thinks he might know the answer. According to Koch, consciousness arises within any sufficiently complex, information-processing system. All animals, from humans on down to earthworms, are conscious; even the internet could be. That’s just the way the universe works.

“The electric charge of an electron doesn’t arise out of more elemental properties. It simply has a charge,” says Koch. “Likewise, I argue that we live in a universe of space, time, mass, energy, and consciousness arising out of complex systems.”

What Koch proposes is a scientifically refined version of an ancient philosophical doctrine called panpsychism — and, coming from someone else, it might sound more like spirituality than science. But Koch has devoted the last three decades to studying the neurological basis of consciousness. His work at the Allen Institute now puts him at the forefront of the BRAIN Initiative, the massive new effort to understand how brains work, which will begin next year.

Koch’s insights have been detailed in dozens of scientific articles and a series of books, including last year’s Consciousness: Confessions of a Romantic Reductionist. WIRED talked to Koch about his understanding of this age-old question. [Continue reading…]

Sleep cleanses the brain

The Washington Post reports: While we are asleep, our bodies may be resting, but our brains are busy taking out the trash.

A new study has found that the cleanup system in the brain, responsible for flushing out toxic waste products that cells produce with daily use, goes into overdrive in mice that are asleep. The cells even shrink in size to make for easier cleaning of the spaces around them.

Scientists say this nightly self-clean by the brain provides a compelling biological reason for the restorative power of sleep.

“Sleep puts the brain in another state where we clean out all the byproducts of activity during the daytime,” said study author and University of Rochester neurosurgeon Maiken Nedergaard. Those byproducts include beta-amyloid protein, clumps of which form plaques found in the brains of Alzheimer’s patients.

Staying up all night could prevent the brain from getting rid of these toxins as efficiently, which may explain why sleep deprivation has such strong and immediate consequences. Too little sleep causes mental fog, crankiness, and increased risks of migraine and seizure. Rats deprived of all sleep die within weeks.

Although as essential and universal to the animal kingdom as air and water, sleep is a riddle that has baffled scientists and philosophers for centuries. Drifting off into a reduced consciousness seems evolutionarily foolish, particularly for those creatures in danger of getting eaten or attacked. [Continue reading…]

Henry Gustav Molaison — the man who forgot everything

Steven Shapin writes: In the movie “Groundhog Day,” the TV weatherman Phil Connors finds himself living the same day again and again. This has its advantages, as he has hundreds of chances to get things right. He can learn to speak French, to sculpt ice, to play jazz piano, and to become the kind of person with whom his beautiful colleague Rita might fall in love. But it’s a torment, too. An awful solitude flows from the fact that he’s the only one in Punxsutawney, Pennsylvania, who knows that something has gone terribly wrong with time. Nobody else seems to have any memory of all the previous iterations of the day. What is a new day for Rita is another of the same for Phil. Their realities are different—what passes between them in Phil’s world leaves no trace in hers—as are their senses of selfhood: Phil knows Rita as she cannot know him, because he knows her day after day after day, while she knows him only today. Time, reality, and identity are each curated by memory, but Phil’s and Rita’s memories work differently. From Phil’s point of view, she, and everyone else in Punxsutawney, is suffering from amnesia.

Amnesia comes in distinct varieties. In “retrograde amnesia,” a movie staple, victims are unable to retrieve some or all of their past knowledge — Who am I? Why does this woman say that she’s my wife? — but they can accumulate memories for everything that they experience after the onset of the condition. In the less cinematically attractive “anterograde amnesia,” memory of the past is more or less intact, but those who suffer from it can’t lay down new memories; every person encountered every day is met for the first time. In extremely unfortunate cases, retrograde and anterograde amnesia can occur in the same individual, who is then said to suffer from “transient global amnesia,” a condition that is, thankfully, temporary. Amnesias vary in their duration, scope, and originating events: brain injury, stroke, tumors, epilepsy, electroconvulsive therapy, and psychological trauma are common causes, while drug and alcohol use, malnutrition, and chemotherapy may play a part.

There isn’t a lot that modern medicine can do for amnesiacs. If cerebral bleeding or clots are involved, these may be treated, and occupational and cognitive therapy can help in some cases. Usually, either the condition goes away or amnesiacs learn to live with it as best they can — unless the notion of learning is itself compromised, along with what it means to have a life. Then, a few select amnesiacs disappear from systems of medical treatment and reappear as star players in neuroscience and cognitive psychology.

No star ever shone more brightly in these areas than Henry Gustav Molaison, a patient who, for more than half a century, until his death, in 2008, was known only as H.M., and who is now the subject of a book, “Permanent Present Tense” (Basic), by Suzanne Corkin, the neuroscientist most intimately involved in his case. [Continue reading…]

Can we get our heads around consciousness?

Michael Hanlon writes: The question of how the brain produces the feeling of subjective experience, the so-called ‘hard problem’, is a conundrum so intractable that one scientist I know refuses even to discuss it at the dinner table. Another, the British psychologist Stuart Sutherland, declared in 1989 that ‘nothing worth reading has been written on it’. For long periods, it is as if science gives up on the subject in disgust. But the hard problem is back in the news, and a growing number of scientists believe that they have consciousness, if not licked, then at least in their sights.

A triple barrage of neuroscientific, computational and evolutionary artillery promises to reduce the hard problem to a pile of rubble. Today’s consciousness jockeys talk of p‑zombies and Global Workspace Theory, mirror neurones, ego tunnels, and attention schemata. They bow before that deus ex machina of brain science, the functional magnetic resonance imaging (fMRI) machine. Their work is frequently very impressive and it explains a lot. All the same, it is reasonable to doubt whether it can ever hope to land a blow on the hard problem.

For example, fMRI scanners have shown how people’s brains ‘light up’ when they read certain words or see certain pictures. Scientists in California and elsewhere have used clever algorithms to interpret these brain patterns and recover information about the original stimulus — even to the point of being able to reconstruct pictures that the test subject was looking at. This ‘electronic telepathy’ has been hailed as the ultimate death of privacy (which it might be) and as a window on the conscious mind (which it is not).

The problem is that, even if we know what someone is thinking about, or what they are likely to do, we still don’t know what it’s like to be that person. Hemodynamic changes in your prefrontal cortex might tell me that you are looking at a painting of sunflowers, but then, if I thwacked your shin with a hammer, your screams would tell me you were in pain. Neither lets me know what pain or sunflowers feel like for you, or how those feelings come about. In fact, they don’t even tell us whether you really have feelings at all. One can imagine a creature behaving exactly like a human — walking, talking, running away from danger, mating and telling jokes — with absolutely no internal mental life. Such a creature would be, in the philosophical jargon, a zombie. (Zombies, in their various incarnations, feature a great deal in consciousness arguments.)

Why might an animal need to have experiences (‘qualia’, as they are called by some) rather than merely responses? In this magazine, the American psychologist David Barash summarised some of the current theories. One possibility, he says, is that consciousness evolved to let us overcome the ‘tyranny of pain’. Primitive organisms might be slaves to their immediate wants, but humans have the capacity to reflect on the significance of their sensations, and therefore to make their decisions with a degree of circumspection. This is all very well, except that there is presumably no pain in the non-conscious world to start with, so it is hard to see how the need to avoid it could have propelled consciousness into existence.

Despite such obstacles, the idea is taking root that consciousness isn’t really mysterious at all; complicated, yes, and far from fully understood, but in the end just another biological process that, with a bit more prodding and poking, will soon go the way of DNA, evolution, the circulation of blood, and the biochemistry of photosynthesis. [Continue reading…]

How consciousness works

Michael Graziano writes: Scientific talks can get a little dry, so I try to mix it up. I take out my giant hairy orangutan puppet, do some ventriloquism and quickly become entangled in an argument. I’ll be explaining my theory about how the brain — a biological machine — generates consciousness. Kevin, the orangutan, starts heckling me. ‘Yeah, well, I don’t have a brain. But I’m still conscious. What does that do to your theory?’

Kevin is the perfect introduction. Intellectually, nobody is fooled: we all know that there’s nothing inside. But everyone in the audience experiences an illusion of sentience emanating from his hairy head. The effect is automatic: being social animals, we project awareness onto the puppet. Indeed, part of the fun of ventriloquism is experiencing the illusion while knowing, on an intellectual level, that it isn’t real.

Many thinkers have approached consciousness from a first-person vantage point, the kind of philosophical perspective according to which other people’s minds seem essentially unknowable. And yet, as Kevin shows, we spend a lot of mental energy attributing consciousness to other things. We can’t help it, and the fact that we can’t help it ought to tell us something about what consciousness is and what it might be used for. If we evolved to recognise it in others – and to mistakenly attribute it to puppets, characters in stories, and cartoons on a screen — then, despite appearances, it really can’t be sealed up within the privacy of our own heads.

Lately, the problem of consciousness has begun to catch on in neuroscience. How does a brain generate consciousness? In the computer age, it is not hard to imagine how a computing machine might construct, store and spit out the information that ‘I am alive, I am a person, I have memories, the wind is cold, the grass is green,’ and so on. But how does a brain become aware of those propositions? The philosopher David Chalmers has claimed that the first question, how a brain computes information about itself and the surrounding world, is the ‘easy’ problem of consciousness. The second question, how a brain becomes aware of all that computed stuff, is the ‘hard’ problem.

I believe that the easy and the hard problems have gotten switched around. The sheer scale and complexity of the brain’s vast computations makes the easy problem monumentally hard to figure out. How the brain attributes the property of awareness to itself is, by contrast, much easier. If nothing else, it would appear to be a more limited set of computations. In my laboratory at Princeton University, we are working on a specific theory of awareness and its basis in the brain. Our theory explains both the apparent awareness that we can attribute to Kevin and the direct, first-person perspective that we have on our own experience. And the easiest way to introduce it is to travel about half a billion years back in time. [Continue reading…]

Surge of brain activity may explain near-death experience, study says

The Washington Post reports: It’s called a near-death experience, but the emphasis is on “near.” The heart stops, you feel yourself float up and out of your body. You glide toward the entrance of a tunnel, and a searing bright light envelops your field of vision.

It could be the afterlife, as many people who have come close to dying have asserted. But a new study says it might well be a show created by the brain, which is still very much alive. When the heart stops, neurons in the brain appear to communicate at an even higher level than normal, perhaps setting off the last picture show, packed with special effects.

“A lot of people believed that what they saw was heaven,” said lead researcher and neurologist Jimo Borjigin. “Science hadn’t given them a convincing alternative.”

Scientists from the University of Michigan recorded electroencephalogram (EEG) signals in nine anesthetized rats after inducing cardiac arrest. Within the first 30 seconds after the heart had stopped, all the mammals displayed a surge of highly synchronized brain activity that had features associated with consciousness and visual activation. The burst of electrical patterns even exceeded levels seen during a normal, awake state.

In other words, they may have been having the rodent version of a near-death experience. [Continue reading…]
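
What counts as “highly synchronized” is a measurable property of the recordings. The sketch below uses synthetic signals only (the Michigan team analyzed real EEG with frequency-band-specific measures, not this simple statistic) and summarizes synchrony as the mean correlation across all channel pairs:

```python
import numpy as np

# Illustrative measure of synchrony across EEG channels: the mean
# pairwise correlation. Synthetic data only; not the study's method.

rng = np.random.default_rng(0)
n_channels, n_samples = 6, 1000
t = np.linspace(0, 4, n_samples)  # 4 seconds of "recording"

def mean_pairwise_correlation(signals):
    """Average correlation over all distinct channel pairs."""
    corr = np.corrcoef(signals)  # channels x channels matrix
    return corr[np.triu_indices(len(signals), k=1)].mean()

# Desynchronized state: independent noise on every channel.
independent = rng.normal(size=(n_channels, n_samples))

# "Surge" state: a strong shared oscillation plus channel noise.
shared = np.sin(2 * np.pi * 8 * t)  # shared 8 Hz rhythm
surge = shared + 0.3 * rng.normal(size=(n_channels, n_samples))

print(f"independent noise:  {mean_pairwise_correlation(independent):.2f}")
print(f"shared oscillation: {mean_pairwise_correlation(surge):.2f}")

# The shared-oscillation case scores far higher, the kind of
# signature a "surge of highly synchronized brain activity" names.
```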

Consciousness is irreducibly relational

Raymond Tallis writes: The grip of neuroscience on the academic and popular imagination is extraordinary. In recent decades, brain scientists have burst out of the laboratory into the public forum. They are everywhere, analysing and explaining every aspect of our humanity, mobilising their expertise to instruct economists, criminologists, educationists, theologians, literary critics, social scientists and even politicians, and in some cases predicting a neuro-savvy utopia in which mankind, blessed with complete self-understanding, will be able to create a truly rational and harmonious future.

So the smile-worthy prediction, reported in the Huffington Post, by Kathleen Taylor, Oxford scientist and author of The Brain Supremacy, that Muslim fundamentalism “may be categorised as mental illness and cured by science” as a result of advances in neuroscience is not especially eccentric. It does, however, make you wonder why the pronouncements of neuroscientists command so much air-time and even credence.

It would be a mistake to assume their authority is based on revelatory discoveries, comparable to those made in leading-edge physics, which have translated so spectacularly into the evolving gadgetry of daily life. There is no shortage of data pouring out of neuroscience departments. Research papers on the brain run into millions. The key to their influence, however, is the exciting technologies the studies employ, notably various scans used to reveal the activity of the waking, living brain.

The jewel in the neuroscientific crown is functional magnetic resonance imaging (fMRI), justly described by Matt Crawford as “a fast-acting solvent of the critical faculties”. It seems that pretty well any assertion placed next to an fMRI scan will attract credulous attention. Behind this is something that goes deeper than uncritical technophilia. It is the belief that you are your brain, and brain activity is identical with your consciousness, so that peering into the intracranial darkness is the best way of advancing our knowledge of humankind.

Alas, this is based on a simple error. As someone who worked for many years, as a clinician and scientist, with people who had had strokes or suffered from epilepsy, I was acutely aware of the extent to which living an ordinary life was dependent on having a brain in some kind of working order. It did not follow from this that everyday living is being a brain in some kind of working order. The brain is a necessary condition for ordinary consciousness, but not a sufficient condition.

You don’t have to be a Cartesian dualist to accept that we are more than our brains. It’s enough to acknowledge that our consciousness is not tucked away in a particular space, but is irreducibly relational. [Continue reading…]

Scientists trace memories of things that never happened

The New York Times reports: The vagaries of human memory are notorious. A friend insists you were at your 15th class reunion when you know it was your 10th. You distinctly remember that another friend was at your wedding, until she reminds you that you didn’t invite her. Or, more seriously, an eyewitness misidentifies the perpetrator of a terrible crime.

Not only are false, or mistaken, memories common in normal life, but researchers have found it relatively easy to generate false memories of words and images in human subjects. But exactly what goes on in the brain when mistaken memories are formed has remained mysterious.

Now scientists at the Riken-M.I.T. Center for Neural Circuit Genetics at the Massachusetts Institute of Technology say they have created a false memory in a mouse, providing detailed clues to how such memories may form in human brains.

Steve Ramirez, Xu Liu and other scientists, led by Susumu Tonegawa, reported Thursday in the journal Science that they caused mice to remember being shocked in one location, when in reality the electric shock was delivered in a completely different location.

The finding, said Dr. Tonegawa, a Nobel laureate for his work in immunology, and founder of the Picower Institute for Learning and Memory, of which the center is a part, is yet another cautionary reminder of how unreliable memory can be in mice and humans. It adds to evidence he and others first presented last year in the journal Nature that the physical trace of a specific memory can be identified in a group of brain cells as it forms, and activated later by stimulating those same cells.

Although mice are not people, the basic mechanisms of memory formation in mammals are evolutionarily ancient, said Edvard I. Moser, a neuroscientist at the Norwegian University of Science and Technology, who studies spatial memory and navigation and was not part of Dr. Tonegawa’s team.

At this level of brain activity, he said, “the difference between a mouse and a human is quite small.” [Continue reading…]

How the microbiome provides food for thought

UCLA Newsroom: UCLA researchers now have the first evidence that bacteria ingested in food can affect brain function in humans. In an early proof-of-concept study of healthy women, they found that women who regularly consumed beneficial bacteria known as probiotics through yogurt showed altered brain function, both while in a resting state and in response to an emotion-recognition task.

The study, conducted by scientists with the Gail and Gerald Oppenheimer Family Center for Neurobiology of Stress, part of the UCLA Division of Digestive Diseases, and the Ahmanson–Lovelace Brain Mapping Center at UCLA, appears in the current online edition of the peer-reviewed journal Gastroenterology.

The discovery that changing the bacterial environment, or microbiota, in the gut can affect the brain carries significant implications for future research that could point the way toward dietary or drug interventions to improve brain function, the researchers said.

“Many of us have a container of yogurt in our refrigerator that we may eat for enjoyment, for calcium or because we think it might help our health in other ways,” said Dr. Kirsten Tillisch, an associate professor of medicine in the digestive diseases division at UCLA’s David Geffen School of Medicine and lead author of the study. “Our findings indicate that some of the contents of yogurt may actually change the way our brain responds to the environment. When we consider the implications of this work, the old sayings ‘you are what you eat’ and ‘gut feelings’ take on new meaning.”

Researchers have known that the brain sends signals to the gut, which is why stress and other emotions can contribute to gastrointestinal symptoms. This study shows what has been suspected but until now had been proved only in animal studies: that signals travel the opposite way as well.

“Time and time again, we hear from patients that they never felt depressed or anxious until they started experiencing problems with their gut,” Tillisch said. “Our study shows that the gut–brain connection is a two-way street.” [Continue reading…]

An even better source of gut-mediated brain food than store-bought yogurt is homemade kefir. It is very easy to make, and the “grains” of this probiotic superfood can be used again and again.
