The healing power of silence

Daniel A. Gross writes: One icy night in March 2010, 100 marketing experts piled into the Sea Horse Restaurant in Helsinki, with the modest goal of making a remote and medium-sized country a world-famous tourist destination. The problem was that Finland was known as a rather quiet country, and since 2008, the Country Brand Delegation had been looking for a national brand that would make some noise.

Over drinks at the Sea Horse, the experts puzzled over the various strengths of their nation. Here was a country with exceptional teachers, an abundance of wild berries and mushrooms, and a vibrant cultural capital the size of Nashville, Tennessee. These things fell a bit short of a compelling national identity. Someone jokingly suggested that nudity could be named a national theme — it would emphasize the honesty of Finns. Someone else, less jokingly, proposed that perhaps quiet wasn’t such a bad thing. That got them thinking.

A few months later, the delegation issued a slick “Country Brand Report.” It highlighted a host of marketable themes, including Finland’s renowned educational system and school of functional design. One key theme was brand new: silence. As the report explained, modern society often seems intolerably loud and busy. “Silence is a resource,” it said. It could be marketed just like clean water or wild mushrooms. “In the future, people will be prepared to pay for the experience of silence.”

People already do. In a loud world, silence sells. Noise-canceling headphones retail for hundreds of dollars; the cost of some weeklong silent meditation courses can run into the thousands. Finland saw that it was possible to quite literally make something out of nothing.

In 2011, the Finnish Tourist Board released a series of photographs of lone figures in the wilderness, with the caption “Silence, Please.” An international “country branding” consultant, Simon Anholt, proposed the playful tagline “No talking, but action.” And a Finnish watch company, Rönkkö, launched its own new slogan: “Handmade in Finnish silence.”

“We decided, instead of saying that it’s really empty and really quiet and nobody is talking about anything here, let’s embrace it and make it a good thing,” explains Eva Kiviranta, who manages social media for VisitFinland.com.

Silence is a peculiar starting point for a marketing campaign. After all, you can’t weigh, record, or export it. You can’t eat it, collect it, or give it away. The Finland campaign raises the question of just what the tangible effects of silence really are. Science has begun to pipe up on the subject. In recent years researchers have highlighted the peculiar power of silence to calm our bodies, turn up the volume on our inner thoughts, and attune our connection to the world. Their findings begin where we might expect: with noise.

The word “noise” comes from a Latin root meaning either queasiness or pain. According to the historian Hillel Schwartz, there’s even a Mesopotamian legend in which the gods grow so angry at the clamor of earthly humans that they go on a killing spree. (City-dwellers with loud neighbors may empathize, though hopefully not too closely.)

Dislike of noise has produced some of history’s most eager advocates of silence, as Schwartz explains in his book Making Noise: From Babel to the Big Bang and Beyond. In 1859, the British nurse and social reformer Florence Nightingale wrote, “Unnecessary noise is the most cruel absence of care that can be inflicted on sick or well.” Every careless clatter or banal bit of banter, Nightingale argued, can be a source of alarm, distress, and loss of sleep for recovering patients. She even quoted a lecture that identified “sudden noises” as a cause of death among sick children. [Continue reading...]

What Shakespeare can teach science about language and the limits of the human mind

Jillian Hinchliffe and Seth Frey write: Although [Stephen] Booth is now retired [from the University of California, Berkeley], his work [on Shakespeare] couldn’t be more relevant. In the study of the human mind, old disciplinary boundaries have begun to dissolve and fruitful new relationships between the sciences and humanities have sprung up in their place. When it comes to the cognitive science of language, Booth may be the most prescient literary critic who ever put pen to paper. In his fieldwork in poetic experience, he unwittingly anticipated several language-processing phenomena that cognitive scientists have only recently begun to study. Booth’s work not only provides one of the most original and penetrating looks into the nature of Shakespeare’s genius, it has profound implications for understanding the processes that shape how we think.

Until the early decades of the 20th century, Shakespeare criticism fell primarily into two areas: textual, which grapples with the numerous variants of published works in order to produce an edition as close as possible to the original, and biographical. Scholarship took a more political turn beginning in the 1960s, providing new perspectives from various strains of feminist, Marxist, structuralist, and queer theory. Booth is resolutely dismissive of most of these modes of study. What he cares about is poetics. Specifically, how poetic language operates on and in audiences of a literary work.

Close reading, the school that flourished mid-century and with which Booth’s work is most nearly affiliated, has never gone completely out of style. But Booth’s approach is even more minute—microscopic reading, according to fellow Shakespeare scholar Russ McDonald. And as the microscope opens up new worlds, so does Booth’s critical lens. What makes him radically different from his predecessors is that he doesn’t try to resolve or collapse his readings into any single interpretation. That people are so hung up on interpretation, on meaning, Booth maintains, is “no more than habit.” Instead, he revels in the uncertainty caused by the myriad currents of phonetic, semantic, and ideational patterns at play. [Continue reading...]

Don’t overestimate your untapped brain power

Nathalia Gjersoe writes: Luc Besson’s latest sci-fi romp, Lucy, is based on the premise that the average person only uses 10% of their brain. This brain myth has been fodder for books and movies for decades and is a tantalizing plot device. Alarmingly, however, it seems to be widely accepted as fact. Of those asked, 48% of teachers in the UK, 65% of Americans and 30% of American psychology students endorsed the myth.

In the movie, Lucy absorbs vast quantities of a nootropic that triggers rampant production of new connections between her neurons. As her brain becomes more and more densely connected, Lucy experiences omniscience, omnipotence and omnipresence. Telepathy, telekinesis and time-travel all become possible.

It’s true that increased connectivity between neurons is associated with greater expertise. Musicians who train for years have greater connectivity and activation of those regions of the brain that control their finger movements and those that bind sensory and motor information. This is the first principle of neural connectivity: cells that fire together wire together.

But resources are limited and the brain is incredibly hungry. It takes a huge amount of energy just to keep it electrically ticking over. There is an excellent TEDEd animation here that explains this nicely. The human adult brain makes up only 2% of the body’s mass yet uses 20% of energy intake. Babies’ brains use 60%! Evolution would necessarily cull any redundant parts of such an expensive organ. [Continue reading...]
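
To make that arithmetic concrete, here is a minimal sketch in Python. The 2% and 20% shares (and the 60% figure for infants) come from the passage above; the 70 kg body mass, 1.4 kg brain mass and 2,000 kcal daily intake are assumed typical values, not figures from the article.

```python
# Rough arithmetic behind the "2% of mass, 20% of energy" claim.
# Assumed typical values (not from the article): 70 kg adult,
# ~1.4 kg brain, ~2,000 kcal daily energy intake.

body_mass_kg = 70.0
brain_mass_kg = 1.4
daily_intake_kcal = 2000.0

mass_share = brain_mass_kg / body_mass_kg      # ~0.02 -> ~2% of body mass
brain_kcal_adult = 0.20 * daily_intake_kcal    # ~400 kcal/day at a 20% share
infant_share = 0.60                            # infants: ~60% of intake, per the article

print(f"Brain is ~{mass_share:.0%} of body mass")
print(f"Adult brain burns ~{brain_kcal_adult:.0f} kcal/day at a 20% share")
```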

The orchestration of attention

The New Yorker: Every moment, our brains are bombarded with information, from without and within. The eyes alone convey more than a hundred billion signals to the brain every second. The ears receive another avalanche of sounds. Then there are the fragments of thoughts, conscious and unconscious, that race from one neuron to the next. Much of this data seems random and meaningless. Indeed, for us to function, much of it must be ignored. But clearly not all. How do our brains select the relevant data? How do we decide to pay attention to the turn of a doorknob and ignore the drip of a leaky faucet? How do we become conscious of a certain stimulus, or indeed “conscious” at all?

For decades, philosophers and scientists have debated the process by which we pay attention to things, based on cognitive models of the mind. But, in the view of many modern psychologists and neurobiologists, the “mind” is not some nonmaterial and exotic essence separate from the body. All questions about the mind must ultimately be answered by studies of physical cells, explained in terms of the detailed workings of the more than eighty billion neurons in the brain. At this level, the question is: How do neurons signal to one another and to a cognitive command center that they have something important to say?

“Years ago, we were satisfied to know which areas of the brain light up under various stimuli,” the neuroscientist Robert Desimone told me during a recent visit to his office. “Now we want to know mechanisms.” Desimone directs the McGovern Institute for Brain Research at the Massachusetts Institute of Technology; youthful and trim at the age of sixty-two, he was dressed casually, in a blue pinstripe shirt, and had only the slightest gray in his hair. On the bookshelf of his tidy office were photographs of his two young children; on the wall was a large watercolor titled “Neural Gardens,” depicting a forest of tangled neurons, their spindly axons and dendrites wending downward like roots in rich soil.

Earlier this year, in an article published in the journal Science, Desimone and his colleague Daniel Baldauf reported on an experiment that shed light on the physical mechanism of paying attention. The researchers presented a series of two kinds of images — faces and houses — to their subjects in rapid succession, like passing frames of a movie, and asked them to concentrate on the faces but disregard the houses (or vice versa). The images were “tagged” by being presented at two frequencies — a new face every two-thirds of a second, a new house every half second. By monitoring the frequencies of the electrical activity of the subjects’ brains with magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI), Desimone and Baldauf could determine where in the brain the images were being directed.
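
The frequency-tagging idea can be illustrated with a toy calculation: if the face stream arrives at roughly 1.5 Hz (one image every two-thirds of a second) and the house stream at 2 Hz, the two streams remain separable in the frequency domain of the recorded signal. The Python sketch below is a simplified illustration of that principle, not a reconstruction of the MEG/fMRI analysis Desimone and Baldauf actually performed; the sampling rate, noise level and signal amplitudes are arbitrary.

```python
import numpy as np

# Toy illustration of frequency tagging: two stimulus streams, each
# "tagged" at its own presentation rate, remain separable in the
# frequency domain even when the recorded signal mixes them together.

fs = 200.0                    # sampling rate in Hz (assumed)
t = np.arange(0, 20, 1 / fs)  # 20 seconds of "recording"

face_tag = 1.5                # a new face every 2/3 s  -> 1.5 Hz
house_tag = 2.0               # a new house every 0.5 s -> 2.0 Hz

rng = np.random.default_rng(0)
signal = (np.sin(2 * np.pi * face_tag * t)
          + 0.7 * np.sin(2 * np.pi * house_tag * t)
          + 0.5 * rng.standard_normal(t.size))   # background "brain noise"

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# The two tagged frequencies stand out as the largest peaks.
for f in (face_tag, house_tag):
    idx = np.argmin(np.abs(freqs - f))
    print(f"power near {f:.1f} Hz: {spectrum[idx]:.1f}")
```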

The scientists found that, even though the two sets of images were presented to the eye almost on top of each other, they were processed by different places in the brain — the face images by a particular region on the surface of the temporal lobe that is known to specialize in facial recognition, and the house images by a neighboring but separate group of neurons specializing in place recognition.

Most importantly, the neurons in the two regions behaved differently. When the subjects were told to concentrate on the faces and to disregard the houses, the neurons in the face location fired in synchrony, like a group of people singing in unison, while the neurons in the house location fired like a group of people singing out of synch, each beginning at a random point in the score. When the subjects concentrated instead on houses, the reverse happened. [Continue reading...]
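
The payoff of synchrony for a recording device can also be shown with a toy model: the summed activity of neurons firing in phase is far larger than the summed activity of the same number of neurons firing at random phases, which largely cancel one another. A minimal sketch, with made-up numbers:

```python
import numpy as np

# Toy model: 100 "neurons", each contributing a 10 Hz sinusoid.
# In-phase contributions add up; random-phase contributions mostly cancel.

n_neurons = 100
fs = 1000.0
t = np.arange(0, 1, 1 / fs)
freq = 10.0

rng = np.random.default_rng(1)

in_phase = sum(np.sin(2 * np.pi * freq * t) for _ in range(n_neurons))
random_phase = sum(np.sin(2 * np.pi * freq * t + rng.uniform(0, 2 * np.pi))
                   for _ in range(n_neurons))

print("peak amplitude, synchronous population: ", np.abs(in_phase).max())      # ~100
print("peak amplitude, asynchronous population:", np.abs(random_phase).max())  # ~sqrt(100) scale
```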

Brain shrinkage, poor concentration, anxiety, and depression linked to media-multitasking

Simultaneously using mobile phones, laptops and other media devices could be changing the structure of our brains, according to new University of Sussex research.

A study published today (24 September) in PLOS ONE reveals that people who frequently use several media devices at the same time have lower grey-matter density in one particular region of the brain compared to those who use just one device occasionally.

The research supports earlier studies showing connections between high media-multitasking activity and poor attention in the face of distractions, along with emotional problems such as depression and anxiety.

But neuroscientists Kep Kee Loh and Dr Ryota Kanai point out that their study reveals a link rather than causality and that a long-term study needs to be carried out to understand whether high concurrent media usage leads to changes in the brain structure, or whether those with less-dense grey matter are more attracted to media multitasking. [Continue reading...]

We are more rational than those who nudge us

Steven Poole writes: Humanity’s achievements and its self-perception are today at curious odds. We can put autonomous robots on Mars and genetically engineer malarial mosquitoes to be sterile, yet the news from popular psychology, neuroscience, economics and other fields is that we are not as rational as we like to assume. We are prey to a dismaying variety of hard-wired errors. We prefer winning to being right. At best, so the story goes, our faculty of reason is at constant war with an irrational darkness within. At worst, we should abandon the attempt to be rational altogether.

The present climate of distrust in our reasoning capacity draws much of its impetus from the field of behavioural economics, and particularly from work by Daniel Kahneman and Amos Tversky in the 1980s, summarised in Kahneman’s bestselling Thinking, Fast and Slow (2011). There, Kahneman divides the mind into two allegorical systems, the intuitive ‘System 1’, which often gives wrong answers, and the reflective reasoning of ‘System 2’. ‘The attentive System 2 is who we think we are,’ he writes; but it is the intuitive, biased, ‘irrational’ System 1 that is in charge most of the time.

Other versions of the message are expressed in more strongly negative terms. You Are Not So Smart (2011) is a bestselling book by David McRaney on cognitive bias. According to the study ‘Why Do Humans Reason?’ (2011) by the cognitive scientists Hugo Mercier and Dan Sperber, our supposedly rational faculties evolved not to find ‘truth’ but merely to win arguments. And in The Righteous Mind (2012), the psychologist Jonathan Haidt calls the idea that reason is ‘our most noble attribute’ a mere ‘delusion’. The worship of reason, he adds, ‘is an example of faith in something that does not exist’. Your brain, runs the now-prevailing wisdom, is mainly a tangled, damp and contingently cobbled-together knot of cognitive biases and fear.

This is a scientised version of original sin. And its eager adoption by today’s governments threatens social consequences that many might find troubling. A culture that believes its citizens are not reliably competent thinkers will treat those citizens differently to one that respects their reflective autonomy. Which kind of culture do we want to be? And we do have a choice. Because it turns out that the modern vision of compromised rationality is more open to challenge than many of its followers accept. [Continue reading...]

Your brain on metaphors

Michael Chorost writes:

The player kicked the ball.
The patient kicked the habit.
The villain kicked the bucket.

The verbs are the same. The syntax is identical. Does the brain notice, or care, that the first is literal, the second metaphorical, the third idiomatic?

It sounds like a question that only a linguist could love. But neuroscientists have been trying to answer it using exotic brain-scanning technologies. Their findings have varied wildly, in some cases contradicting one another. If they make progress, the payoff will be big. Their findings will enrich a theory that aims to explain how wet masses of neurons can understand anything at all. And they may drive a stake into the widespread assumption that computers will inevitably become conscious in a humanlike way.

The hypothesis driving their work is that metaphor is central to language. Metaphor used to be thought of as merely poetic ornamentation, aesthetically pretty but otherwise irrelevant. “Love is a rose, but you better not pick it,” sang Neil Young in 1977, riffing on the timeworn comparison between a sexual partner and a pollinating perennial. For centuries, metaphor was just the place where poets went to show off.

But in their 1980 book, Metaphors We Live By, the linguist George Lakoff (at the University of California at Berkeley) and the philosopher Mark Johnson (now at the University of Oregon) revolutionized linguistics by showing that metaphor is actually a fundamental constituent of language. For example, they showed that in the seemingly literal statement “He’s out of sight,” the visual field is metaphorized as a container that holds things. The visual field isn’t really a container, of course; one simply sees objects or not. But the container metaphor is so ubiquitous that it wasn’t even recognized as a metaphor until Lakoff and Johnson pointed it out.

From such examples they argued that ordinary language is saturated with metaphors. Our eyes point to where we’re going, so we tend to speak of future time as being “ahead” of us. When things increase, they tend to go up relative to us, so we tend to speak of stocks “rising” instead of getting more expensive. “Our ordinary conceptual system is fundamentally metaphorical in nature,” they wrote. [Continue reading...]

Humans are wired for bad news

Jacob Burak writes: I have good news and bad news. Which would you like first? If it’s bad news, you’re in good company – that’s what most people pick. But why?

Negative events affect us more than positive ones. We remember them more vividly and they play a larger role in shaping our lives. Farewells, accidents, bad parenting, financial losses and even a random snide comment take up most of our psychic space, leaving little room for compliments or pleasant experiences to help us along life’s challenging path. The staggering human ability to adapt ensures that joy over a salary hike will abate within months, leaving only a benchmark for future raises. We feel pain, but not the absence of it.

Hundreds of scientific studies from around the world confirm our negativity bias: while a good day has no lasting effect on the following day, a bad day carries over. We process negative data faster and more thoroughly than positive data, and they affect us longer. Socially, we invest more in avoiding a bad reputation than in building a good one. Emotionally, we go to greater lengths to avoid a bad mood than to experience a good one. Pessimists tend to assess their health more accurately than optimists. In our era of political correctness, negative remarks stand out and seem more authentic. People – even babies as young as six months old – are quick to spot an angry face in a crowd, but slower to pick out a happy one; in fact, no matter how many smiles we see in that crowd, we will always spot the angry face first. [Continue reading...]

Study reveals rats show regret, a cognitive behavior once thought to be uniquely human

EurekAlert!: New research from the Department of Neuroscience at the University of Minnesota reveals that rats show regret, a cognitive behavior once thought to be uniquely and fundamentally human.

Research findings were recently published in Nature Neuroscience.

To measure the cognitive behavior of regret, A. David Redish, Ph.D., a professor of neuroscience in the University of Minnesota Department of Neuroscience, and Adam Steiner, a graduate student in the Graduate Program in Neuroscience, who led the study, started from the definitions of regret that economists and psychologists have identified in the past.

“Regret is the recognition that you made a mistake, that if you had done something else, you would have been better off,” said Redish. “The difficult part of this study was separating regret from disappointment, which is when things aren’t as good as you would have hoped. The key to distinguishing between the two was letting the rats choose what to do.” [Continue reading...]

The boundaries delineating what is taken to be uniquely human are constantly being challenged by new scientific findings. But it’s worth asking why those boundaries were there in the first place.

Surely the scientific approach when investigating a cognitive state such as regret would be to start out without making any suppositions about what non-humans do or don’t experience.

The idea that there is something uniquely human about regret seems like a vestige of biblically inspired notions of human uniqueness.

That as humans we might be unaware of the regrets of rats says much less about what rats are capable of experiencing than it says about our capacity to imagine non-human experience.

Yet, rationally at least, it seems no great leap to assume that any creature that makes choices will also experience something resembling regret.

A cat learning to hunt surely feels something when it makes a premature strike, having yet to master the right balance between stalking and attacking its prey. That feeling is most likely some form of discomfort that spurs learning. The cat has no names for its feelings yet feels them nonetheless.

That animals lack some of the means through which humans convey their own feelings says much more about our powers of description than their capacity to feel.

Cynicism is toxic

Cynics fool themselves by thinking they can’t be fooled.

The cynic imagines he’s guarding himself against being duped. He’s not naive, he’s worldly wise, so he’s not about to get taken in — but this psychic insulation comes at a price.

The cynic is cautious and mistrustful. Worst of all, the cynic, by relying too much on his own counsel, saps the foundation of curiosity, which is the ability to be surprised.

While the ability to develop and sustain an open mind has obvious psychological value, neurologists now suggest that it may also be necessary for the health of the brain: cynicism has been linked to dementia.

One of the researchers in a new study suggests that the latest findings may offer insights on how to reduce the risks of dementia, yet that seems to imply that people might be less inclined to become cynical simply by knowing that it’s bad for their health. How are we to reduce the risks of becoming cynical in the first place?

One of the most disturbing findings of a recent Pew Research Center survey, Millennials in Adulthood, was this:

In response to a long-standing social science survey question, “Generally speaking, would you say that most people can be trusted or that you can’t be too careful in dealing with people,” just 19% of Millennials say most people can be trusted, compared with 31% of Gen Xers, 37% of Silents and 40% of Boomers.

While this trust deficit among Millennials no doubt has multiple causes, such as the socially fragmented nature of our digital world, I don’t believe that there has ever before been a generation so thoroughly trained in fear. Beneath cynicism lurks fear.

The fear may have calmed greatly since the days of post-9/11 hysteria, yet it has not gone away. It’s the background noise of American life. It might no longer be focused so strongly on terrorism, since there are plenty of other reasons to fear — some baseless, some over-stated, and some underestimated. But the aggregation of all these fears produces a pervasive mistrust of life.

ScienceDaily: People with high levels of cynical distrust may be more likely to develop dementia, according to a study published in the May 28, 2014, online issue of Neurology®, the medical journal of the American Academy of Neurology.

Cynical distrust, which is defined as the belief that others are mainly motivated by selfish concerns, has been associated with other health problems, such as heart disease. This is the first study to look at the relationship between cynicism and dementia.

“These results add to the evidence that people’s view on life and personality may have an impact on their health,” said study author Anna-Maija Tolppanen, PhD, of the University of Eastern Finland in Kuopio. “Understanding how a personality trait like cynicism affects risk for dementia might provide us with important insights on how to reduce risks for dementia.”

For the study, 1,449 people with an average age of 71 were given tests for dementia and a questionnaire to measure their level of cynicism. The questionnaire has been shown to be reliable, and people’s scores tend to remain stable over periods of several years. People are asked how much they agree with statements such as “I think most people would lie to get ahead,” “It is safer to trust nobody” and “Most people will use somewhat unfair reasons to gain profit or an advantage rather than lose it.” Based on their scores, participants were grouped in low, moderate and high levels of cynical distrust.

A total of 622 people completed two tests for dementia, with the last one an average of eight years after the study started. During that time, 46 people were diagnosed with dementia. Once researchers adjusted for other factors that could affect dementia risk, such as high blood pressure, high cholesterol and smoking, people with high levels of cynical distrust were three times more likely to develop dementia than people with low levels of cynicism. Of the 164 people with high levels of cynicism, 14 people developed dementia, compared to nine of the 212 people with low levels of cynicism.
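
For readers who want to see where the headline figure comes from, the raw counts quoted above imply an unadjusted risk ratio of roughly two, while the roughly threefold figure is what the authors report after adjusting for other risk factors. A quick check of the arithmetic, using only the numbers given here:

```python
# Unadjusted dementia risk from the counts quoted above.
high_cases, high_total = 14, 164    # high cynical distrust
low_cases, low_total = 9, 212       # low cynical distrust

risk_high = high_cases / high_total   # ~0.085
risk_low = low_cases / low_total      # ~0.042

print(f"risk, high cynicism: {risk_high:.1%}")
print(f"risk, low cynicism:  {risk_low:.1%}")
print(f"unadjusted risk ratio: {risk_high / risk_low:.1f}")  # ~2.0; ~3x after adjustment, per the study
```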

The study also looked at whether people with high levels of cynicism were more likely to die sooner than people with low levels of cynicism. A total of 1,146 people were included in this part of the analysis, and 361 people died during the average of 10 years of follow-up. High cynicism was initially associated with earlier death, but after researchers accounted for factors such as socioeconomic status, behaviors such as smoking and health status, there was no longer any link between cynicism and earlier death.

How the brain creates personality: A new theory

Stephen M. Kosslyn and G. Wayne Miller write: It is possible to examine any object — including a brain — at different levels. Take the example of a building. If we want to know whether the house will have enough space for a family of five, we want to focus on the architectural level; if we want to know how easily it could catch fire, we want to focus on the materials level; and if we want to engineer a product for a brick manufacturer, we focus on molecular structure.

Similarly, if we want to know how the brain gives rise to thoughts, feelings, and behaviors, we want to focus on the bigger picture of how its structure allows it to store and process information — the architecture, as it were. To understand the brain at this level, we don’t have to know everything about the individual connections among brain cells or about any other biochemical process. We use a relatively high level of analysis, akin to architecture in buildings, to characterize relatively large parts of the brain.

To explain the Theory of Cognitive Modes, which specifies general ways of thinking that underlie how a person approaches the world and interacts with other people, we need to provide you with a lot of information. We want you to understand where this theory came from — that we didn’t just pull it out of a hat or make it up out of whole cloth. But there’s no need to lose the forest for the trees: there are only three key points that you will really need to keep in mind.

First, the top parts and the bottom parts of the brain have different functions. The top brain formulates and executes plans (which often involve deciding where to move objects or how to move the body in space), whereas the bottom brain classifies and interprets incoming information about the world. The two halves always work together; most important, the top brain uses information from the bottom brain to formulate its plans (and to reformulate them, as they unfold over time).

Second, according to the theory, people vary in the degree that they tend to rely on each of the two brain systems for functions that are optional (i.e., not dictated by the immediate situation): Some people tend to rely heavily on both brain systems, some rely heavily on the bottom brain system but not the top, some rely heavily on the top but not the bottom, and some don’t rely heavily on either system.

Third, these four scenarios define four basic cognitive modes — general ways of thinking that underlie how a person approaches the world and interacts with other people. According to the Theory of Cognitive Modes, each of us has a particular dominant cognitive mode, which affects how we respond to situations we encounter and how we relate to others. The possible modes are: Mover Mode, Perceiver Mode, Stimulator Mode, and Adaptor Mode. [Continue reading...]
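
Put differently, the four modes come from crossing two factors: whether a person relies heavily on the top-brain system and whether they rely heavily on the bottom-brain system. The sketch below enumerates that 2 × 2 space; note that the pairing of particular combinations with particular mode names is not spelled out in the excerpt above, so the labels used here follow the usual presentation of Kosslyn and Miller’s theory and should be read as an assumption.

```python
# 2x2 space of the Theory of Cognitive Modes: reliance on the top-brain
# system (planning) crossed with reliance on the bottom-brain system
# (classifying/interpreting). The label assigned to each cell follows the
# usual presentation of the theory and is an assumption, not quoted text.

MODES = {
    (True,  True):  "Mover Mode",       # relies heavily on both systems
    (False, True):  "Perceiver Mode",   # bottom-heavy, not top
    (True,  False): "Stimulator Mode",  # top-heavy, not bottom
    (False, False): "Adaptor Mode",     # relies heavily on neither
}

def cognitive_mode(top_heavy: bool, bottom_heavy: bool) -> str:
    """Return the mode label for a given pattern of optional reliance."""
    return MODES[(top_heavy, bottom_heavy)]

for combo, name in MODES.items():
    print(combo, "->", name)
```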

How memory speaks

Jerome Groopman writes: I began writing these words on what appeared to be an unremarkable Sunday morning. Shortly before sunrise, the bedroom still dim, I awoke and quietly made my way to the kitchen, careful not to disturb my still-sleeping wife. The dark-roast coffee was retrieved from its place in the pantry, four scoops then placed in a filter. While the coffee was brewing, I picked up The New York Times at the door. Scanning the front page, my eyes rested on an article mentioning Svoboda, the far-right Ukrainian political party (svoboda, I remembered, means “freedom”).

I prepared an egg-white omelette and toasted two slices of multigrain bread. After a few sips of coffee, fragments of the night’s dream came to mind: I am rushing to take my final examination in college chemistry, but as I enter the amphitheater where the test is given, no one is there. Am I early? Or in the wrong room? The dream was not new to me. It often occurs before I embark on a project, whether it’s an experiment in the laboratory, a drug to be tested in the clinic, or an article to write on memory.

The start of that Sunday morning seems quite mundane. But when we reflect on the manifold manifestations of memory, the mundane becomes marvelous. Memory is operative not only in recalling the meaning of svoboda, knowing who was sleeping with me in bed, and registering my dream as recurrent, but also in rote tasks: navigating the still-dark bedroom, scooping the coffee, using a knife and fork to eat breakfast. Simple activities of life, hardly noticed, reveal memory as a map, clock, and mirror, vital to our sense of place, time, and person.

This role of memory in virtually every activity of our day is put in sharp focus when it is lost. Su Meck, in I Forgot to Remember, pieces together a fascinating tale of life after suffering head trauma as a young mother. A ceiling fan fell and struck her head:

You might wonder how it feels to wake up one morning and not know who you are. I don’t know. The accident didn’t just wipe out all my memories; it hindered me from making new ones for quite some time. I awoke each day to a house full of strangers…. And this wasn’t just a few days. It was weeks before I recognized my boys when they toddled into the room, months before I knew my own telephone number, years before I was able to find my way home from anywhere. I have no more memory of those first several years after the accident than my own kids have of their first years of life.

A computed tomography (CT) scan of Meck’s brain showed swelling over the right frontal area. But neurologists were at a loss to explain the genesis of her amnesia. Memory does not exist in a single site or region of the central nervous system. There are estimated to be 10 to 100 billion neurons in the human brain, each neuron making about one thousand connections to other neurons at the junctions termed synapses. Learning, and then storing what we learn through life, involve intricate changes in the nature and number of these trillions of neuronal connections. But memory is made not only via alterations at the synaptic level. It also involves regional remodeling of parts of our cortex. Our brain is constantly changing in its elaborate circuitry and, to some degree, configuration. [Continue reading...]
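
The scale involved is easy to make concrete: tens of billions of neurons, each with on the order of a thousand connections, gives tens of trillions of synapses. A one-line check of that arithmetic, using only the figures quoted above:

```python
# Order-of-magnitude check on the figures quoted above.
neurons_low, neurons_high = 10e9, 100e9   # 10 to 100 billion neurons
connections_per_neuron = 1_000            # ~1,000 synapses per neuron

print(f"{neurons_low * connections_per_neuron:.0e} to "
      f"{neurons_high * connections_per_neuron:.0e} connections")  # 1e13 to 1e14
```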

How music hijacks our perception of time

Jonathan Berger writes: One evening, some 40 years ago, I got lost in time. I was at a performance of Schubert’s String Quintet in C major. During the second movement I had the unnerving feeling that time was literally grinding to a halt. The sensation was powerful, visceral, overwhelming. It was a life-changing moment, or, as it felt at the time, a life-changing eon.

It has been my goal ever since to compose music that usurps the perceived flow of time and commandeers the sense of how time passes. Although I’ve learned to manipulate subjective time, I still stand in awe of Schubert’s unparalleled power. Nearly two centuries ago, the composer anticipated the neurological underpinnings of time perception that science has underscored in the past few decades.

The human brain, we have learned, adjusts and recalibrates temporal perception. Our ability to encode and decode sequential information, to integrate and segregate simultaneous signals, is fundamental to human survival. It allows us to find our place in, and navigate, our physical world. But music also demonstrates that time perception is inherently subjective—and an integral part of our lives. “For the time element in music is single,” wrote Thomas Mann in his novel, The Magic Mountain. “Into a section of mortal time music pours itself, thereby inexpressibly enhancing and ennobling what it fills.” [Continue reading...]

Searching for the elephant’s genius inside the largest brain on land

Ferris Jabr writes: Many years ago, while wandering through Amboseli National Park in Kenya, an elephant matriarch named Echo came upon the bones of her former companion Emily. Echo and her family slowed down and began to inspect the remains. They stroked Emily’s skull with their trunks, investigating every crevice; they touched her skeleton gingerly with their padded hind feet; they carried around her tusks. Elephants consistently react this way to other dead elephants, but do not show much interest in deceased rhinos, buffalo or other species. Sometimes elephants will even cover their dead with soil and leaves.

What is going through an elephant’s mind in these moments? We cannot explain their behavior as an instinctual and immediate reaction to a dying or recently perished compatriot. Rather, they seem to understand—even years and years after a friend or relative’s death—that an irreversible change has taken place, that, here on the ground, is an elephant who used to be alive, but no longer is. In other words, elephants grieve.

Such grief is but one of many indications that elephants are exceptionally intelligent, social and empathic creatures. After decades of observing wild elephants—and a series of carefully controlled experiments in the last eight years—scientists now agree that elephants form lifelong kinships, talk to one another with a large vocabulary of rumbles and trumpets and make group decisions; elephants play, mimic their parents and cooperate to solve problems; they use tools, console one another when distressed, and probably have a sense of self (see “The Science Is In: Elephants Are Even Smarter Than We Realized”).

All this intellect must emerge, in one way or another, from the elephant brain—the largest of any land animal, three times as big as the human brain with individual neurons that seem to be three to five times the size of human brain cells. [Continue reading...]

Too much to remember?

Benedict Carey writes: People of a certain age (and we know who we are) don’t spend much leisure time reviewing the research into cognitive performance and aging. The story is grim, for one thing: Memory’s speed and accuracy begin to slip around age 25 and keep on slipping.

The story is familiar, too, for anyone who is over 50 and, having finally learned to live fully in the moment, discovers it’s a senior moment. The finding that the brain slows with age is one of the strongest in all of psychology.

Over the years, some scientists have questioned this dotage curve. But these challenges have had an ornery-old-person slant: that the tests were biased toward the young, for example. Or that older people have learned not to care about clearly trivial things, like memory tests. Or that an older mind must organize information differently from one attached to some 22-year-old who records his every Ultimate Frisbee move on Instagram.

Now comes a new kind of challenge to the evidence of a cognitive decline, from a decidedly digital quarter: data mining, based on theories of information processing. In a paper published in Topics in Cognitive Science, a team of linguistic researchers from the University of Tübingen in Germany used advanced learning models to search enormous databases of words and phrases.

Since educated older people generally know more words than younger people, simply by virtue of having been around longer, the experiment simulates what an older brain has to do to retrieve a word. And when the researchers incorporated that difference into the models, the aging “deficits” largely disappeared.
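
The underlying argument can be made concrete with a toy simulation. This is only an illustration of the reasoning, not the Tübingen group’s learning models: if retrieval time grows with the size of the lexicon being searched, then a larger vocabulary by itself produces slower responses even when per-item processing speed is unchanged. The vocabulary sizes and the logarithmic search model below are assumptions chosen for illustration.

```python
import math

# Toy illustration of the "bigger lexicon, slower retrieval" argument.
# Vocabulary sizes are assumed round numbers, not figures from the paper,
# and the log-time lookup is a stand-in for the authors' learning models.

def retrieval_time_ms(vocab_size: int, per_step_ms: float = 30.0) -> float:
    """Assume lookup time grows with the log of the number of stored words."""
    return per_step_ms * math.log2(vocab_size)

young_vocab = 30_000    # assumed vocabulary of a younger adult
older_vocab = 60_000    # assumed larger vocabulary after decades of reading

print(f"younger speaker: {retrieval_time_ms(young_vocab):.0f} ms")
print(f"older speaker:   {retrieval_time_ms(older_vocab):.0f} ms")
# Same per-step speed, slower retrieval -- a size effect, not a decline.
```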

“What shocked me, to be honest, is that for the first half of the time we were doing this project, I totally bought into the idea of age-related cognitive decline in healthy adults,” the lead author, Michael Ramscar, said by email. But the simulations, he added, “fit so well to human data that it slowly forced me to entertain this idea that I didn’t need to invoke decline at all.” [Continue reading...]

Why we find it difficult to face the future

Alisa Opar writes: The British philosopher Derek Parfit espoused a severely reductionist view of personal identity in his seminal book, Reasons and Persons: It does not exist, at least not in the way we usually consider it. We humans, Parfit argued, are not a consistent identity moving through time, but a chain of successive selves, each tangentially linked to, and yet distinct from, the previous and subsequent ones. The boy who begins to smoke despite knowing that he may suffer from the habit decades later should not be judged harshly: “This boy does not identify with his future self,” Parfit wrote. “His attitude towards this future self is in some ways like his attitude to other people.”

Parfit’s view was controversial even among philosophers. But psychologists are beginning to understand that it may accurately describe our attitudes towards our own decision-making: It turns out that we see our future selves as strangers. Though we will inevitably share their fates, the people we will become in a decade, quarter century, or more, are unknown to us. This impedes our ability to make good choices on their—which of course is our own—behalf. That bright, shiny New Year’s resolution? If you feel perfectly justified in breaking it, it may be because it feels like it was a promise someone else made.

“It’s kind of a weird notion,” says Hal Hershfield, an assistant professor at New York University’s Stern School of Business. “On a psychological and emotional level we really consider that future self as if it’s another person.”

Using fMRI, Hershfield and colleagues studied brain activity changes when people imagine their future and consider their present. They homed in on two areas of the brain called the medial prefrontal cortex and the rostral anterior cingulate cortex, which are more active when a subject thinks about himself than when he thinks of someone else. They found these same areas were more strongly activated when subjects thought of themselves today, than of themselves in the future. Their future self “felt” like somebody else. In fact, their neural activity when they described themselves in a decade was similar to that when they described Matt Damon or Natalie Portman. [Continue reading...]

Understanding the psychology shaping negotiations with Iran

“The only way for interaction with Iran is dialogue on an equal footing, confidence-building and mutual respect as well as reducing antagonism and aggression,” Iranian President Hassan Rouhani said in a speech after taking the oath of office last August.

“If you want the right response, don’t speak with Iran in the language of sanctions, speak in the language of respect.”

In the following article, Nicholas Wright and Karim Sadjadpour describe how an understanding of neuroscience — or lack of it — may determine the outcome of negotiations with Iran.

The whole piece is worth reading, but keep this in mind: every single insight that gets attributed to neuroscience has been clearly established without the need to conduct a single brain scan. Indeed, everything that is here being attributed to the “exquisite neural machinery” of the brain can be understood by studying the workings of the human mind and how thought shapes behavior.

It is important to draw a sharp distinction between the examination of the mind and observing the workings of the brain because the latter is totally dependent on the output of intermediary electronic scanning devices, whereas minds can study themselves and each other directly and through shared language.

One of the insidious effects of neuroscience is that it promotes a view that understanding the ways brains work has greater intrinsic value than understanding how minds work. What the negotiations with Iran demonstrate, however, is that the exact opposite is true.

To the extent that through the development of trust, negotiations are able to advance, this will have nothing to do with anyone’s confidence about what is happening inside anyone’s brain. On the contrary, it will depend on a meeting of minds and mutual understanding. No one will need to understand what is happening in their own or anyone else’s insula cortex, but what will most likely make or break the talks will be whether the Iranians believe they are being treated fairly. The determination of fairness does not depend on the presence or absence of a particular configuration of neural activity but rather on an assessment of reality.

Treat us as equals, Iran’s president said — and that was almost 15 years ago!

Nicholas Wright and Karim Sadjadpour write: “Imagine being told that you cannot do what everyone else is doing,” appealed Iranian Foreign Minister Javad Zarif in a somber YouTube message defending the country’s nuclear program in November. “Would you back down? Would you relent? Or would you stand your ground?”

While only 14 nations, including Iran, enrich uranium (i.e. “what everyone else is doing”), Zarif’s message raises a question at the heart of ongoing talks to implement a final nuclear settlement with Tehran: Why has the Iranian government subjected its population to the most onerous sanctions regime in contemporary history in order to do this? Indeed, it’s estimated that Iran’s antiquated nuclear program needs one year to enrich as much uranium as Europe’s top facility produces in five hours.

To many, the answer is obvious: Iran is seeking a nuclear weapons capability (which it has arguably already attained), if not nuclear weapons. Yet the numerous frameworks used to explain Iranian motivations—including geopolitics, ideology, nationalism, domestic politics, and threat perception—lead analysts to different conclusions. Does Iran want nuclear weapons to dominate the Middle East, or does it simply want the option to defend itself from hostile opponents both near and far? While there’s no single explanation for Tehran’s actions, if there is a common thread that connects these frameworks and may help illuminate Iranian thinking, it is the brain.

Although neuroscience can’t be divorced from culture, history, and geography, there is no Orientalism of the brain: The fundamental biology of social motivations is the same in Tokyo, Tehran, and Tennessee. It anticipates, for instance, how the mind’s natural instinct to reject perceived unfairness can impede similarly innate desires for accommodation, and how fairness can lead to tragedy. It tells us that genuinely conciliatory gestures are more likely and natural than many believe, and how to make our own conciliatory gestures more effective.

Distilled to their essence, nations are led by and comprised of humans, and the success of social animals like humans rests on our ability to control the balance between cooperation and self-interest. The following four lessons from neuroscience may help us understand the obstacles that were surmounted to reach an interim nuclear deal with Iran, and the enormous challenges that still must be overcome in order to reach a comprehensive agreement. [Continue reading...]

How we feel at home

Moheb Costandi writes: Home is more than a place on a map. It evokes a particular set of feelings, and a sense of safety and belonging. Location, memories, and emotions are intertwined within those walls. Over the past few decades, this sentiment has gained solid scientific grounding. And earlier this year, researchers identified some of the cells that help encode our multifaceted homes in the human brain.

In the early 1970s, neuroscientist John O’Keefe of University College London and his colleagues began to uncover the brain mechanisms responsible for navigating space. They monitored the electrical activity of neurons within a part of the rat brain called the hippocampus. As the animals moved around an enclosure with electrodes implanted in their hippocampus, specific neurons fired in response to particular locations. These neurons, which came to be known as place cells, each had a unique “place field” where it fired: For example, neuron A might be active when the rat was in the far right corner, near the edge of the enclosure, while neuron B fired when the rat was in the opposite corner.

Since then, further experiments have shown that the hippocampus and its neighboring regions contain at least two other types of brain cells involved in navigation. Grid cells fire periodically as an animal traverses a space, and head direction cells fire when the animal faces a certain direction. Together, place cells, grid cells, and head direction cells form the brain’s GPS, mapping the space around an animal and its location within it.
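
A common way to picture a place cell is as a unit whose firing rate peaks when the animal is at the center of its place field and falls off with distance from it. The sketch below uses a standard simplified model (a Gaussian tuning curve over a 1 m x 1 m enclosure); it is illustrative only, not code or parameters from the studies described here.

```python
import numpy as np

# Simplified place-cell model: firing rate is a Gaussian bump centered on
# the cell's preferred location ("place field") inside a 1 m x 1 m enclosure.

def place_cell_rate(pos, field_center, peak_hz=20.0, field_width=0.15):
    """Firing rate (Hz) as a function of the animal's 2D position (meters)."""
    d2 = np.sum((np.asarray(pos) - np.asarray(field_center)) ** 2)
    return peak_hz * np.exp(-d2 / (2 * field_width ** 2))

cell_a = (0.9, 0.9)   # place field near the far right corner
cell_b = (0.1, 0.1)   # place field near the opposite corner

for pos in [(0.9, 0.85), (0.5, 0.5), (0.1, 0.15)]:
    print(pos,
          f"cell A: {place_cell_rate(pos, cell_a):5.1f} Hz,",
          f"cell B: {place_cell_rate(pos, cell_b):5.1f} Hz")
```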

Neuroscientists assumed that these three types of cells in the hippocampus are how we humans, too, navigate our surroundings. But solid evidence of these cell types came only recently, when a research team implanted electrodes into the brains of epilepsy patients being evaluated before surgery. They measured the activity of neurons in the hippocampus while the patients navigated a computer-generated environment, and found that some of the cells fired at regular intervals, as grid cells in rodents did. The authors of the study, published last August, conclude that the mechanisms of spatial navigation in mice and humans are likely the same. [Continue reading...]
