Category Archives: Neuroscience

Neurological conductors that keep the brain in time and tune

Harvard Gazette: Like musical sounds, different states of mind are defined by distinct, characteristic waveforms, recognizable frequencies and rhythms in the brain’s electrical field. When the brain is alert and performing complex computations, the cerebral cortex — the wrinkled outer surface of the brain — thrums with cortical oscillations in the gamma frequency band. In some neurological disorders like schizophrenia, however, these waves are out of tune and the rhythm is out of sync.

New research led by Harvard Medical School (HMS) scientists at the VA Boston Healthcare System (VABHS) has identified a specific class of neurons — basal forebrain GABA parvalbumin neurons, or PV neurons — that trigger these waves, acting as neurological conductors that cue the cortex to hum rhythmically and in tune. (GABA is gamma-aminobutyric acid, a major neurotransmitter in the brain.)

The results appear this week in the journal Proceedings of the National Academy of Sciences.

“This is a move toward a unified theory of consciousness control,” said co-senior author Robert McCarley, HMS professor of psychiatry and head of the Department of Psychiatry at VA Boston Healthcare. “We’ve known that the basal forebrain is important in turning consciousness on and off in sleep and wake, but now we’ve found that these specific cells also play a key role in triggering the synchronized rhythms that characterize conscious thought, perception, and problem-solving.” [Continue reading…]

Music permeates our brain

Jonathan Berger writes: Neurological research has shown that vivid musical hallucinations are more than metaphorical. They don’t just feel real, they are, from a cognitive perspective, entirely real. In the absence of sound waves, brain activation is strikingly similar to that triggered by externally produced sounds. Why should that be?

Music, repetitive and patterned by nature, provides structure within which we find anchors, context, and a basis for organizing time. In the prehistory of civilization, humans likely found comfort in the audible patterns and structures that accompanied their circadian rhythms — from the coo of a mourning dove to the nocturnal chirps of crickets. With the evolution of music, a more malleable framework for segmenting and structuring time developed. Humans generated predictable and replicable temporal patterns by drumming, vocalizing, blowing, and plucking. This metered, temporal framework provides an internal world in which we construct predictions about the future — what will happen next, and when it will happen.

This predictive process puts the spotlight on the brain itself. The composer Karlheinz Stockhausen hyphenated the term for his craft to underscore the literal meaning of “com-pose” — to put together elements, from com (“with” or “together”) and pose (“put” or “place”). When we imagine music, we literally compose — sometimes recognizable tunes, other times novel combinations of patterns and musical ideas. Toddlers sing themselves to sleep with vocalizations of musical snippets they are conjuring up in their imagination. Typically, these “spontaneous melodies,” as they are referred to by child psychologists, comprise fragments of salient features of multiple songs that the child is piecing together. In short, we do not merely retrieve music that we store in memory. Rather, a supremely complex web of associations can be stirred and generated as we compose music in our minds.

Today, amid widely disseminated music, we are barraged by a cacophony of disparate musical patterns — more often than not uninvited and unwanted — and likely spend more time than ever obsessing over imagined musical fragments. The brain is a composer whose music orchestrates our lives. And right now the brain is working overtime. [Continue reading…]

Meet Walter Pitts, the homeless genius who revolutionized artificial intelligence

Amanda Gefter writes: Walter Pitts was used to being bullied. He’d been born into a tough family in Prohibition-era Detroit, where his father, a boiler-maker, had no trouble raising his fists to get his way. The neighborhood boys weren’t much better. One afternoon in 1935, they chased him through the streets until he ducked into the local library to hide. The library was familiar ground, where he had taught himself Greek, Latin, logic, and mathematics—better than home, where his father insisted he drop out of school and go to work. Outside, the world was messy. Inside, it all made sense.

Not wanting to risk another run-in that night, Pitts stayed hidden until the library closed for the evening. Alone, he wandered through the stacks of books until he came across Principia Mathematica, a three-volume tome written by Bertrand Russell and Alfred North Whitehead between 1910 and 1913, which attempted to reduce all of mathematics to pure logic. Pitts sat down and began to read. For three days he remained in the library until he had read each volume cover to cover — nearly 2,000 pages in all — and had identified several mistakes. Deciding that Bertrand Russell himself needed to know about these, the boy drafted a letter to Russell detailing the errors. Not only did Russell write back, he was so impressed that he invited Pitts to study with him as a graduate student at Cambridge University in England. Pitts couldn’t oblige him, though — he was only 12 years old. But three years later, when he heard that Russell would be visiting the University of Chicago, the 15-year-old ran away from home and headed for Illinois. He never saw his family again. [Continue reading…]

A universal logic of discernment

Natalie Wolchover writes: When in 2012 a computer learned to recognize cats in YouTube videos and just last month another correctly captioned a photo of “a group of young people playing a game of Frisbee,” artificial intelligence researchers hailed yet more triumphs in “deep learning,” the wildly successful set of algorithms loosely modeled on the way brains grow sensitive to features of the real world simply through exposure.

Using the latest deep-learning protocols, computer models consisting of networks of artificial neurons are becoming increasingly adept at image, speech and pattern recognition — core technologies in robotic personal assistants, complex data analysis and self-driving cars. But for all their progress training computers to pick out salient features from other, irrelevant bits of data, researchers have never fully understood why the algorithms or biological learning work.

Now, two physicists have shown that one form of deep learning works exactly like one of the most important and ubiquitous mathematical techniques in physics, a procedure for calculating the large-scale behavior of physical systems such as elementary particles, fluids and the cosmos.

The new work, completed by Pankaj Mehta of Boston University and David Schwab of Northwestern University, demonstrates that a statistical technique called “renormalization,” which allows physicists to accurately describe systems without knowing the exact state of all their component parts, also enables the artificial neural networks to categorize data as, say, “a cat” regardless of its color, size or posture in a given video.
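
As a loose illustration of what “renormalization” does (and not the paper’s exact variational construction), the sketch below coarse-grains a toy lattice of ±1 “spins” by majority rule, discarding fine detail while keeping large-scale structure; in Mehta and Schwab’s mapping, each layer of a deep network plays an analogous summarizing role. Python with numpy is assumed, and everything here is schematic.

```python
# A minimal sketch of real-space "block-spin" renormalization: summarize each
# small block of a spin lattice by its majority value. This is the textbook
# coarse-graining move, offered only as an analogy to how successive network
# layers keep relevant features and drop irrelevant detail.
import numpy as np

rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(64, 64))     # toy configuration of +/-1 "spins"

def block_spin(lattice, b=2):
    """Coarse-grain by majority rule over non-overlapping b x b blocks."""
    n, m = lattice.shape[0] // b, lattice.shape[1] // b
    blocks = lattice.reshape(n, b, m, b).sum(axis=(1, 3))
    return np.where(blocks >= 0, 1, -1)        # ties broken toward +1

coarse = block_spin(spins)      # 32 x 32 summary that keeps large-scale structure
coarser = block_spin(coarse)    # 16 x 16, loosely analogous to a deeper layer
print(spins.shape, coarse.shape, coarser.shape)
```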

“They actually wrote down on paper, with exact proofs, something that people only dreamed existed,” said Ilya Nemenman, a biophysicist at Emory University. “Extracting relevant features in the context of statistical physics and extracting relevant features in the context of deep learning are not just similar words, they are one and the same.”

As for our own remarkable knack for spotting a cat in the bushes, a familiar face in a crowd or indeed any object amid the swirl of color, texture and sound that surrounds us, strong similarities between deep learning and biological learning suggest that the brain may also employ a form of renormalization to make sense of the world. [Continue reading…]

Mirror neurons may reveal more about neurons than they do about people

Jason G. Goldman writes: In his 2011 book, The Tell-Tale Brain, neuroscientist V. S. Ramachandran says that some of the cells in your brain are of a special variety. He calls them the “neurons that built civilization,” but you might know them as mirror neurons. They’ve been implicated in just about everything from the development of empathy in earlier primates, millions of years ago, to the emergence of complex culture in our species.

Ramachandran says that mirror neurons help explain the things that make us so apparently unique: tool use, cooking with fire, using complex linguistics to communicate.

It’s an inherently seductive idea: that one small tweak to a particular set of brain cells could have transformed an early primate into something that was somehow more. Indeed, experimental psychologist Cecilia Heyes wrote in 2010 (pdf), “[mirror neurons] intrigue both specialists and non-specialists, celebrated as a ‘revolution’ in understanding social behaviour and ‘the driving force’ behind ‘the great leap forward’ in human evolution.”

The story of mirror neurons begins in the 1990s at the University of Parma in Italy. A group of neuroscientists were studying rhesus monkeys by implanting small electrodes in their brains, and they found that some cells exhibited a curious kind of behavior. They fired both when the monkey executed a movement, such as grasping a banana, and also when the monkey watched the experimenter execute that very same movement.

It was immediately an exciting find. These neurons were located in a part of the brain thought solely responsible for sending motor commands out from the brain, through the brainstem to the spine, and out to the nerves that control the body’s muscles. This finding suggested that they’re not just used for executing actions, but are somehow involved in understanding the observed actions of others.

After that came a flood of research connecting mirror neurons to the development of empathy, autism, language, tool use, fire, and more. Psychologist and science writer Christian Jarrett has twice referred to mirror neurons as “the most hyped concept in neuroscience.” Is he right? Where does empirical evidence end and overheated speculation begin? [Continue reading…]

Cognitive disinhibition: the kernel of genius and madness

Dean Keith Simonton writes: When John Forbes Nash, the Nobel Prize-winning mathematician, schizophrenic, and paranoid delusional, was asked how he could believe that space aliens had recruited him to save the world, he gave a simple response. “Because the ideas I had about supernatural beings came to me the same way that my mathematical ideas did. So I took them seriously.”

Nash is hardly the only so-called mad genius in history. Suicide victims like painters Vincent van Gogh and Mark Rothko, novelists Virginia Woolf and Ernest Hemingway, and poets Anne Sexton and Sylvia Plath all offer prime examples. Even ignoring those great creators who did not kill themselves in a fit of deep depression, it remains easy to list persons who endured well-documented psychopathology, including the composer Robert Schumann, the poet Emily Dickinson, and Nash. Creative geniuses who have succumbed to alcoholism or other addictions are also legion.

Instances such as these have led many to suppose that creativity and psychopathology are intimately related. Indeed, the notion that creative genius might have some touch of madness goes back to Plato and Aristotle. But some recent psychologists argue that the whole idea is a pure hoax. After all, it is certainly no problem to come up with the names of creative geniuses who seem to have displayed no signs or symptoms of mental illness.

Opponents of the mad genius idea can also point to two solid facts. First, the number of creative geniuses in the entire history of human civilization is very large. Thus, even if these people were actually less prone to psychopathology than the average person, the number with mental illness could still be extremely large. Second, the permanent inhabitants of mental asylums do not usually produce creative masterworks. The closest exception that anyone might imagine is the notorious Marquis de Sade. Even in his case, his greatest (or rather most sadistic) works were written while he was imprisoned as a criminal rather than institutionalized as a lunatic.

So should we believe that creative genius is connected with madness or not? Modern empirical research suggests that we should because it has pinpointed the connection between madness and creativity clearly. The most important process underlying strokes of creative genius is cognitive disinhibition — the tendency to pay attention to things that normally should be ignored or filtered out by attention because they appear irrelevant. [Continue reading…]

How we use memory to look at the future

Virginia Hughes writes: Over the past few decades, researchers have worked to uncover the details of how the brain organizes memories. Much remains a mystery, but scientists have identified a key event: the formation of an intense brain wave called a “sharp-wave ripple” (SWR). This process is the brain’s version of an instant replay — a sped-up version of the neural activity that occurred during a recent experience. These ripples are a strikingly synchronous neural symphony, the product of tens of thousands of cells firing over just 100 milliseconds. Any more activity than that could trigger a seizure.
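
For readers curious how such brief, highly synchronous events are typically pulled out of a recording, here is a minimal sketch of a common ripple-detection recipe: band-pass filter the signal around the ripple frequencies, take its envelope, and flag short excursions above a threshold. The sampling rate, band edges, threshold, and the simulated trace are all illustrative assumptions, not the procedure of the studies described here.

```python
# A generic sketch of ripple-band event detection in a local field potential.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1500                                  # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
lfp = np.random.randn(t.size)              # stand-in for a recorded LFP trace

# Band-pass around a typical ripple band (~150-250 Hz), then take the envelope.
b, a = butter(3, [150, 250], btype="bandpass", fs=fs)
ripple_band = filtfilt(b, a, lfp)
envelope = np.abs(hilbert(ripple_band))
z = (envelope - envelope.mean()) / envelope.std()

above = z > 3                              # candidate ripple samples
print(f"{above.mean() * 100:.2f}% of samples exceed the threshold")
```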

Now researchers have begun to realize that SWRs may be involved in much more than memory formation. Recently, a slew of high-profile rodent studies have suggested that the brain uses SWRs to anticipate future events. A recent experiment, for example, finds that SWRs connect to activity in the prefrontal cortex, a region at the front of the brain that is involved in planning for the future.

Studies such as this one have begun to illuminate the complex relationship between memory and the decision-making process. Until a few years ago, most studies on SWRs focused only on their role in creating and consolidating memories, said Loren Frank, a neuroscientist at the University of California, San Francisco. “None of them really dealt with this issue of: How does the animal actually pull [the memory] back up again? How does it actually use this to figure out what to do?” [Continue reading…]

The healing power of silence

Daniel A. Gross writes: One icy night in March 2010, 100 marketing experts piled into the Sea Horse Restaurant in Helsinki, with the modest goal of making a remote and medium-sized country a world-famous tourist destination. The problem was that Finland was known as a rather quiet country, and since 2008, the Country Brand Delegation had been looking for a national brand that would make some noise.

Over drinks at the Sea Horse, the experts puzzled over the various strengths of their nation. Here was a country with exceptional teachers, an abundance of wild berries and mushrooms, and a vibrant cultural capital the size of Nashville, Tennessee. These things fell a bit short of a compelling national identity. Someone jokingly suggested that nudity could be named a national theme — it would emphasize the honesty of Finns. Someone else, less jokingly, proposed that perhaps quiet wasn’t such a bad thing. That got them thinking.

A few months later, the delegation issued a slick “Country Brand Report.” It highlighted a host of marketable themes, including Finland’s renowned educational system and school of functional design. One key theme was brand new: silence. As the report explained, modern society often seems intolerably loud and busy. “Silence is a resource,” it said. It could be marketed just like clean water or wild mushrooms. “In the future, people will be prepared to pay for the experience of silence.”

People already do. In a loud world, silence sells. Noise-canceling headphones retail for hundreds of dollars; the cost of some weeklong silent meditation courses can run into the thousands. Finland saw that it was possible to quite literally make something out of nothing.

In 2011, the Finnish Tourist Board released a series of photographs of lone figures in the wilderness, with the caption “Silence, Please.” An international “country branding” consultant, Simon Anholt, proposed the playful tagline “No talking, but action.” And a Finnish watch company, Rönkkö, launched its own new slogan: “Handmade in Finnish silence.”

“We decided, instead of saying that it’s really empty and really quiet and nobody is talking about anything here, let’s embrace it and make it a good thing,” explains Eva Kiviranta, who manages social media for VisitFinland.com.

Silence is a peculiar starting point for a marketing campaign. After all, you can’t weigh, record, or export it. You can’t eat it, collect it, or give it away. The Finland campaign raises the question of just what the tangible effects of silence really are. Science has begun to pipe up on the subject. In recent years researchers have highlighted the peculiar power of silence to calm our bodies, turn up the volume on our inner thoughts, and attune our connection to the world. Their findings begin where we might expect: with noise.

The word “noise” comes from a Latin root meaning either queasiness or pain. According to the historian Hillel Schwartz, there’s even a Mesopotamian legend in which the gods grow so angry at the clamor of earthly humans that they go on a killing spree. (City-dwellers with loud neighbors may empathize, though hopefully not too closely.)

Dislike of noise has produced some of history’s most eager advocates of silence, as Schwartz explains in his book Making Noise: From Babel to the Big Bang and Beyond. In 1859, the British nurse and social reformer Florence Nightingale wrote, “Unnecessary noise is the most cruel absence of care that can be inflicted on sick or well.” Every careless clatter or banal bit of banter, Nightingale argued, can be a source of alarm, distress, and loss of sleep for recovering patients. She even quoted a lecture that identified “sudden noises” as a cause of death among sick children. [Continue reading…]

What Shakespeare can teach science about language and the limits of the human mind

Jillian Hinchliffe and Seth Frey write: Although [Stephen] Booth is now retired [from the University of California, Berkeley], his work [on Shakespeare] couldn’t be more relevant. In the study of the human mind, old disciplinary boundaries have begun to dissolve and fruitful new relationships between the sciences and humanities have sprung up in their place. When it comes to the cognitive science of language, Booth may be the most prescient literary critic who ever put pen to paper. In his fieldwork in poetic experience, he unwittingly anticipated several language-processing phenomena that cognitive scientists have only recently begun to study. Booth’s work not only provides one of the most original and penetrating looks into the nature of Shakespeare’s genius, it has profound implications for understanding the processes that shape how we think.

Until the early decades of the 20th century, Shakespeare criticism fell primarily into two areas: textual, which grapples with the numerous variants of published works in order to produce an edition as close as possible to the original, and biographical. Scholarship took a more political turn beginning in the 1960s, providing new perspectives from various strains of feminist, Marxist, structuralist, and queer theory. Booth is resolutely dismissive of most of these modes of study. What he cares about is poetics. Specifically, how poetic language operates on and in audiences of a literary work.

Close reading, the school that flourished mid-century and with which Booth’s work is most nearly affiliated, has never gone completely out of style. But Booth’s approach is even more minute—microscopic reading, according to fellow Shakespeare scholar Russ McDonald. And as the microscope opens up new worlds, so does Booth’s critical lens. What makes him radically different from his predecessors is that he doesn’t try to resolve or collapse his readings into any single interpretation. That people are so hung up on interpretation, on meaning, Booth maintains, is “no more than habit.” Instead, he revels in the uncertainty caused by the myriad currents of phonetic, semantic, and ideational patterns at play. [Continue reading…]

Don’t overestimate your untapped brain power

Nathalia Gjersoe writes: Luc Besson’s latest sci-fi romp, Lucy, is based on the premise that the average person only uses 10% of their brain. This brain myth has been fodder for books and movies for decades and is a tantalizing plot device. Alarmingly, however, it seems to be widely accepted as fact. Of those asked, 48% of teachers in the UK, 65% of Americans and 30% of American psychology students endorsed the myth.

In the movie, Lucy absorbs vast quantities of a nootropic that triggers rampant production of new connections between her neurons. As her brain becomes more and more densely connected, Lucy experiences omniscience, omnipotence and omnipresence. Telepathy, telekinesis and time-travel all become possible.

It’s true that increased connectivity between neurons is associated with greater expertise. Musicians who train for years have greater connectivity and activation of those regions of the brain that control their finger movements and those that bind sensory and motor information. This is the first principle of neural connectivity: cells that fire together wire together.
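
The “fire together, wire together” rule can be written down in a line or two. The toy sketch below is an assumption-laden illustration of that Hebbian idea, not a model of musical training: a connection weight grows whenever a presynaptic input and the postsynaptic cell happen to be active at the same time. Python with numpy is assumed.

```python
# A toy Hebbian update: weights from inputs that reliably co-occur with the
# postsynaptic cell's activity end up larger than weights from unrelated inputs.
import numpy as np

rng = np.random.default_rng(1)
pre = rng.integers(0, 2, size=(1000, 20))           # 0/1 activity of 20 presynaptic cells
post = (pre[:, :5].sum(axis=1) > 2).astype(float)   # post cell driven by the first 5 inputs

w = np.zeros(20)
eta = 0.01                                           # learning rate
for x, y in zip(pre, post):
    w += eta * x * y                                 # strengthen only co-active connections

print(np.round(w[:5].mean(), 2), np.round(w[5:].mean(), 2))  # driven inputs grow larger weights
```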

But resources are limited and the brain is incredibly hungry. It takes a huge amount of energy just to keep it electrically ticking over. There is an excellent TEDEd animation here that explains this nicely. The human adult brain makes up only 2% of the body’s mass yet uses 20% of energy intake. Babies’ brains use 60%! Evolution would necessarily cull any redundant parts of such an expensive organ. [Continue reading…]

The orchestration of attention

The New Yorker: Every moment, our brains are bombarded with information, from without and within. The eyes alone convey more than a hundred billion signals to the brain every second. The ears receive another avalanche of sounds. Then there are the fragments of thoughts, conscious and unconscious, that race from one neuron to the next. Much of this data seems random and meaningless. Indeed, for us to function, much of it must be ignored. But clearly not all. How do our brains select the relevant data? How do we decide to pay attention to the turn of a doorknob and ignore the drip of a leaky faucet? How do we become conscious of a certain stimulus, or indeed “conscious” at all?

For decades, philosophers and scientists have debated the process by which we pay attention to things, based on cognitive models of the mind. But, in the view of many modern psychologists and neurobiologists, the “mind” is not some nonmaterial and exotic essence separate from the body. All questions about the mind must ultimately be answered by studies of physical cells, explained in terms of the detailed workings of the more than eighty billion neurons in the brain. At this level, the question is: How do neurons signal to one another and to a cognitive command center that they have something important to say?

“Years ago, we were satisfied to know which areas of the brain light up under various stimuli,” the neuroscientist Robert Desimone told me during a recent visit to his office. “Now we want to know mechanisms.” Desimone directs the McGovern Institute for Brain Research at the Massachusetts Institute of Technology; youthful and trim at the age of sixty-two, he was dressed casually, in a blue pinstripe shirt, and had only the slightest gray in his hair. On the bookshelf of his tidy office were photographs of his two young children; on the wall was a large watercolor titled “Neural Gardens,” depicting a forest of tangled neurons, their spindly axons and dendrites wending downward like roots in rich soil.

Earlier this year, in an article published in the journal Science, Desimone and his colleague Daniel Baldauf reported on an experiment that shed light on the physical mechanism of paying attention. The researchers presented a series of two kinds of images — faces and houses — to their subjects in rapid succession, like passing frames of a movie, and asked them to concentrate on the faces but disregard the houses (or vice versa). The images were “tagged” by being presented at two frequencies — a new face every two-thirds of a second, a new house every half second. By monitoring the frequencies of the electrical activity of the subjects’ brains with magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI), Desimone and Baldauf could determine where in the brain the images were being directed.
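
The logic of this frequency “tagging” is straightforward to demonstrate: a signal that follows a stream presented at a given rate carries extra power at that rate, which a spectrum reveals. The sketch below uses the presentation rates described above (roughly 1.5 Hz for faces, 2 Hz for houses) on a simulated trace; it illustrates the principle only and is not an analysis of MEG data. Python with numpy is assumed.

```python
# Frequency tagging in miniature: find which presentation rate dominates a signal.
import numpy as np

fs = 200                                   # sampling rate in Hz (assumed)
t = np.arange(0, 30, 1 / fs)               # 30 seconds of simulated "recording"
face_tag, house_tag = 1.5, 2.0             # presentation rates in Hz

# Pretend this sensor mostly follows the attended face stream, plus noise.
signal = (1.0 * np.sin(2 * np.pi * face_tag * t)
          + 0.2 * np.sin(2 * np.pi * house_tag * t)
          + np.random.randn(t.size))

spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)

for label, f in (("face", face_tag), ("house", house_tag)):
    idx = np.argmin(np.abs(freqs - f))     # spectral bin nearest the tag frequency
    print(f"power near {f} Hz ({label} tag): {spectrum[idx]:.0f}")
```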

The scientists found that, even though the two sets of images were presented to the eye almost on top of each other, they were processed by different places in the brain — the face images by a particular region on the surface of the temporal lobe that is known to specialize in facial recognition, and the house images by a neighboring but separate group of neurons specializing in place recognition.

Most importantly, the neurons in the two regions behaved differently. When the subjects were told to concentrate on the faces and to disregard the houses, the neurons in the face location fired in synchrony, like a group of people singing in unison, while the neurons in the house location fired like a group of people singing out of synch, each beginning at a random point in the score. When the subjects concentrated instead on houses, the reverse happened. [Continue reading…]

Brain shrinkage, poor concentration, anxiety, and depression linked to media-multitasking

Simultaneously using mobile phones, laptops and other media devices could be changing the structure of our brains, according to new University of Sussex research.

A study published today (24 September) in PLOS ONE reveals that people who frequently use several media devices at the same time have lower grey-matter density in one particular region of the brain compared to those who use just one device occasionally.

The research supports earlier studies showing connections between high media-multitasking activity and poor attention in the face of distractions, along with emotional problems such as depression and anxiety.

But neuroscientists Kep Kee Loh and Dr Ryota Kanai point out that their study reveals a link rather than causality and that a long-term study needs to be carried out to understand whether high concurrent media usage leads to changes in the brain structure, or whether those with less-dense grey matter are more attracted to media multitasking. [Continue reading…]

We are more rational than those who nudge us

Steven Poole writes: Humanity’s achievements and its self-perception are today at curious odds. We can put autonomous robots on Mars and genetically engineer malarial mosquitoes to be sterile, yet the news from popular psychology, neuroscience, economics and other fields is that we are not as rational as we like to assume. We are prey to a dismaying variety of hard-wired errors. We prefer winning to being right. At best, so the story goes, our faculty of reason is at constant war with an irrational darkness within. At worst, we should abandon the attempt to be rational altogether.

The present climate of distrust in our reasoning capacity draws much of its impetus from the field of behavioural economics, and particularly from work by Daniel Kahneman and Amos Tversky in the 1980s, summarised in Kahneman’s bestselling Thinking, Fast and Slow (2011). There, Kahneman divides the mind into two allegorical systems, the intuitive ‘System 1’, which often gives wrong answers, and the reflective reasoning of ‘System 2’. ‘The attentive System 2 is who we think we are,’ he writes; but it is the intuitive, biased, ‘irrational’ System 1 that is in charge most of the time.

Other versions of the message are expressed in more strongly negative terms. You Are Not So Smart (2011) is a bestselling book by David McRaney on cognitive bias. According to the study ‘Why Do Humans Reason?’ (2011) by the cognitive scientists Hugo Mercier and Dan Sperber, our supposedly rational faculties evolved not to find ‘truth’ but merely to win arguments. And in The Righteous Mind (2012), the psychologist Jonathan Haidt calls the idea that reason is ‘our most noble attribute’ a mere ‘delusion’. The worship of reason, he adds, ‘is an example of faith in something that does not exist’. Your brain, runs the now-prevailing wisdom, is mainly a tangled, damp and contingently cobbled-together knot of cognitive biases and fear.

This is a scientised version of original sin. And its eager adoption by today’s governments threatens social consequences that many might find troubling. A culture that believes its citizens are not reliably competent thinkers will treat those citizens differently to one that respects their reflective autonomy. Which kind of culture do we want to be? And we do have a choice. Because it turns out that the modern vision of compromised rationality is more open to challenge than many of its followers accept. [Continue reading…]

Your brain on metaphors

Michael Chorost writes:

The player kicked the ball.
The patient kicked the habit.
The villain kicked the bucket.

The verbs are the same. The syntax is identical. Does the brain notice, or care, that the first is literal, the second metaphorical, the third idiomatic?

It sounds like a question that only a linguist could love. But neuroscientists have been trying to answer it using exotic brain-scanning technologies. Their findings have varied wildly, in some cases contradicting one another. If they make progress, the payoff will be big. Their findings will enrich a theory that aims to explain how wet masses of neurons can understand anything at all. And they may drive a stake into the widespread assumption that computers will inevitably become conscious in a humanlike way.

The hypothesis driving their work is that metaphor is central to language. Metaphor used to be thought of as merely poetic ornamentation, aesthetically pretty but otherwise irrelevant. “Love is a rose, but you better not pick it,” sang Neil Young in 1977, riffing on the timeworn comparison between a sexual partner and a pollinating perennial. For centuries, metaphor was just the place where poets went to show off.

But in their 1980 book, Metaphors We Live By, the linguist George Lakoff (at the University of California at Berkeley) and the philosopher Mark Johnson (now at the University of Oregon) revolutionized linguistics by showing that metaphor is actually a fundamental constituent of language. For example, they showed that in the seemingly literal statement “He’s out of sight,” the visual field is metaphorized as a container that holds things. The visual field isn’t really a container, of course; one simply sees objects or not. But the container metaphor is so ubiquitous that it wasn’t even recognized as a metaphor until Lakoff and Johnson pointed it out.

From such examples they argued that ordinary language is saturated with metaphors. Our eyes point to where we’re going, so we tend to speak of future time as being “ahead” of us. When things increase, they tend to go up relative to us, so we tend to speak of stocks “rising” instead of getting more expensive. “Our ordinary conceptual system is fundamentally metaphorical in nature,” they wrote. [Continue reading…]

Humans are wired for bad news

Jacob Burak writes: I have good news and bad news. Which would you like first? If it’s bad news, you’re in good company – that’s what most people pick. But why?

Negative events affect us more than positive ones. We remember them more vividly and they play a larger role in shaping our lives. Farewells, accidents, bad parenting, financial losses and even a random snide comment take up most of our psychic space, leaving little room for compliments or pleasant experiences to help us along life’s challenging path. The staggering human ability to adapt ensures that joy over a salary hike will abate within months, leaving only a benchmark for future raises. We feel pain, but not the absence of it.

Hundreds of scientific studies from around the world confirm our negativity bias: while a good day has no lasting effect on the following day, a bad day carries over. We process negative data faster and more thoroughly than positive data, and they affect us longer. Socially, we invest more in avoiding a bad reputation than in building a good one. Emotionally, we go to greater lengths to avoid a bad mood than to experience a good one. Pessimists tend to assess their health more accurately than optimists. In our era of political correctness, negative remarks stand out and seem more authentic. People – even babies as young as six months old – are quick to spot an angry face in a crowd, but slower to pick out a happy one; in fact, no matter how many smiles we see in that crowd, we will always spot the angry face first. [Continue reading…]

Study reveals rats show regret, a cognitive behavior once thought to be uniquely human

EurekAlert!: New research from the Department of Neuroscience at the University of Minnesota reveals that rats show regret, a cognitive behavior once thought to be uniquely and fundamentally human.

Research findings were recently published in Nature Neuroscience.

To measure the cognitive behavior of regret, A. David Redish, Ph.D., a professor of neuroscience in the University of Minnesota Department of Neuroscience, and Adam Steiner, a graduate student in the Graduate Program in Neuroscience, who led the study, started from the definitions of regret that economists and psychologists have identified in the past.

“Regret is the recognition that you made a mistake, that if you had done something else, you would have been better off,” said Redish. “The difficult part of this study was separating regret from disappointment, which is when things aren’t as good as you would have hoped. The key to distinguishing between the two was letting the rats choose what to do.” [Continue reading…]

The boundaries delineating what is taken to be uniquely human are constantly being challenged by new scientific findings. But it’s worth asking why those boundaries were there in the first place.

Surely the scientific approach when investigating a cognitive state such as regret would be to start out without making any suppositions about what non-humans do or don’t experience.

The idea that there is something uniquely human about regret seems like a vestige of biblically inspired notions of human uniqueness.

That as humans we might be unaware of the regrets of rats says much less about what rats are capable of experiencing than it says about our capacity to imagine non-human experience.

Yet, rationally at least, it seems no great leap to assume that any creature that makes choices will also experience something resembling regret.

A cat learning to hunt surely feels something when it makes a premature strike, having yet to master the right balance between stalking and attacking its prey. That feeling is most likely some form of discomfort that spurs learning. The cat has no names for its feelings yet feels them nonetheless.

That animals lack some of the means through which humans convey their own feelings says much more about our powers of description than about their capacity to feel.

Cynicism is toxic

Cynics fool themselves by thinking they can’t be fooled.

The cynic imagines he’s guarding himself against being duped. He’s not naive, he’s worldly wise, so he’s not about to get taken in — but this psychic insulation comes at a price.

The cynic is cautious and mistrustful. Worst of all, the cynic, by relying too much on his own counsel, saps the foundation of curiosity: the ability to be surprised.

While the ability to develop and sustain an open mind has obvious psychological value, researchers now suggest that it may also matter for the health of the brain: cynical distrust has been linked to dementia.

One of the researchers in a new study suggests that the latest findings may offer insights on how to reduce the risks of dementia, yet that seems to imply that people might be less inclined to become cynical simply by knowing that it’s bad for their health. How are we to reduce the risks of becoming cynical in the first place?

One of the most disturbing findings of a recent Pew Research Center survey, Millennials in Adulthood, was this:

In response to a long-standing social science survey question, “Generally speaking, would you say that most people can be trusted or that you can’t be too careful in dealing with people,” just 19% of Millennials say most people can be trusted, compared with 31% of Gen Xers, 37% of Silents and 40% of Boomers.

While this trust deficit among Millennials no doubt has multiple causes, such as the socially fragmented nature of our digital world, I don’t believe that there has ever before been a generation so thoroughly trained in fear. Beneath cynicism lurks fear.

The fear may have calmed greatly since the days of post-9/11 hysteria, yet it has not gone away. It’s the background noise of American life. It might no longer be focused so strongly on terrorism, since there are plenty of other reasons to fear — some baseless, some over-stated, and some underestimated. But the aggregation of all these fears produces a pervasive mistrust of life.

ScienceDaily: People with high levels of cynical distrust may be more likely to develop dementia, according to a study published in the May 28, 2014, online issue of Neurology®, the medical journal of the American Academy of Neurology.

Cynical distrust, which is defined as the belief that others are mainly motivated by selfish concerns, has been associated with other health problems, such as heart disease. This is the first study to look at the relationship between cynicism and dementia.

“These results add to the evidence that people’s view on life and personality may have an impact on their health,” said study author Anna-Maija Tolppanen, PhD, of the University of Eastern Finland in Kuopio. “Understanding how a personality trait like cynicism affects risk for dementia might provide us with important insights on how to reduce risks for dementia.”

For the study, 1,449 people with an average age of 71 were given tests for dementia and a questionnaire to measure their level of cynicism. The questionnaire has been shown to be reliable, and people’s scores tend to remain stable over periods of several years. People are asked how much they agree with statements such as “I think most people would lie to get ahead,” “It is safer to trust nobody” and “Most people will use somewhat unfair reasons to gain profit or an advantage rather than lose it.” Based on their scores, participants were grouped in low, moderate and high levels of cynical distrust.

A total of 622 people completed two tests for dementia, with the last one an average of eight years after the study started. During that time, 46 people were diagnosed with dementia. Once researchers adjusted for other factors that could affect dementia risk, such as high blood pressure, high cholesterol and smoking, people with high levels of cynical distrust were three times more likely to develop dementia than people with low levels of cynicism. Of the 164 people with high levels of cynicism, 14 people developed dementia, compared to nine of the 212 people with low levels of cynicism.
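
For readers who want to check the raw counts quoted above: the unadjusted proportions work out to roughly 8.5 percent versus 4.2 percent, a crude ratio of about two. The “three times more likely” figure is the estimate after the researchers’ statistical adjustments, which the counts alone cannot reproduce.

```python
# Crude (unadjusted) incidence from the reported counts; illustrative arithmetic only.
high_cases, high_n = 14, 164    # high cynical distrust
low_cases, low_n = 9, 212       # low cynical distrust

high_risk = high_cases / high_n          # about 0.085
low_risk = low_cases / low_n             # about 0.042
print(f"crude risk ratio: {high_risk / low_risk:.2f}")   # roughly 2.0
```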

The study also looked at whether people with high levels of cynicism were more likely to die sooner than people with low levels of cynicism. A total of 1,146 people were included in this part of the analysis, and 361 people died during the average of 10 years of follow-up. High cynicism was initially associated with earlier death, but after researchers accounted for factors such as socioeconomic status, behaviors such as smoking and health status, there was no longer any link between cynicism and earlier death.
