A deficit in patience produces the illusion of a shortage of time

Chelsea Wald writes: Not long ago I diagnosed myself with the recently identified condition of sidewalk rage. It’s most pronounced when it comes to a certain friend who is a slow walker. Last month, as we sashayed our way to dinner, I found myself biting my tongue, thinking, I have to stop going places with her if I ever want to … get there!

You too can measure yourself on the “Pedestrian Aggressiveness Syndrome Scale,” a tool developed by University of Hawaii psychologist Leon James. While walking in a crowd, do you find yourself “acting in a hostile manner (staring, presenting a mean face, moving closer or faster than expected)” and “enjoying thoughts of violence?”

Slowness rage is not confined to the sidewalk, of course. Slow drivers, slow Internet, slow grocery lines — they all drive us crazy. Even the opening of this article may be going on a little too long for you. So I’ll get to the point. Slow things drive us crazy because the fast pace of society has warped our sense of timing. Things that our great-great-grandparents would have found miraculously efficient now drive us around the bend. Patience is a virtue that’s been vanquished in the Twitter age.

Once upon a time, cognitive scientists tell us, patience and impatience had an evolutionary purpose. They constituted a yin and yang balance, a finely tuned internal timer that tells us when we’ve waited too long for something and should move on. When that timer buzzed, it was time to stop foraging at an unproductive patch or abandon a failing hunt.

“Why are we impatient? It’s a heritage from our evolution,” says Marc Wittmann, a psychologist at the Institute for Frontier Areas of Psychology and Mental Health in Freiburg, Germany. Impatience made sure we didn’t die from spending too long on a single unrewarding activity. It gave us the impulse to act.

But that good thing is gone. The fast pace of society has thrown our internal timer out of balance. It creates expectations that can’t be rewarded fast enough — or rewarded at all. When things move more slowly than we expect, our internal timer even plays tricks on us, stretching out the wait, summoning anger out of proportion to the delay. [Continue reading…]


Your gut tells your mind more than you may imagine

Charles Schmidt writes: The notion that the state of our gut governs our state of mind dates back more than 100 years. Many 19th- and early 20th-century scientists believed that accumulating wastes in the colon triggered a state of “auto-intoxication,” whereby poisons emanating from the gut produced infections that were in turn linked with depression, anxiety and psychosis. Patients were treated with colonic purges and even bowel surgeries until these practices were dismissed as quackery.

The ongoing exploration of the human microbiome promises to bring the link between the gut and the brain into clearer focus. Scientists are increasingly convinced that the vast assemblage of microfauna in our intestines may have a major impact on our state of mind. The gut-brain axis seems to be bidirectional — the brain acts on gastrointestinal and immune functions that help to shape the gut’s microbial makeup, and gut microbes make neuroactive compounds, including neurotransmitters and metabolites that also act on the brain. These interactions could occur in various ways: microbial compounds communicate via the vagus nerve, which connects the brain and the digestive tract, and microbially derived metabolites interact with the immune system, which maintains its own communication with the brain. Sven Pettersson, a microbiologist at the Karolinska Institute in Stockholm, has recently shown that gut microbes help to control leakage through both the intestinal lining and the blood-brain barrier, which ordinarily protects the brain from potentially harmful agents.

Microbes may have their own evolutionary reasons for communicating with the brain. They need us to be social, says John Cryan, a neuroscientist at University College Cork in Ireland, so that they can spread through the human population. Cryan’s research shows that when bred in sterile conditions, germ-free mice lacking in intestinal microbes also lack an ability to recognize other mice with whom they interact. In other studies, disruptions of the microbiome induced behavior in mice that mimics human anxiety, depression and even autism. In some cases, scientists restored more normal behavior by treating their test subjects with certain strains of benign bacteria. Nearly all the data so far are limited to mice, but Cryan believes the findings provide fertile ground for developing analogous compounds, which he calls psychobiotics, for humans. “That dietary treatments could be used as either adjunct or sole therapy for mood disorders is not beyond the realm of possibility,” he says. [Continue reading…]


Neurological conductors that keep the brain in time and tune

Harvard Gazette: Like musical sounds, different states of mind are defined by distinct, characteristic waveforms, recognizable frequencies and rhythms in the brain’s electrical field. When the brain is alert and performing complex computations, the cerebral cortex — the wrinkled outer surface of the brain — thrums with cortical oscillations in the gamma band. In some neurological disorders like schizophrenia, however, these waves are out of tune and the rhythm is out of sync.

New research led by Harvard Medical School (HMS) scientists at the VA Boston Healthcare System (VABHS) has identified a specific class of neurons — basal forebrain GABA parvalbumin neurons, or PV neurons — that trigger these waves, acting as neurological conductors that lead the cortex to hum rhythmically and in tune. (GABA is gamma-aminobutyric acid, a major neurotransmitter in the brain.)

The results appear this week in the journal Proceedings of the National Academy of Sciences.

“This is a move toward a unified theory of consciousness control,” said co-senior author Robert McCarley, HMS professor of psychiatry and head of the Department of Psychiatry at VA Boston Healthcare. “We’ve known that the basal forebrain is important in turning consciousness on and off in sleep and wake, but now we’ve found that these specific cells also play a key role in triggering the synchronized rhythms that characterize conscious thought, perception, and problem-solving.” [Continue reading…]


Music permeates our brain

Jonathan Berger writes: Neurological research has shown that vivid musical hallucinations are more than metaphorical. They don’t just feel real, they are, from a cognitive perspective, entirely real. In the absence of sound waves, brain activation is strikingly similar to that triggered by external sounds. Why should that be?

Music, repetitive and patterned by nature, provides structure within which we find anchors, context, and a basis for organizing time. In the prehistory of civilization, humans likely found comfort in the audible patterns and structures that accompanied their circadian rhythms — from the coo of a mourning dove to the nocturnal chirps of crickets. With the evolution of music, a more malleable framework for segmenting and structuring time developed. Humans generated predictable and replicable temporal patterns by drumming, vocalizing, blowing, and plucking. This metered, temporal framework provides an internal world in which we construct predictions about the future — what will happen next, and when it will happen.

This process spotlights the brain itself. The composer Karlheinz Stockhausen hyphenated the term for his craft to underscore the literal meaning of “com-pose” — to put together elements, from com (“with” or “together”) and pose (“put” or “place”). When we imagine music, we literally compose — sometimes recognizable tunes, other times novel combinations of patterns and musical ideas. Toddlers sing themselves to sleep with vocalizations of musical snippets they are conjuring up in their imagination. Typically, these “spontaneous melodies,” as they are referred to by child psychologists, comprise fragments of salient features of multiple songs that the baby is piecing together. In short, we do not merely retrieve music that we store in memory. Rather, a supremely complex web of associations can be stirred and generated as we compose music in our minds.

Today, amid widely disseminated music, we are barraged by a cacophony of disparate musical patterns — more often than not uninvited and unwanted — and likely spend more time than ever obsessing over imagined musical fragments. The brain is a composer whose music orchestrates our lives. And right now the brain is working overtime. [Continue reading…]


Meet Walter Pitts, the homeless genius who revolutionized artificial intelligence

Amanda Gefter writes: Walter Pitts was used to being bullied. He’d been born into a tough family in Prohibition-era Detroit, where his father, a boiler-maker, had no trouble raising his fists to get his way. The neighborhood boys weren’t much better. One afternoon in 1935, they chased him through the streets until he ducked into the local library to hide. The library was familiar ground, where he had taught himself Greek, Latin, logic, and mathematics—better than home, where his father insisted he drop out of school and go to work. Outside, the world was messy. Inside, it all made sense.

Not wanting to risk another run-in that night, Pitts stayed hidden until the library closed for the evening. Alone, he wandered through the stacks of books until he came across Principia Mathematica, a three-volume tome written by Bertrand Russell and Alfred North Whitehead between 1910 and 1913, which attempted to reduce all of mathematics to pure logic. Pitts sat down and began to read. For three days he remained in the library until he had read each volume cover to cover — nearly 2,000 pages in all — and had identified several mistakes. Deciding that Bertrand Russell himself needed to know about these, the boy drafted a letter to Russell detailing the errors. Not only did Russell write back, he was so impressed that he invited Pitts to study with him as a graduate student at Cambridge University in England. Pitts couldn’t oblige him, though — he was only 12 years old. But three years later, when he heard that Russell would be visiting the University of Chicago, the 15-year-old ran away from home and headed for Illinois. He never saw his family again. [Continue reading…]



A universal logic of discernment


Natalie Wolchover writes: When in 2012 a computer learned to recognize cats in YouTube videos and just last month another correctly captioned a photo of “a group of young people playing a game of Frisbee,” artificial intelligence researchers hailed yet more triumphs in “deep learning,” the wildly successful set of algorithms loosely modeled on the way brains grow sensitive to features of the real world simply through exposure.

Using the latest deep-learning protocols, computer models consisting of networks of artificial neurons are becoming increasingly adept at image, speech and pattern recognition — core technologies in robotic personal assistants, complex data analysis and self-driving cars. But for all their progress training computers to pick out salient features from other, irrelevant bits of data, researchers have never fully understood why the algorithms, or biological learning itself, work.

Now, two physicists have shown that one form of deep learning works exactly like one of the most important and ubiquitous mathematical techniques in physics, a procedure for calculating the large-scale behavior of physical systems such as elementary particles, fluids and the cosmos.

The new work, completed by Pankaj Mehta of Boston University and David Schwab of Northwestern University, demonstrates that a statistical technique called “renormalization,” which allows physicists to accurately describe systems without knowing the exact state of all their component parts, also enables the artificial neural networks to categorize data as, say, “a cat” regardless of its color, size or posture in a given video.
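
The spirit of that correspondence is easiest to see in the simplest renormalization scheme, block-spin coarse-graining: blocks of neighboring spins are repeatedly replaced by a single summary spin, discarding microscopic detail while preserving large-scale structure, much as successive layers of a deep network keep progressively coarser features. Below is a minimal sketch of one coarse-graining step; it illustrates the general idea only, not the variational mapping onto restricted Boltzmann machines that Mehta and Schwab actually construct.

```python
import numpy as np

def block_spin(lattice, b=2):
    """Coarse-grain a 2D lattice of +/-1 spins by majority vote over b x b blocks."""
    n = lattice.shape[0] // b
    sums = lattice[:n * b, :n * b].reshape(n, b, n, b).sum(axis=(1, 3))
    coarse = np.sign(sums)
    ties = coarse == 0                      # break ties at random so spins stay +/-1
    coarse[ties] = np.random.choice([-1, 1], size=ties.sum())
    return coarse

# A random 16x16 spin configuration, coarse-grained twice: 16 -> 8 -> 4
spins = np.random.choice([-1, 1], size=(16, 16))
for step in (1, 2):
    spins = block_spin(spins)
    print(f"after step {step}: lattice is {spins.shape}")
```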

“They actually wrote down on paper, with exact proofs, something that people only dreamed existed,” said Ilya Nemenman, a biophysicist at Emory University. “Extracting relevant features in the context of statistical physics and extracting relevant features in the context of deep learning are not just similar words, they are one and the same.”

As for our own remarkable knack for spotting a cat in the bushes, a familiar face in a crowd or indeed any object amid the swirl of color, texture and sound that surrounds us, strong similarities between deep learning and biological learning suggest that the brain may also employ a form of renormalization to make sense of the world. [Continue reading…]


Mirror neurons may reveal more about neurons than they do about people

Jason G. Goldman writes: In his 2011 book, The Tell-Tale Brain, neuroscientist V. S. Ramachandran says that some of the cells in your brain are of a special variety. He calls them the “neurons that built civilization,” but you might know them as mirror neurons. They’ve been implicated in just about everything from the development of empathy in earlier primates, millions of years ago, to the emergence of complex culture in our species.

Ramachandran says that mirror neurons help explain the things that make us so apparently unique: tool use, cooking with fire, using complex linguistics to communicate.

It’s an inherently seductive idea: that one small tweak to a particular set of brain cells could have transformed an early primate into something that was somehow more. Indeed, experimental psychologist Cecilia Heyes wrote in 2010, “[mirror neurons] intrigue both specialists and non-specialists, celebrated as a ‘revolution’ in understanding social behaviour and ‘the driving force’ behind ‘the great leap forward’ in human evolution.”

The story of mirror neurons begins in the 1990s at the University of Parma in Italy. A group of neuroscientists were studying rhesus monkeys by implanting small electrodes in their brains, and they found that some cells exhibited a curious kind of behavior. They fired both when the monkey executed a movement, such as grasping a banana, and also when the monkey watched the experimenter execute that very same movement.

It was immediately an exciting find. These neurons were located in a part of the brain thought solely responsible for sending motor commands out from the brain, through the brainstem to the spine, and out to the nerves that control the body’s muscles. This finding suggested that they’re not just used for executing actions, but are somehow involved in understanding the observed actions of others.

After that came a flood of research connecting mirror neurons to the development of empathy, autism, language, tool use, fire, and more. Psychologist and science writer Christian Jarrett has twice referred to mirror neurons as “the most hyped concept in neuroscience.” Is he right? Where does empirical evidence end and overheated speculation begin? [Continue reading…]


Cognitive disinhibition: the kernel of genius and madness

Dean Keith Simonton writes: When John Forbes Nash, the Nobel Prize-winning mathematician, schizophrenic, and paranoid delusional, was asked how he could believe that space aliens had recruited him to save the world, he gave a simple response. “Because the ideas I had about supernatural beings came to me the same way that my mathematical ideas did. So I took them seriously.”

Nash is hardly the only so-called mad genius in history. Suicide victims like painters Vincent van Gogh and Mark Rothko, novelists Virginia Woolf and Ernest Hemingway, and poets Anne Sexton and Sylvia Plath all offer prime examples. Even ignoring those great creators who did not kill themselves in a fit of deep depression, it remains easy to list persons who endured well-documented psychopathology, including the composer Robert Schumann, the poet Emily Dickinson, and Nash. Creative geniuses who have succumbed to alcoholism or other addictions are also legion.

Instances such as these have led many to suppose that creativity and psychopathology are intimately related. Indeed, the notion that creative genius might have some touch of madness goes back to Plato and Aristotle. But some recent psychologists argue that the whole idea is a pure hoax. After all, it is certainly no problem to come up with the names of creative geniuses who seem to have displayed no signs or symptoms of mental illness.

Opponents of the mad genius idea can also point to two solid facts. First, the number of creative geniuses in the entire history of human civilization is very large. Thus, even if these people were actually less prone to psychopathology than the average person, the number with mental illness could still be extremely large. Second, the permanent inhabitants of mental asylums do not usually produce creative masterworks. The closest exception that anyone might imagine is the notorious Marquis de Sade. Even in his case, his greatest (or rather most sadistic) works were written while he was imprisoned as a criminal rather than institutionalized as a lunatic.

So should we believe that creative genius is connected with madness or not? Modern empirical research suggests that we should, because it has pinpointed the connection between madness and creativity clearly. The most important process underlying strokes of creative genius is cognitive disinhibition — the tendency to pay attention to things that normally should be ignored or filtered out because they appear irrelevant. [Continue reading…]


How we use memory to look at the future

Virginia Hughes writes: Over the past few decades, researchers have worked to uncover the details of how the brain organizes memories. Much remains a mystery, but scientists have identified a key event: the formation of an intense brain wave called a “sharp-wave ripple” (SWR). This process is the brain’s version of an instant replay — a sped-up version of the neural activity that occurred during a recent experience. These ripples are a strikingly synchronous neural symphony, the product of tens of thousands of cells firing over just 100 milliseconds. Any more activity than that could trigger a seizure.
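
In practice, ripples are usually detected by band-pass filtering the local field potential in the ripple band (roughly 150 to 250 Hz) and flagging stretches where the signal envelope rises well above baseline. A minimal sketch on synthetic data, assuming scipy; the band edges and the three-standard-deviation threshold are common conventions, not a universal standard.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1250                                      # LFP sampling rate (Hz)
t = np.arange(0, 2, 1 / fs)
lfp = np.random.randn(t.size)                  # background noise
ripple = (t > 1.0) & (t < 1.1)                 # a 100 ms, 200 Hz "ripple"
lfp[ripple] += 3 * np.sin(2 * np.pi * 200 * t[ripple])

b, a = butter(3, [150, 250], btype="band", fs=fs)
envelope = np.abs(hilbert(filtfilt(b, a, lfp)))   # ripple-band amplitude over time

threshold = envelope.mean() + 3 * envelope.std()
print(f"suprathreshold fraction inside the event: {(envelope[ripple] > threshold).mean():.0%}")
```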

Now researchers have begun to realize that SWRs may be involved in much more than memory formation. Recently, a slew of high-profile rodent studies have suggested that the brain uses SWRs to anticipate future events. A recent experiment, for example, found that SWRs connect to activity in the prefrontal cortex, a region at the front of the brain that is involved in planning for the future.

Studies such as this one have begun to illuminate the complex relationship between memory and the decision-making process. Until a few years ago, most studies on SWRs focused only on their role in creating and consolidating memories, said Loren Frank, a neuroscientist at the University of California, San Francisco. “None of them really dealt with this issue of: How does the animal actually pull [the memory] back up again? How does it actually use this to figure out what to do?” [Continue reading…]


The healing power of silence

Daniel A. Gross writes: One icy night in March 2010, 100 marketing experts piled into the Sea Horse Restaurant in Helsinki, with the modest goal of making a remote and medium-sized country a world-famous tourist destination. The problem was that Finland was known as a rather quiet country, and since 2008, the Country Brand Delegation had been looking for a national brand that would make some noise.

Over drinks at the Sea Horse, the experts puzzled over the various strengths of their nation. Here was a country with exceptional teachers, an abundance of wild berries and mushrooms, and a vibrant cultural capital the size of Nashville, Tennessee. These things fell a bit short of a compelling national identity. Someone jokingly suggested that nudity could be named a national theme — it would emphasize the honesty of Finns. Someone else, less jokingly, proposed that perhaps quiet wasn’t such a bad thing. That got them thinking.

A few months later, the delegation issued a slick “Country Brand Report.” It highlighted a host of marketable themes, including Finland’s renowned educational system and school of functional design. One key theme was brand new: silence. As the report explained, modern society often seems intolerably loud and busy. “Silence is a resource,” it said. It could be marketed just like clean water or wild mushrooms. “In the future, people will be prepared to pay for the experience of silence.”

People already do. In a loud world, silence sells. Noise-canceling headphones retail for hundreds of dollars; the cost of some weeklong silent meditation courses can run into the thousands. Finland saw that it was possible to quite literally make something out of nothing.

In 2011, the Finnish Tourist Board released a series of photographs of lone figures in the wilderness, with the caption “Silence, Please.” An international “country branding” consultant, Simon Anholt, proposed the playful tagline “No talking, but action.” And a Finnish watch company, Rönkkö, launched its own new slogan: “Handmade in Finnish silence.”

“We decided, instead of saying that it’s really empty and really quiet and nobody is talking about anything here, let’s embrace it and make it a good thing,” explains Eva Kiviranta, who manages social media for VisitFinland.com.

Silence is a peculiar starting point for a marketing campaign. After all, you can’t weigh, record, or export it. You can’t eat it, collect it, or give it away. The Finland campaign raises the question of just what the tangible effects of silence really are. Science has begun to pipe up on the subject. In recent years researchers have highlighted the peculiar power of silence to calm our bodies, turn up the volume on our inner thoughts, and attune our connection to the world. Their findings begin where we might expect: with noise.

The word “noise” comes from a Latin root meaning either queasiness or pain. According to the historian Hillel Schwartz, there’s even a Mesopotamian legend in which the gods grow so angry at the clamor of earthly humans that they go on a killing spree. (City-dwellers with loud neighbors may empathize, though hopefully not too closely.)

Dislike of noise has produced some of history’s most eager advocates of silence, as Schwartz explains in his book Making Noise: From Babel to the Big Bang and Beyond. In 1859, the British nurse and social reformer Florence Nightingale wrote, “Unnecessary noise is the most cruel absence of care that can be inflicted on sick or well.” Every careless clatter or banal bit of banter, Nightingale argued, can be a source of alarm, distress, and loss of sleep for recovering patients. She even quoted a lecture that identified “sudden noises” as a cause of death among sick children. [Continue reading…]


What Shakespeare can teach science about language and the limits of the human mind

Jillian Hinchliffe and Seth Frey write: Although [Stephen] Booth is now retired [from the University of California, Berkeley], his work [on Shakespeare] couldn’t be more relevant. In the study of the human mind, old disciplinary boundaries have begun to dissolve and fruitful new relationships between the sciences and humanities have sprung up in their place. When it comes to the cognitive science of language, Booth may be the most prescient literary critic who ever put pen to paper. In his fieldwork in poetic experience, he unwittingly anticipated several language-processing phenomena that cognitive scientists have only recently begun to study. Booth’s work not only provides one of the most original and penetrating looks into the nature of Shakespeare’s genius, it has profound implications for understanding the processes that shape how we think.

Until the early decades of the 20th century, Shakespeare criticism fell primarily into two areas: textual, which grapples with the numerous variants of published works in order to produce an edition as close as possible to the original, and biographical. Scholarship took a more political turn beginning in the 1960s, providing new perspectives from various strains of feminist, Marxist, structuralist, and queer theory. Booth is resolutely dismissive of most of these modes of study. What he cares about is poetics. Specifically, how poetic language operates on and in audiences of a literary work.

Close reading, the school that flourished mid-century and with which Booth’s work is most nearly affiliated, has never gone completely out of style. But Booth’s approach is even more minute—microscopic reading, according to fellow Shakespeare scholar Russ McDonald. And as the microscope opens up new worlds, so does Booth’s critical lens. What makes him radically different from his predecessors is that he doesn’t try to resolve or collapse his readings into any single interpretation. That people are so hung up on interpretation, on meaning, Booth maintains, is “no more than habit.” Instead, he revels in the uncertainty caused by the myriad currents of phonetic, semantic, and ideational patterns at play. [Continue reading…]


Don’t overestimate your untapped brain power

Nathalia Gjersoe writes: Luc Besson’s latest sci-fi romp, Lucy, is based on the premise that the average person only uses 10% of their brain. This brain-myth has been fodder for books and movies for decades and is a tantalizing plot-device. Alarmingly, however, it seems to be widely accepted as fact. Of those asked, 48% of teachers in the UK, 65% of Americans and 30% of American psychology students endorsed the myth.

In the movie, Lucy absorbs vast quantities of a nootropic that triggers rampant production of new connections between her neurons. As her brain becomes more and more densely connected, Lucy experiences omniscience, omnipotence and omnipresence. Telepathy, telekinesis and time-travel all become possible.

It’s true that increased connectivity between neurons is associated with greater expertise. Musicians who train for years have greater connectivity and activation of those regions of the brain that control their finger movements and those that bind sensory and motor information. This is the first principle of neural connectivity: cells that fire together wire together.
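
That principle, usually traced to Donald Hebb, reduces to a one-line learning rule: a synapse strengthens in proportion to the product of the activity on its two sides. A toy rate-based sketch of the rule itself (not a model of the musician studies):

```python
import numpy as np

rng = np.random.default_rng(0)
eta = 0.01                       # learning rate
w = np.zeros(2)                  # weights from two input cells onto one output cell

for _ in range(1000):
    # Input 0 is usually active and drives the output; input 1 fires independently
    x = np.array([rng.random() < 0.8, rng.random() < 0.2], dtype=float)
    y = x[0]                     # output activity
    w += eta * x * y             # Hebb: co-active pre and post strengthen the synapse

print(w)                         # w[0] >> w[1]: the correlated pathway "wired together"
```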

But resources are limited and the brain is incredibly hungry. It takes a huge amount of energy just to keep it electrically ticking over. There is an excellent TED-Ed animation that explains this nicely. The human adult brain makes up only 2% of the body’s mass yet uses 20% of energy intake. Babies’ brains use 60%! Evolution would necessarily cull any redundant parts of such an expensive organ. [Continue reading…]


The orchestration of attention

The New Yorker: Every moment, our brains are bombarded with information, from without and within. The eyes alone convey more than a hundred billion signals to the brain every second. The ears receive another avalanche of sounds. Then there are the fragments of thoughts, conscious and unconscious, that race from one neuron to the next. Much of this data seems random and meaningless. Indeed, for us to function, much of it must be ignored. But clearly not all. How do our brains select the relevant data? How do we decide to pay attention to the turn of a doorknob and ignore the drip of a leaky faucet? How do we become conscious of a certain stimulus, or indeed “conscious” at all?

For decades, philosophers and scientists have debated the process by which we pay attention to things, based on cognitive models of the mind. But, in the view of many modern psychologists and neurobiologists, the “mind” is not some nonmaterial and exotic essence separate from the body. All questions about the mind must ultimately be answered by studies of physical cells, explained in terms of the detailed workings of the more than eighty billion neurons in the brain. At this level, the question is: How do neurons signal to one another and to a cognitive command center that they have something important to say?

“Years ago, we were satisfied to know which areas of the brain light up under various stimuli,” the neuroscientist Robert Desimone told me during a recent visit to his office. “Now we want to know mechanisms.” Desimone directs the McGovern Institute for Brain Research at the Massachusetts Institute of Technology; youthful and trim at the age of sixty-two, he was dressed casually, in a blue pinstripe shirt, and had only the slightest gray in his hair. On the bookshelf of his tidy office were photographs of his two young children; on the wall was a large watercolor titled “Neural Gardens,” depicting a forest of tangled neurons, their spindly axons and dendrites wending downward like roots in rich soil.

Earlier this year, in an article published in the journal Science, Desimone and his colleague Daniel Baldauf reported on an experiment that shed light on the physical mechanism of paying attention. The researchers presented a series of two kinds of images — faces and houses — to their subjects in rapid succession, like passing frames of a movie, and asked them to concentrate on the faces but disregard the houses (or vice versa). The images were “tagged” by being presented at two frequencies — a new face every two-thirds of a second, a new house every half second. By monitoring the frequencies of the electrical activity of the subjects’ brains with magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI), Desimone and Baldauf could determine where in the brain the images were being directed.
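
The tagging works because a stimulus refreshed at a fixed rate drives neural activity at that same rate: a new house every half second marks its processing stream at 2 Hz, a new face every two-thirds of a second at 1.5 Hz, and the two streams then separate cleanly in the frequency spectrum. A minimal sketch of that separation on synthetic data (illustrative only; the actual MEG analysis is far more involved):

```python
import numpy as np

fs = 200                                     # sample rate (Hz)
t = np.arange(0, 20, 1 / fs)                 # 20 s synthetic recording
face_hz, house_hz = 1.5, 2.0                 # face every 2/3 s, house every 1/2 s
signal = (np.sin(2 * np.pi * face_hz * t)
          + 0.5 * np.sin(2 * np.pi * house_hz * t)
          + np.random.randn(t.size))         # tagged responses buried in noise

power = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)
for f in (face_hz, house_hz):
    peak = power[np.argmin(np.abs(freqs - f))]
    print(f"power at {f} Hz tag: {peak:.0f}")    # both tags stand out above the noise floor
```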

The scientists found that, even though the two sets of images were presented to the eye almost on top of each other, they were processed by different places in the brain — the face images by a particular region on the surface of the temporal lobe that is known to specialize in facial recognition, and the house images by a neighboring but separate group of neurons specializing in place recognition.

Most importantly, the neurons in the two regions behaved differently. When the subjects were told to concentrate on the faces and to disregard the houses, the neurons in the face location fired in synchrony, like a group of people singing in unison, while the neurons in the house location fired like a group of people singing out of synch, each beginning at a random point in the score. When the subjects concentrated instead on houses, the reverse happened. [Continue reading…]
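
The singing analogy is also why synchrony is visible to instruments like MEG at all: fields from cells firing in phase add linearly, so the summed amplitude grows like the number of cells, while cells firing at random phases partially cancel and the sum grows only like its square root. A toy demonstration:

```python
import numpy as np

n, fs, f = 1000, 500, 40                 # 1000 "neurons", 500 Hz sampling, 40 Hz gamma
t = np.arange(0, 1, 1 / fs)
rng = np.random.default_rng(1)

in_phase = sum(np.sin(2 * np.pi * f * t) for _ in range(n))
out_of_phase = sum(np.sin(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi))
                   for _ in range(n))

print(f"synchronous peak amplitude:  {np.abs(in_phase).max():7.1f}")      # ~ n
print(f"asynchronous peak amplitude: {np.abs(out_of_phase).max():7.1f}")  # ~ sqrt(n)
```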


Brain shrinkage, poor concentration, anxiety, and depression linked to media-multitasking

Simultaneously using mobile phones, laptops and other media devices could be changing the structure of our brains, according to new University of Sussex research.

A study published today (24 September) in PLOS ONE reveals that people who frequently use several media devices at the same time have lower grey-matter density in one particular region of the brain compared to those who use just one device occasionally.

The research supports earlier studies showing connections between high media-multitasking activity and poor attention in the face of distractions, along with emotional problems such as depression and anxiety.

But neuroscientists Kep Kee Loh and Dr Ryota Kanai point out that their study reveals a link rather than causality and that a long-term study needs to be carried out to understand whether high concurrent media usage leads to changes in the brain structure, or whether those with less-dense grey matter are more attracted to media multitasking. [Continue reading…]


We are more rational than those who nudge us

Steven Poole writes: Humanity’s achievements and its self-perception are today at curious odds. We can put autonomous robots on Mars and genetically engineer malarial mosquitoes to be sterile, yet the news from popular psychology, neuroscience, economics and other fields is that we are not as rational as we like to assume. We are prey to a dismaying variety of hard-wired errors. We prefer winning to being right. At best, so the story goes, our faculty of reason is at constant war with an irrational darkness within. At worst, we should abandon the attempt to be rational altogether.

The present climate of distrust in our reasoning capacity draws much of its impetus from the field of behavioural economics, and particularly from work by Daniel Kahneman and Amos Tversky in the 1980s, summarised in Kahneman’s bestselling Thinking, Fast and Slow (2011). There, Kahneman divides the mind into two allegorical systems, the intuitive ‘System 1’, which often gives wrong answers, and the reflective reasoning of ‘System 2’. ‘The attentive System 2 is who we think we are,’ he writes; but it is the intuitive, biased, ‘irrational’ System 1 that is in charge most of the time.

Other versions of the message are expressed in more strongly negative terms. You Are Not So Smart (2011) is a bestselling book by David McRaney on cognitive bias. According to the study ‘Why Do Humans Reason?’ (2011) by the cognitive scientists Hugo Mercier and Dan Sperber, our supposedly rational faculties evolved not to find ‘truth’ but merely to win arguments. And in The Righteous Mind (2012), the psychologist Jonathan Haidt calls the idea that reason is ‘our most noble attribute’ a mere ‘delusion’. The worship of reason, he adds, ‘is an example of faith in something that does not exist’. Your brain, runs the now-prevailing wisdom, is mainly a tangled, damp and contingently cobbled-together knot of cognitive biases and fear.

This is a scientised version of original sin. And its eager adoption by today’s governments threatens social consequences that many might find troubling. A culture that believes its citizens are not reliably competent thinkers will treat those citizens differently to one that respects their reflective autonomy. Which kind of culture do we want to be? And we do have a choice. Because it turns out that the modern vision of compromised rationality is more open to challenge than many of its followers accept. [Continue reading…]


Your brain on metaphors

Michael Chorost writes:

The player kicked the ball.
The patient kicked the habit.
The villain kicked the bucket.

The verbs are the same. The syntax is identical. Does the brain notice, or care, that the first is literal, the second metaphorical, the third idiomatic?

It sounds like a question that only a linguist could love. But neuroscientists have been trying to answer it using exotic brain-scanning technologies. Their findings have varied wildly, in some cases contradicting one another. If they make progress, the payoff will be big. Their findings will enrich a theory that aims to explain how wet masses of neurons can understand anything at all. And they may drive a stake into the widespread assumption that computers will inevitably become conscious in a humanlike way.

The hypothesis driving their work is that metaphor is central to language. Metaphor used to be thought of as merely poetic ornamentation, aesthetically pretty but otherwise irrelevant. “Love is a rose, but you better not pick it,” sang Neil Young in 1977, riffing on the timeworn comparison between a sexual partner and a pollinating perennial. For centuries, metaphor was just the place where poets went to show off.

But in their 1980 book, Metaphors We Live By, the linguist George Lakoff (at the University of California at Berkeley) and the philosopher Mark Johnson (now at the University of Oregon) revolutionized linguistics by showing that metaphor is actually a fundamental constituent of language. For example, they showed that in the seemingly literal statement “He’s out of sight,” the visual field is metaphorized as a container that holds things. The visual field isn’t really a container, of course; one simply sees objects or not. But the container metaphor is so ubiquitous that it wasn’t even recognized as a metaphor until Lakoff and Johnson pointed it out.

From such examples they argued that ordinary language is saturated with metaphors. Our eyes point to where we’re going, so we tend to speak of future time as being “ahead” of us. When things increase, they tend to go up relative to us, so we tend to speak of stocks “rising” instead of getting more expensive. “Our ordinary conceptual system is fundamentally metaphorical in nature,” they wrote. [Continue reading…]


Humans are wired for bad news

Jacob Burak writes: I have good news and bad news. Which would you like first? If it’s bad news, you’re in good company – that’s what most people pick. But why?

Negative events affect us more than positive ones. We remember them more vividly and they play a larger role in shaping our lives. Farewells, accidents, bad parenting, financial losses and even a random snide comment take up most of our psychic space, leaving little room for compliments or pleasant experiences to help us along life’s challenging path. The staggering human ability to adapt ensures that joy over a salary hike will abate within months, leaving only a benchmark for future raises. We feel pain, but not the absence of it.

Hundreds of scientific studies from around the world confirm our negativity bias: while a good day has no lasting effect on the following day, a bad day carries over. We process negative data faster and more thoroughly than positive data, and they affect us longer. Socially, we invest more in avoiding a bad reputation than in building a good one. Emotionally, we go to greater lengths to avoid a bad mood than to experience a good one. Pessimists tend to assess their health more accurately than optimists. In our era of political correctness, negative remarks stand out and seem more authentic. People – even babies as young as six months old – are quick to spot an angry face in a crowd, but slower to pick out a happy one; in fact, no matter how many smiles we see in that crowd, we will always spot the angry face first. [Continue reading…]
