Category Archives: Psychology

The healing power of silence

Daniel A. Gross writes: One icy night in March 2010, 100 marketing experts piled into the Sea Horse Restaurant in Helsinki, with the modest goal of making a remote and medium-sized country a world-famous tourist destination. The problem was that Finland was known as a rather quiet country, and since 2008, the Country Brand Delegation had been looking for a national brand that would make some noise.

Over drinks at the Sea Horse, the experts puzzled over the various strengths of their nation. Here was a country with exceptional teachers, an abundance of wild berries and mushrooms, and a vibrant cultural capital the size of Nashville, Tennessee. These things fell a bit short of a compelling national identity. Someone jokingly suggested that nudity could be named a national theme — it would emphasize the honesty of Finns. Someone else, less jokingly, proposed that perhaps quiet wasn’t such a bad thing. That got them thinking.

A few months later, the delegation issued a slick “Country Brand Report.” It highlighted a host of marketable themes, including Finland’s renowned educational system and school of functional design. One key theme was brand new: silence. As the report explained, modern society often seems intolerably loud and busy. “Silence is a resource,” it said. It could be marketed just like clean water or wild mushrooms. “In the future, people will be prepared to pay for the experience of silence.”

People already do. In a loud world, silence sells. Noise-canceling headphones retail for hundreds of dollars; the cost of some weeklong silent meditation courses can run into the thousands. Finland saw that it was possible to quite literally make something out of nothing.

In 2011, the Finnish Tourist Board released a series of photographs of lone figures in the wilderness, with the caption “Silence, Please.” An international “country branding” consultant, Simon Anholt, proposed the playful tagline “No talking, but action.” And a Finnish watch company, Rönkkö, launched its own new slogan: “Handmade in Finnish silence.”

“We decided, instead of saying that it’s really empty and really quiet and nobody is talking about anything here, let’s embrace it and make it a good thing,” explains Eva Kiviranta, who manages social media for VisitFinland.com.

Silence is a peculiar starting point for a marketing campaign. After all, you can’t weigh, record, or export it. You can’t eat it, collect it, or give it away. The Finland campaign raises the question of just what the tangible effects of silence really are. Science has begun to pipe up on the subject. In recent years researchers have highlighted the peculiar power of silence to calm our bodies, turn up the volume on our inner thoughts, and attune our connection to the world. Their findings begin where we might expect: with noise.

The word “noise” comes from a Latin root meaning either queasiness or pain. According to the historian Hillel Schwartz, there’s even a Mesopotamian legend in which the gods grow so angry at the clamor of earthly humans that they go on a killing spree. (City-dwellers with loud neighbors may empathize, though hopefully not too closely.)

Dislike of noise has produced some of history’s most eager advocates of silence, as Schwartz explains in his book Making Noise: From Babel to the Big Bang and Beyond. In 1859, the British nurse and social reformer Florence Nightingale wrote, “Unnecessary noise is the most cruel absence of care that can be inflicted on sick or well.” Every careless clatter or banal bit of banter, Nightingale argued, can be a source of alarm, distress, and loss of sleep for recovering patients. She even quoted a lecture that identified “sudden noises” as a cause of death among sick children. [Continue reading…]

What Shakespeare can teach science about language and the limits of the human mind

Jillian Hinchliffe and Seth Frey write: Although [Stephen] Booth is now retired [from the University of California, Berkeley], his work [on Shakespeare] couldn’t be more relevant. In the study of the human mind, old disciplinary boundaries have begun to dissolve and fruitful new relationships between the sciences and humanities have sprung up in their place. When it comes to the cognitive science of language, Booth may be the most prescient literary critic who ever put pen to paper. In his fieldwork in poetic experience, he unwittingly anticipated several language-processing phenomena that cognitive scientists have only recently begun to study. Booth’s work not only provides one of the most original and penetrating looks into the nature of Shakespeare’s genius, it has profound implications for understanding the processes that shape how we think.

Until the early decades of the 20th century, Shakespeare criticism fell primarily into two areas: textual, which grapples with the numerous variants of published works in order to produce an edition as close as possible to the original, and biographical. Scholarship took a more political turn beginning in the 1960s, providing new perspectives from various strains of feminist, Marxist, structuralist, and queer theory. Booth is resolutely dismissive of most of these modes of study. What he cares about is poetics. Specifically, how poetic language operates on and in audiences of a literary work.

Close reading, the school that flourished mid-century and with which Booth’s work is most nearly affiliated, has never gone completely out of style. But Booth’s approach is even more minute—microscopic reading, according to fellow Shakespeare scholar Russ McDonald. And as the microscope opens up new worlds, so does Booth’s critical lens. What makes him radically different from his predecessors is that he doesn’t try to resolve or collapse his readings into any single interpretation. That people are so hung up on interpretation, on meaning, Booth maintains, is “no more than habit.” Instead, he revels in the uncertainty caused by the myriad currents of phonetic, semantic, and ideational patterns at play. [Continue reading…]

Brain shrinkage, poor concentration, anxiety, and depression linked to media-multitasking

Simultaneously using mobile phones, laptops and other media devices could be changing the structure of our brains, according to new University of Sussex research.

A study published today (24 September) in PLOS ONE reveals that people who frequently use several media devices at the same time have lower grey-matter density in one particular region of the brain compared to those who use just one device occasionally.

The research supports earlier studies showing connections between high media-multitasking activity and poor attention in the face of distractions, along with emotional problems such as depression and anxiety.

But neuroscientists Kep Kee Loh and Dr Ryota Kanai point out that their study reveals a link rather than causality and that a long-term study needs to be carried out to understand whether high concurrent media usage leads to changes in the brain structure, or whether those with less-dense grey matter are more attracted to media multitasking. [Continue reading…]

The bonding power of shared suffering

Pacific Standard: A new study from Australia suggests rituals such as arduous initiation rites serve a real purpose. It reports that experiencing physical discomfort is an effective way for a group of strangers to cohere into a close-knit group.

“Shared pain may be an important trigger for group formation,” a research team led by psychologist Brock Bastian of the University of New South Wales writes in the journal Psychological Science. “Pain, it seems, has the capacity to act as social glue, building cooperation within novel social collectives.”

The researchers argue that pain promotes cooperation because of its “well-demonstrated capacity to capture attention and focus awareness.”

Bastian and his colleagues describe three experiments that provide evidence for this proposition, which was first proposed by such social theorists as Emile Durkheim. [Continue reading…]

Humans are wired for bad news

Jacob Burak writes: I have good news and bad news. Which would you like first? If it’s bad news, you’re in good company – that’s what most people pick. But why?

Negative events affect us more than positive ones. We remember them more vividly and they play a larger role in shaping our lives. Farewells, accidents, bad parenting, financial losses and even a random snide comment take up most of our psychic space, leaving little room for compliments or pleasant experiences to help us along life’s challenging path. The staggering human ability to adapt ensures that joy over a salary hike will abate within months, leaving only a benchmark for future raises. We feel pain, but not the absence of it.

Hundreds of scientific studies from around the world confirm our negativity bias: while a good day has no lasting effect on the following day, a bad day carries over. We process negative data faster and more thoroughly than positive data, and they affect us longer. Socially, we invest more in avoiding a bad reputation than in building a good one. Emotionally, we go to greater lengths to avoid a bad mood than to experience a good one. Pessimists tend to assess their health more accurately than optimists. In our era of political correctness, negative remarks stand out and seem more authentic. People – even babies as young as six months old – are quick to spot an angry face in a crowd, but slower to pick out a happy one; in fact, no matter how many smiles we see in that crowd, we will always spot the angry face first. [Continue reading…]

Why do laughter, smiles and tears look so similar?

Michael Graziano writes: About four thousand years ago, somewhere in the Middle East — we don’t know where or when, exactly — a scribe drew a picture of an ox head. The picture was rather simple: just a face with two horns on top. It was used as part of an abjad, a set of characters that represent the consonants in a language. Over thousands of years, that ox-head icon gradually changed as it found its way into many different abjads and alphabets. It became more angular, then rotated to its side. Finally it turned upside down entirely, so that it was resting on its horns. Today it no longer represents an ox head or even a consonant. We know it as the capital letter A.

The moral of this story is that symbols evolve.

Long before written symbols, even before spoken language, our ancestors communicated by gesture. Even now, a lot of what we communicate to each other is non-verbal, partly hidden beneath the surface of awareness. We smile, laugh, cry, cringe, stand tall, shrug. These behaviours are natural, but they are also symbolic. Some of them, indeed, are pretty bizarre when you think about them. Why do we expose our teeth to express friendliness? Why do we leak lubricant from our eyes to communicate a need for help? Why do we laugh?

One of the first scientists to think about these questions was Charles Darwin. In his 1872 book, The Expression of the Emotions in Man and Animals, Darwin observed that all people express their feelings in more or less the same ways. He argued that we probably evolved these gestures from precursor actions in ancestral animals. A modern champion of the same idea is Paul Ekman, the American psychologist. Ekman categorised a basic set of human facial expressions — happy, frightened, disgusted, and so on — and found that they were the same across widely different cultures. People from tribal Papua New Guinea make the same smiles and frowns as people from the industrialised USA.

Our emotional expressions seem to be inborn, in other words: they are part of our evolutionary heritage. And yet their etymology, if I can put it that way, remains a mystery. Can we trace these social signals back to their evolutionary root, to some original behaviour of our ancestors? To explain them fully, we would have to follow the trail back until we left the symbolic realm altogether, until we came face to face with something that had nothing to do with communication. We would have to find the ox head in the letter A.

I think we can do that. [Continue reading…]

Why bystanders are reluctant to help those in need

Dwyer Gunn writes: Bethesda in the state of Maryland is the kind of safe, upscale Washington DC suburb that well-educated, high-earning professionals retreat to when it’s time to raise a family. Some 80 per cent of the city’s adult residents have college degrees. Bethesda’s posh Bradley Manor-Longwood neighbourhood was recently ranked the second richest in the country. And yet, on 11 March 2011, a young woman was brutally murdered by a fellow employee at a local Lululemon store (where yoga pants retail for about $100 each). Two employees of the Apple store next door heard the murder as it occurred, debated, and ultimately decided not to call the police.

If the attack had occurred in poor, crowded, crime-ridden Rio de Janeiro, the outcome might have been different: in one series of experiments, researchers found bystanders in the Brazilian city to be extraordinarily helpful, stepping in to offer a hand to a blind person and aiding a stranger who dropped a pen nearly 100 per cent of the time. This apparent paradox reflects a nuanced understanding of ‘bystander apathy’, the term coined by the US psychologists John Darley and Bibb Latané in the 1960s to describe the puzzling, and often horrifying, failure of witnesses to intervene in violent crimes or other tragedies.

The phenomenon first received widespread attention in 1964, when the New York bar manager Kitty Genovese was sexually assaulted and murdered outside her apartment building in the borough of Queens. Media coverage focused on the alleged inaction of her neighbours – The New York Times’s defining story opened with the chilling assertion: ‘For more than half an hour, 38 respectable, law-abiding citizens in Queens watched a killer stalk and stab a woman in three separate attacks.’ Over the years, that media account has been largely debunked, but the incident served to establish a narrative that persists today: society has changed irrevocably for the worse, and the days of neighbour helping neighbour are a nicety of the past. True or not, the Genovese story became a cultural meme for callousness and man’s inhumanity to man, a trend said to signify our modern age.

It also launched a whole new field of study. [Continue reading…]

The value of intuition in an uncertain world

Harvard Business Review: Researchers have confronted us in recent years with example after example of how we humans get things wrong when it comes to making decisions. We misunderstand probability, we’re myopic, we pay attention to the wrong things, and we just generally mess up. This popular triumph of the “heuristics and biases” literature pioneered by psychologists Daniel Kahneman and Amos Tversky has made us aware of flaws that economics long glossed over, and led to interesting innovations in retirement planning and government policy.

It is not, however, the only lens through which to view decision-making. Psychologist Gerd Gigerenzer has spent his career focusing on the ways in which we get things right, or could at least learn to. In Gigerenzer’s view, using heuristics, rules of thumb, and other shortcuts often leads to better decisions than the models of “rational” decision-making developed by mathematicians and statisticians.

Gerd Gigerenzer: Gut feelings are tools for an uncertain world. They’re not caprice. They are not a sixth sense or God’s voice. They are based on lots of experience, an unconscious form of intelligence.

I’ve worked with large companies and asked decision makers how often they base an important professional decision on that gut feeling. In the companies I’ve worked with, which are large international companies, about 50% of all decisions are at the end a gut decision.

But the same managers would never admit this in public. There’s fear of being made responsible if something goes wrong, so they have developed a few strategies to deal with this fear. One is to find reasons after the fact. A top manager may have a gut feeling, but then he asks an employee to find facts the next two weeks, and thereafter the decision is presented as a fact-based, big-data-based decision. That’s a waste of time, intelligence, and money. The more expensive version is to hire a consulting company, which will provide a 200-page document to justify the gut feeling. And then there is the most expensive version, namely defensive decision making. Here, a manager feels he should go with option A, but if something goes wrong, he can’t explain it, so that’s not good. So he recommends option B, something of a secondary or third-class choice. Defensive decision-making hurts the company and protects the decision maker. In the studies I’ve done with large companies, it happens in about a third to half of all important decisions. You can imagine how much these companies lose.

HBR: But there is a move in business towards using data more intelligently. There are exploding amounts of it in certain industries, and definitely in the pages of HBR, it’s all about: Gee, how do I automate more of these decisions?

GG: That’s a good strategy if you have a business in a very stable world. Big data has a long tradition in astronomy. For thousands of years, people have collected amazing data, and the heavenly bodies up there are fairly stable, relative to our short time of lives. But if you deal with an uncertain world, big data will provide an illusion of certainty. For instance, in Risk Savvy I’ve analyzed the predictions of the top investment banks worldwide on exchange rates. If you look at that, then you know that big data fails. In an uncertain world you need something else. Good intuitions, smart heuristics. [Continue reading…]

How the brain creates personality: A new theory

Stephen M. Kosslyn and G. Wayne Miller write: It is possible to examine any object — including a brain — at different levels. Take the example of a building. If we want to know whether the house will have enough space for a family of five, we want to focus on the architectural level; if we want to know how easily it could catch fire, we want to focus on the materials level; and if we want to engineer a product for a brick manufacturer, we focus on molecular structure.

Similarly, if we want to know how the brain gives rise to thoughts, feelings, and behaviors, we want to focus on the bigger picture of how its structure allows it to store and process information — the architecture, as it were. To understand the brain at this level, we don’t have to know everything about the individual connections among brain cells or about any other biochemical process. We use a rela­tively high level of analysis, akin to architecture in buildings, to characterize relatively large parts of the brain.

To explain the Theory of Cognitive Modes, which specifies general ways of thinking that underlie how a person approaches the world and interacts with other people, we need to provide you with a lot of information. We want you to understand where this theory came from — that we didn’t just pull it out of a hat or make it up out of whole cloth. But there’s no need to lose the forest for the trees: there are only three key points that you will really need to keep in mind.

First, the top parts and the bottom parts of the brain have differ­ent functions. The top brain formulates and executes plans (which often involve deciding where to move objects or how to move the body in space), whereas the bottom brain classifies and interprets incoming information about the world. The two halves always work together; most important, the top brain uses information from the bottom brain to formulate its plans (and to reformulate them, as they unfold over time).

Second, according to the theory, people vary in the degree that they tend to rely on each of the two brain systems for functions that are optional (i.e., not dictated by the immediate situation): Some people tend to rely heavily on both brain systems, some rely heavily on the bottom brain system but not the top, some rely heavily on the top but not the bottom, and some don’t rely heavily on either system.

Third, these four scenarios define four basic cognitive modes— general ways of thinking that underlie how a person approaches the world and interacts with other people. According to the Theory of Cognitive Modes, each of us has a particular dominant cognitive mode, which affects how we respond to situations we encounter and how we relate to others. The possible modes are: Mover Mode, Perceiver Mode, Stimulator Mode, and Adaptor Mode. [Continue reading…]

How memory speaks

Jerome Groopman writes: I began writing these words on what appeared to be an unremarkable Sunday morning. Shortly before sunrise, the bedroom still dim, I awoke and quietly made my way to the kitchen, careful not to disturb my still-sleeping wife. The dark-roast coffee was retrieved from its place in the pantry, four scoops then placed in a filter. While the coffee was brewing, I picked up The New York Times at the door. Scanning the front page, my eyes rested on an article mentioning Svoboda, the far-right Ukrainian political party (svoboda, I remembered, means “freedom”).

I prepared an egg-white omelette and toasted two slices of multigrain bread. After a few sips of coffee, fragments of the night’s dream came to mind: I am rushing to take my final examination in college chemistry, but as I enter the amphitheater where the test is given, no one is there. Am I early? Or in the wrong room? The dream was not new to me. It often occurs before I embark on a project, whether it’s an experiment in the laboratory, a drug to be tested in the clinic, or an article to write on memory.

The start of that Sunday morning seems quite mundane. But when we reflect on the manifold manifestations of memory, the mundane becomes marvelous. Memory is operative not only in recalling the meaning of svoboda, knowing who was sleeping with me in bed, and registering my dream as recurrent, but also in rote tasks: navigating the still-dark bedroom, scooping the coffee, using a knife and fork to eat breakfast. Simple activities of life, hardly noticed, reveal memory as a map, clock, and mirror, vital to our sense of place, time, and person.

This role of memory in virtually every activity of our day is put in sharp focus when it is lost. Su Meck, in I Forgot to Remember, pieces together a fascinating tale of life after suffering head trauma as a young mother. A ceiling fan fell and struck her head:

You might wonder how it feels to wake up one morning and not know who you are. I don’t know. The accident didn’t just wipe out all my memories; it hindered me from making new ones for quite some time. I awoke each day to a house full of strangers…. And this wasn’t just a few days. It was weeks before I recognized my boys when they toddled into the room, months before I knew my own telephone number, years before I was able to find my way home from anywhere. I have no more memory of those first several years after the accident than my own kids have of their first years of life.

A computed tomography (CT) scan of Meck’s brain showed swelling over the right frontal area. But neurologists were at a loss to explain the genesis of her amnesia. Memory does not exist in a single site or region of the central nervous system. There are estimated to be 10 to 100 billion neurons in the human brain, each neuron making about one thousand connections to other neurons at the junctions termed synapses. Learning, and then storing what we learn through life, involve intricate changes in the nature and number of these trillions of neuronal connections. But memory is made not only via alterations at the synaptic level. It also involves regional remodeling of parts of our cortex. Our brain is constantly changing in its elaborate circuitry and, to some degree, configuration. [Continue reading…]

The living death of solitary confinement

Lisa Guenther writes: I first met Five Omar Mualimm-ak at a forum on solitary confinement in New York City. He wore track shoes with his tailored suit. ‘As long as the Prison Industrial Complex keeps running, so will I,’ he explained. After hearing him speak about the connections between racism, poverty, mass incarceration and police violence, I invited Five to speak at a conference I was organising in Nashville, Tennessee. He arrived, as always, in a suit and track shoes. As we walked across campus to a conference reception, I worked up the courage to ask him how he got his name. He told me: ‘I spent five years in solitary confinement, and when I came out I was a different person.’

In an article for The Guardian last October, Five described his isolation as a process of sensory and existential annihilation:

After only a short time in solitary, I felt all of my senses begin to diminish. There was nothing to see but grey walls. In New York’s so-called special housing units, or SHUs, most cells have solid steel doors, and many do not have windows. You cannot even tape up pictures or photographs; they must be kept in an envelope. To fight the blankness, I counted bricks and measured the walls. I stared obsessively at the bolts on the door to my cell.

There was nothing to hear except empty, echoing voices from other parts of the prison. I was so lonely that I hallucinated words coming out of the wind. They sounded like whispers. Sometimes, I smelled the paint on the wall, but more often, I just smelled myself, revolted by my own scent.

There was no touch. My food was pushed through a slot. Doors were activated by buzzers, even the one that led to a literal cage directly outside of my cell for one hour per day of ‘recreation’.

Even time had no meaning in the SHU. The lights were kept on for 24 hours. I often found myself wondering if an event I was recollecting had happened that morning or days before. I talked to myself. I began to get scared that the guards would come in and kill me and leave me hanging in the cell. Who would know if something happened to me? Just as I was invisible, so was the space I inhabited.

The very essence of life, I came to learn during those seemingly endless days, is human contact, and the affirmation of existence that comes with it. Losing that contact, you lose your sense of identity. You become nothing.

Five’s experience of solitary confinement is extreme, but it’s not atypical. His feeling of disconnection from the world, to the point of losing his capacity to make sense of his own identity and existence, raises philosophical questions about the relation between sense perception, sociality, and a meaningful life. Why does prolonged isolation typically corrode a prisoner’s ability to perceive the world and to sustain a meaningful connection with his own existence? The short answer to this question is that we are social beings who rely on our interactions with other people to make sense of things. But what does it mean to exist socially, and what is the precise connection between our relations with others, our perception of the world, and the affirmation of our own existence?

My response to this question is shaped by the philosophical practice of phenomenology. Phenomenology begins with a description of lived experience and reflects on the structures that make this experience possible and meaningful. The main insight of phenomenology is that consciousness is relational. [Continue reading…]

America’s huge appetite for conspiracy theories

“Conspiracy Theories and the Paranoid Style(s) of Mass Opinion,” a paper recently published in the American Journal of Political Science, finds that half of Americans consistently endorse at least one conspiracy theory.

Tom Jacobs writes: It’s easy to assume this represents widespread ignorance, but these findings suggest otherwise. Oliver and Wood report that, except for the Obama “birthers” and the 9/11 “truthers,” “respondents who endorse conspiracy theories are not less informed about basic political facts than average citizens.”

So what does drive belief in these contrived explanations? The researchers argue the tendency to accept them is “derived from two innate psychological predispositions.”

The first, which has an evolutionary explanation, is an “unconscious cognitive bias to draw causal connections between seemingly related phenomena.” Jumping to conclusions based on weak evidence allows us to “project feelings of control in uncertain situations,” the researchers note.

The second is our “natural attraction towards melodramatic narratives as explanations for prominent events — particularly those that interpret history (in terms of) universal struggles between good and evil.”

Stories that fit that pattern “provide compelling explanations for otherwise confusing or ambiguous events,” they write, noting that “many predominant belief systems … draw heavily upon the idea of unseen, intentional forces shaping contemporary events.”

“For many Americans, complicated or nuanced explanations for political events are both cognitively taxing and have limited appeal,” write Oliver and Wood. “A conspiracy narrative may provide a more accessible and convincing account of political events.”

That said, they add, “Even highly engaged or ideological segments of the population can be swayed by the power of these narratives, particularly when they coincide with their other political views.”

How we understand what others think, believe, feel, and want

Nicholas Epley writes: One of the most amazing court cases you probably have never heard of had come down to this. Standing Bear, the reluctant chief of the Ponca tribe, rose on May 2, 1879, to address a packed audience in a Nebraska courtroom. At issue was the existence of a mind that many were unable to see.

Standing Bear’s journey to this courtroom had been excruciating. The U.S. government had decided several years earlier to force the 752 Ponca Native Americans off their lands along the fertile Niobrara River and move them to the desolate Indian Territory, in what is now northern Oklahoma. Standing Bear surrendered everything he owned, assembled his tribe, and began marching a six-hundred-mile “trail of tears.” If the walk didn’t kill them (as it did Standing Bear’s daughter), then the parched Indian Territory would. Left with meager provisions and fields of parched rock to farm, nearly a third of the Poncas died within the first year. This included Standing Bear’s son. As his son lay dying, Standing Bear promised to return his son’s bones to the tribe’s burial grounds so that his son could walk the afterlife with his ancestors, according to their religion. Desperate, Standing Bear decided to go home.

Carrying his son’s bones in a bag clutched to his chest, Standing Bear and twenty-seven others began their return in the dead of winter. Word spread of the group’s travel as they approached the Omaha Indian reservation, midway through their journey. The Omahas welcomed them with open arms, but U.S. officials welcomed them with open handcuffs. General George Crook was ordered by government officials to return the beleaguered Poncas to the Indian Territory.

Crook couldn’t bear the thought. “I’ve been forced many times by orders from Washington to do most inhuman things in dealings with the Indians,” he said, “but now I’m ordered to do a more cruel thing than ever before.” Crook was an honorable man who could no more disobey direct orders than he could fly, so instead he stalled, encouraging a newspaper editor from Omaha to enlist lawyers who would then sue General Crook (as the U.S. government’s representative) on Standing Bear’s behalf. The suit? To have the U.S. government recognize Standing Bear as a person, as a human being. [Continue reading…]

What, me? Biased?

Tom Jacobs writes: Pretty much all of us are prone to “bias blindness.” We can easily spot prejudice in others, but we’re oblivious to our own, insisting on our impartiality in spite of any and all evidence to the contrary.

Newly published research suggests this problem is actually worse than we thought. It finds that even when people use an evaluation strategy they concede is biased, they continue to insist their judgments are objective.

“Recognizing one’s bias is a critical first step in trying to correct for it,” writes a research team led by Emily Pronin and Katherine Hansen of Princeton University. “These experiments make clear how difficult that first step can be to reach.”

Although their findings have clear implications regarding political opinions, the researchers avoided such fraught topics and focused on art. In two experiments, participants (74 Princeton undergraduates in the first, 85 adults recruited online in the second) looked at a series of 80 paintings and rated the artistic merit of each on a one-to-nine scale. [Continue reading…]

The roots of America’s narcissism epidemic

Will Storr writes: For much of human history, our beliefs have been based on the assumption that people are fundamentally bad. Strip away a person’s smile and you’ll find a grotesque, writhing animal-thing. Human instincts have to be controlled, and religions have often been guides for containing the demons. Sigmund Freud held a similar view: Psychotherapy was his method of making the unconscious conscious, helping people restrain their bestial desires and accord with the moral laws of civilization.

In the middle of the 20th century, an alternative school of thought appeared. It was popularized by Carl Rogers, an influential psychotherapist at the University of Chicago, and it reversed the presumption of original sin. Rogers argued that people are innately decent. Children, he believed, should be raised in an environment of “unconditional positive regard”. They should be liberated from the inhibitions and restraints that prevented them from attaining their full potential.

It was a characteristically American idea — perhaps even the American idea. Underneath it all, people are good, and to get the best out of themselves, they just need to be free.

Economic change gave Rogers’s theory traction. It was the 1950s, and a nation of workmen was turning into a nation of salesmen. To make good in life, interpersonal sunniness was becoming essential. Meanwhile, rising divorce rates and the surge of women into the workplace were triggering anxieties about the lives of children born into the baby boom. Parents wanted to counteract the stresses of modern family life, and boosting their children’s self-esteem seemed like the solution.

By the early 1960s, wild thinkers in California were pushing Rogers’s idea even further. The “human potential movement” argued that most people were using just 10 percent of their intellectual capacity. It leaned on the work of Abraham Maslow, who studied exceptional people such as Albert Einstein and Eleanor Roosevelt and said there were five human needs, the most important of which was self-actualization—the realization of one’s maximum potential. Number two on the list was esteem.

At the close of the decade, the idea that self-esteem was the key to psychological riches finally exploded. The trigger was Nathaniel Branden, a handsome Canadian psychotherapist who had moved to Los Angeles as a disciple of the philosopher Ayn Rand. One of Rand’s big ideas was that moral good would arise when humans ruthlessly pursued their own self-interest. She and Branden began a tortuous love affair, and her theories had an intense impact on the young psychotherapist. In The Psychology of Self-Esteem, published in 1969, Branden argued that self-esteem “has profound effects on a man’s thinking processes, emotions, desires, values and goals. It is the single most significant key to his behavior.” It was an international bestseller, and it propelled the self-esteem movement out of the counterculture and into the mainstream.

The year that Branden published his book, a sixteen-year-old in Euclid, Ohio, named Roy Baumeister was grappling with his own self-esteem problem: his Dad. [Continue reading…]

Study: ‘Trolls’ online appear to be sadists in real life

The Register reports: A group of Canadian researchers has given the imprimatur of social-science recognition to a fact that many of us who spend time in internet comment forums have suspected: there’s a strong correlation between online trolling and sadism.

“Both trolls and sadists feel sadistic glee at the distress of others. Sadists just want to have fun … and the Internet is their playground!” write Erin Buckels, Paul Trapnell, and Delroy Paulhus of the Universities of Manitoba, Winnipeg, and British Columbia, respectively, in a paper to be published in the journal Personality and Individual Differences.

The researchers define trolling as “the practice of behaving in a deceptive, destructive, or disruptive manner in a social setting on the Internet with no apparent instrumental purpose,” referring to trolls as “agents of chaos on the Internet, exploiting ‘hot-button issues’ to make users appear overly emotional or foolish in some manner. If an unfortunate person falls into their trap, trolling intensifies for further, merciless amusement.”

The Canadian psychologists’ paper is entitled “Trolls just want to have fun”, which is not merely a bit of boffinry humor at the expense of Cyndi Lauper, but rather a reference to one of the researchers’ findings. “We found clear evidence,” they write, “that sadists tend to troll because they enjoy it.” [Continue reading…]

Douglas Hofstadter — Research on artificial intelligence is sidestepping the core question: how do people think?

Douglas Hofstadter is a cognitive scientist at Indiana University and the Pulitzer Prize-winning author of Gödel, Escher, Bach: An Eternal Golden Braid.

Popular Mechanics: You’ve said in the past that IBM’s Jeopardy-playing computer, Watson, isn’t deserving of the term artificial intelligence. Why?

Douglas Hofstadter: Well, artificial intelligence is a slippery term. It could refer to just getting machines to do things that seem intelligent on the surface, such as playing chess well or translating from one language to another on a superficial level — things that are impressive if you don’t look at the details. In that sense, we’ve already created what some people call artificial intelligence. But if you mean a machine that has real intelligence, that is thinking — that’s inaccurate. Watson is basically a text search algorithm connected to a database just like Google search. It doesn’t understand what it’s reading. In fact, read is the wrong word. It’s not reading anything because it’s not comprehending anything. Watson is finding text without having a clue as to what the text means. In that sense, there’s no intelligence there. It’s clever, it’s impressive, but it’s absolutely vacuous.

Do you think we’ll start seeing diminishing returns from a Watson-like approach to AI?

I can’t really predict that. But what I can say is that I’ve monitored Google Translate — which uses a similar approach — for many years. Google Translate is developing and it’s making progress because the developers are inventing new, clever ways of milking the quickness of computers and the vastness of its database. But it’s not making progress at all in the sense of understanding your text, and you can still see it falling flat on its face a lot of the time. And I know it’ll never produce polished [translated] text, because real translating involves understanding what is being said and then reproducing the ideas that you just heard in a different language. Translation has to do with ideas, it doesn’t have to do with words, and Google Translate is about words triggering other words.

So why are AI researchers so focused on building programs and computers that don’t do anything like thinking?

They’re not studying the mind and they’re not trying to find out the principles of intelligence, so research may not be the right word for what drives people in the field that today is called artificial intelligence. They’re doing product development.

I might say though, that 30 to 40 years ago, when the field was really young, artificial intelligence wasn’t about making money, and the people in the field weren’t driven by developing products. It was about understanding how the mind works and trying to get computers to do things that the mind can do. The mind is very fluid and flexible, so how do you get a rigid machine to do very fluid things? That’s a beautiful paradox and very exciting, philosophically. [Continue reading…]
