The conception of perception shaped by context

How Darkness Visible shined a light

Peter Fulham writes: Twenty-five years ago, in December, 1989, Darkness Visible, William Styron’s account of his descent into the depths of clinical depression and back, appeared in Vanity Fair. The piece revealed in unsparing detail how Styron’s lifelong melancholy at once gave way to a seductive urge to end his own life. A few months later, he released the essay as a book, augmenting the article with a recollection of when the illness first took hold of him: in Paris, as he was about to accept the 1985 Prix mondial Cino Del Duca, the French literary award. By the author’s own acknowledgement, the response from readers was unprecedented. “This was just overwhelming. It was just by the thousands that the letters came in,” he told Charlie Rose. “I had not really realized that it was going to touch that kind of a nerve.”

Styron may have been startled by the outpouring of mail, but in many ways, it’s easy to understand. The academic research on mental illness at the time was relatively comprehensive, but no one had yet offered the kind of report that Styron gave the public: a firsthand account of what it’s like to have the monstrous condition overtake you. He also exposed the inadequacy of the word itself, which is still used as readily for a passing case of the blues as for the tempestuous agony sufferers know too well.

Depression is notoriously hard to describe, but Styron managed to split the atom. “I’d feel the horror, like some poisonous fogbank, roll in upon my mind,” he wrote in one chapter. In another: “It is not an immediately identifiable pain, like that of a broken limb. It may be more accurate to say that despair… comes to resemble the diabolical discomfort of being imprisoned in a fiercely overheated room. And because no breeze stirs this cauldron… it is entirely natural that the victim begins to think ceaselessly of oblivion.”

As someone who has fought intermittently with the same illness since college, I found those sentences cathartic, just as I suspect they were for the many readers who wrote to Styron to say, unequivocally, that he had saved their lives. As brutal as depression can be, one of the main ways a person can restrain it is through solidarity. You are not alone, Styron reminded his readers, and the fog will lift. Patience is paramount. [Continue reading…]

Why moral character is the key to personal identity

Nina Strohminger writes: One morning after her accident, a woman I’ll call Kate awoke in a daze. She looked at the man next to her in bed. He resembled her husband, with the same coppery beard and freckles dusted across his shoulders. But this man was definitely not her husband.

Panicked, she packed a small bag and headed to her psychiatrist’s office. On the bus, there was a man she had been encountering with increasing frequency over the past several weeks. The man was clever; he was a spy. He always appeared in a different form: one day as a little girl in a sundress, another time as a bike courier who smirked at her knowingly. She explained these bizarre developments to her doctor, who was quickly becoming one of the last voices in this world she could trust. But as he spoke, her stomach sank with a dreaded realisation: this man, too, was an impostor.

Kate has Capgras syndrome, the unshakeable belief that someone – often a loved one, sometimes oneself – has been replaced with an exact replica. She also has Fregoli syndrome, the delusion that the same person is taking on a variety of shapes, like an actor donning an expert disguise. Capgras and Fregoli delusions offer hints about an extraordinary cognitive mechanism active in the healthy mind, a mechanism so exquisitely tuned that we are hardly ever aware of it. This mechanism ascribes to each person a unique identity, and then meticulously tracks and updates it. This mechanism is crucial to virtually every human interaction, from navigating a party to navigating a marriage. Without it, we quickly fall apart. [Continue reading…]

Gossip makes human society possible

Julie Beck writes: While gossiping is a behavior that has long been frowned upon, perhaps no one has frowned quite so intensely as the 16th- and 17th-century British. Back then, gossips, or “scolds,” were sometimes forced to wear a menacing iron cage on their heads, called the “branks” or “scold’s bridle.” These masks purportedly had iron spikes or bits that went in the mouth and prevented the wearer from speaking. (And of course, of course, this ghastly punishment seems to have been mostly for women who were talking too much.)

Today, people who gossip are still not very well-liked, though we tend to resist the urge to cage their heads. Progress. And yet the reflexive distaste people feel for gossip and gossipers in general is often nowhere to be found when they find themselves actually faced with a juicy morsel about someone they know. Social topics—personal relationships, likes and dislikes, anecdotes about social activities—made up about two-thirds of all conversation in analyses done by evolutionary psychologist Robin Dunbar. The remaining one-third was devoted to everything else: sports, music, politics, etc.

“Language in freely forming natural conversations is principally used for the exchange of social information,” Dunbar writes. “That such topics are so overwhelmingly important to us suggests that this is a primary function of language.” He even goes so far as to say: “Gossip is what makes human society as we know it possible.”

In recent years, research on the positive effects of gossip has proliferated. Rather than just a means to humiliate people and make them cry in the bathroom, gossip is now being considered by scientists as a way to learn about cultural norms, bond with others, promote cooperation, and even, as one recent study found, allow individuals to gauge their own success and social standing. [Continue reading…]

Denying problems when we don’t like the political solutions

Phys.org: A new study from Duke University finds that people will evaluate scientific evidence based on whether they view its policy implications as politically desirable. If they don’t, then they tend to deny the problem even exists.

“Logically, the proposed solution to a problem, such as an increase in government regulation or an extension of the free market, should not influence one’s belief in the problem. However, we find it does,” said co-author Troy Campbell, a Ph.D. candidate at Duke’s Fuqua School of Business. “The cure can be more immediately threatening than the problem.”

The study, “Solution Aversion: On the Relation Between Ideology and Motivated Disbelief,” appears in the November issue of the Journal of Personality and Social Psychology.

The researchers conducted three experiments (with samples ranging from 120 to 188 participants) on three different issues—climate change, air pollution that harms lungs, and crime.

“The goal was to test, in a scientifically controlled manner, the question: Does the desirability of a solution affect beliefs in the existence of the associated problem? In other words, does what we call ‘solution aversion’ exist?” Campbell said.

“We found the answer is yes. And we found it occurs in response to some of the most common solutions for popularly discussed problems.”

For climate change, the researchers conducted an experiment to examine why more Republicans than Democrats seem to deny its existence, despite strong scientific evidence that supports it.

One explanation, they found, may have more to do with conservatives’ general opposition to the most popular solution—increasing government regulation—than with any difference in fear of the climate change problem itself, as some have proposed. [Continue reading…]

Cognitive disinhibition: the kernel of genius and madness

Dean Keith Simonton writes: When John Forbes Nash, the Nobel Prize-winning mathematician, schizophrenic, and paranoid delusional, was asked how he could believe that space aliens had recruited him to save the world, he gave a simple response. “Because the ideas I had about supernatural beings came to me the same way that my mathematical ideas did. So I took them seriously.”

Nash is hardly the only so-called mad genius in history. Suicide victims like painters Vincent Van Gogh and Mark Rothko, novelists Virginia Woolf and Ernest Hemingway, and poets Anne Sexton and Sylvia Plath all offer prime examples. Even ignoring those great creators who did not kill themselves in a fit of deep depression, it remains easy to list persons who endured well-documented psychopathology, including the composer Robert Schumann, the poet Emily Dickinson, and Nash. Creative geniuses who have succumbed to alcoholism or other addictions are also legion.

Instances such as these have led many to suppose that creativity and psychopathology are intimately related. Indeed, the notion that creative genius might have some touch of madness goes back to Plato and Aristotle. But some recent psychologists argue that the whole idea is a pure hoax. After all, it is certainly no problem to come up with the names of creative geniuses who seem to have displayed no signs or symptoms of mental illness.

Opponents of the mad genius idea can also point to two solid facts. First, the number of creative geniuses in the entire history of human civilization is very large. Thus, even if these people were actually less prone to psychopathology than the average person, the number with mental illness could still be extremely large. Second, the permanent inhabitants of mental asylums do not usually produce creative masterworks. The closest exception that anyone might imagine is the notorious Marquis de Sade. Even in his case, his greatest (or rather most sadistic) works were written while he was imprisoned as a criminal rather than institutionalized as a lunatic.

So should we believe that creative genius is connected with madness or not? Modern empirical research suggests that we should because it has pinpointed the connection between madness and creativity clearly. The most important process underlying strokes of creative genius is cognitive disinhibition — the tendency to pay attention to things that normally should be ignored or filtered out by attention because they appear irrelevant. [Continue reading…]

We are all confident idiots

David Dunning writes: Last March, during the enormous South by Southwest music festival in Austin, Texas, the late-night talk show Jimmy Kimmel Live! sent a camera crew out into the streets to catch hipsters bluffing. “People who go to music festivals pride themselves on knowing who the next acts are,” Kimmel said to his studio audience, “even if they don’t actually know who the new acts are.” So the host had his crew ask festival-goers for their thoughts about bands that don’t exist.

“The big buzz on the street,” said one of Kimmel’s interviewers to a man wearing thick-framed glasses and a whimsical T-shirt, “is Contact Dermatitis. Do you think he has what it takes to really make it to the big time?”

“Absolutely,” came the dazed fan’s reply.

The prank was an installment of Kimmel’s recurring “Lie Witness News” feature, which involves asking pedestrians a variety of questions with false premises. In another episode, Kimmel’s crew asked people on Hollywood Boulevard whether they thought the 2014 film Godzilla was insensitive to survivors of the 1954 giant lizard attack on Tokyo; in a third, they asked whether Bill Clinton gets enough credit for ending the Korean War, and whether his appearance as a judge on America’s Got Talent would damage his legacy. “No,” said one woman to this last question. “It will make him even more popular.”

One can’t help but feel for the people who fall into Kimmel’s trap. Some appear willing to say just about anything on camera to hide their cluelessness about the subject at hand (which, of course, has the opposite effect). Others seem eager to please, not wanting to let the interviewer down by giving the most boringly appropriate response: I don’t know. But for some of these interviewees, the trap may be an even deeper one. The most confident-sounding respondents often seem to think they do have some clue—as if there is some fact, some memory, or some intuition that assures them their answer is reasonable. [Continue reading…]

The biology of deceit

Daniel N Jones writes: It’s the friend who betrays you, the lover living a secret life, the job applicant with the fabricated résumé, or the sham sales pitch too good to resist. From the time humans learnt to co‑operate, we also learnt to deceive each other. For deception to be effective, individuals must hide their true intentions. But deception is hardly limited to humans. There is a never-ending arms race between the deceiver and the deceived among most living things. By studying different patterns of deception across the species, we can learn to better defend ourselves from dishonesty in the human world.

My early grasp of human deception came from the work of my adviser, the psychologist Delroy Paulhus at the University of British Columbia in Canada, who studied what he called the dark triad of personality: psychopathy, recognised by callous affect and reckless deceit; narcissism, a sense of grandiose entitlement and self-centred overconfidence; and Machiavellianism, the cynical and strategic manipulation of others.

If you look at the animal world, it’s clear that dark traits run through species from high to low. Some predators are fast, mobile and wide-ranging, executing their deceptions on as many others as they can; they resemble human psychopaths. Others are slow, stalking their prey in a specific, strategic (almost Machiavellian) way. Given the parallels between humans and other animals, I began to conceive my Mimicry Deception Theory, which argues that long- and short-term deceptive strategies cut across species, often by mimicking other lifestyles or forms.

Much of the foundational work for this idea comes from the evolutionary biologist Robert Trivers, who noted that many organisms gain an evolutionary advantage through deception. [Continue reading…]

The healing power of silence

Daniel A. Gross writes: One icy night in March 2010, 100 marketing experts piled into the Sea Horse Restaurant in Helsinki, with the modest goal of making a remote and medium-sized country a world-famous tourist destination. The problem was that Finland was known as a rather quiet country, and since 2008, the Country Brand Delegation had been looking for a national brand that would make some noise.

Over drinks at the Sea Horse, the experts puzzled over the various strengths of their nation. Here was a country with exceptional teachers, an abundance of wild berries and mushrooms, and a vibrant cultural capital the size of Nashville, Tennessee. These things fell a bit short of a compelling national identity. Someone jokingly suggested that nudity could be named a national theme — it would emphasize the honesty of Finns. Someone else, less jokingly, proposed that perhaps quiet wasn’t such a bad thing. That got them thinking.

A few months later, the delegation issued a slick “Country Brand Report.” It highlighted a host of marketable themes, including Finland’s renowned educational system and school of functional design. One key theme was brand new: silence. As the report explained, modern society often seems intolerably loud and busy. “Silence is a resource,” it said. It could be marketed just like clean water or wild mushrooms. “In the future, people will be prepared to pay for the experience of silence.”

People already do. In a loud world, silence sells. Noise-canceling headphones retail for hundreds of dollars; the cost of some weeklong silent meditation courses can run into the thousands. Finland saw that it was possible to quite literally make something out of nothing.

In 2011, the Finnish Tourist Board released a series of photographs of lone figures in the wilderness, with the caption “Silence, Please.” An international “country branding” consultant, Simon Anholt, proposed the playful tagline “No talking, but action.” And a Finnish watch company, Rönkkö, launched its own new slogan: “Handmade in Finnish silence.”

“We decided, instead of saying that it’s really empty and really quiet and nobody is talking about anything here, let’s embrace it and make it a good thing,” explains Eva Kiviranta, who manages social media for VisitFinland.com.

Silence is a peculiar starting point for a marketing campaign. After all, you can’t weigh, record, or export it. You can’t eat it, collect it, or give it away. The Finland campaign raises the question of just what the tangible effects of silence really are. Science has begun to pipe up on the subject. In recent years researchers have highlighted the peculiar power of silence to calm our bodies, turn up the volume on our inner thoughts, and attune our connection to the world. Their findings begin where we might expect: with noise.

The word “noise” comes from a Latin root meaning either queasiness or pain. According to the historian Hillel Schwartz, there’s even a Mesopotamian legend in which the gods grow so angry at the clamor of earthly humans that they go on a killing spree. (City-dwellers with loud neighbors may empathize, though hopefully not too closely.)

Dislike of noise has produced some of history’s most eager advocates of silence, as Schwartz explains in his book Making Noise: From Babel to the Big Bang and Beyond. In 1859, the British nurse and social reformer Florence Nightingale wrote, “Unnecessary noise is the most cruel absence of care that can be inflicted on sick or well.” Every careless clatter or banal bit of banter, Nightingale argued, can be a source of alarm, distress, and loss of sleep for recovering patients. She even quoted a lecture that identified “sudden noises” as a cause of death among sick children. [Continue reading…]

What Shakespeare can teach science about language and the limits of the human mind

Jillian Hinchliffe and Seth Frey write: Although [Stephen] Booth is now retired [from the University of California, Berkeley], his work [on Shakespeare] couldn’t be more relevant. In the study of the human mind, old disciplinary boundaries have begun to dissolve and fruitful new relationships between the sciences and humanities have sprung up in their place. When it comes to the cognitive science of language, Booth may be the most prescient literary critic who ever put pen to paper. In his fieldwork in poetic experience, he unwittingly anticipated several language-processing phenomena that cognitive scientists have only recently begun to study. Booth’s work not only provides one of the most original and penetrating looks into the nature of Shakespeare’s genius, it has profound implications for understanding the processes that shape how we think.

Until the early decades of the 20th century, Shakespeare criticism fell primarily into two areas: textual, which grapples with the numerous variants of published works in order to produce an edition as close as possible to the original, and biographical. Scholarship took a more political turn beginning in the 1960s, providing new perspectives from various strains of feminist, Marxist, structuralist, and queer theory. Booth is resolutely dismissive of most of these modes of study. What he cares about is poetics. Specifically, how poetic language operates on and in audiences of a literary work.

Close reading, the school that flourished mid-century and with which Booth’s work is most nearly affiliated, has never gone completely out of style. But Booth’s approach is even more minute—microscopic reading, according to fellow Shakespeare scholar Russ McDonald. And as the microscope opens up new worlds, so does Booth’s critical lens. What makes him radically different from his predecessors is that he doesn’t try to resolve or collapse his readings into any single interpretation. That people are so hung up on interpretation, on meaning, Booth maintains, is “no more than habit.” Instead, he revels in the uncertainty caused by the myriad currents of phonetic, semantic, and ideational patterns at play. [Continue reading…]

Brain shrinkage, poor concentration, anxiety, and depression linked to media-multitasking

Simultaneously using mobile phones, laptops and other media devices could be changing the structure of our brains, according to new University of Sussex research.

A study published today (24 September) in PLOS ONE reveals that people who frequently use several media devices at the same time have lower grey-matter density in one particular region of the brain compared to those who use just one device occasionally.

The research supports earlier studies showing connections between high media-multitasking activity and poor attention in the face of distractions, along with emotional problems such as depression and anxiety.

But neuroscientists Kep Kee Loh and Dr Ryota Kanai point out that their study reveals a link rather than causality and that a long-term study needs to be carried out to understand whether high concurrent media usage leads to changes in the brain structure, or whether those with less-dense grey matter are more attracted to media multitasking. [Continue reading…]

The bonding power of shared suffering

Pacific Standard: A new study from Australia suggests that rituals such as arduous initiation rites serve a real purpose. It reports that experiencing physical discomfort is an effective way for strangers to cohere into a close-knit group.

“Shared pain may be an important trigger for group formation,” a research team led by psychologist Brock Bastian of the University of New South Wales writes in the journal Psychological Science. “Pain, it seems, has the capacity to act as social glue, building cooperation within novel social collectives.”

The researchers argue that pain promotes cooperation because of its “well-demonstrated capacity to capture attention and focus awareness.”

Bastian and his colleagues describe three experiments that provide evidence for this proposition, which was first proposed by such social theorists as Emile Durkheim. [Continue reading…]

Humans are wired for bad news

Jacob Burak writes: I have good news and bad news. Which would you like first? If it’s bad news, you’re in good company – that’s what most people pick. But why?

Negative events affect us more than positive ones. We remember them more vividly and they play a larger role in shaping our lives. Farewells, accidents, bad parenting, financial losses and even a random snide comment take up most of our psychic space, leaving little room for compliments or pleasant experiences to help us along life’s challenging path. The staggering human ability to adapt ensures that joy over a salary hike will abate within months, leaving only a benchmark for future raises. We feel pain, but not the absence of it.

Hundreds of scientific studies from around the world confirm our negativity bias: while a good day has no lasting effect on the following day, a bad day carries over. We process negative data faster and more thoroughly than positive data, and they affect us longer. Socially, we invest more in avoiding a bad reputation than in building a good one. Emotionally, we go to greater lengths to avoid a bad mood than to experience a good one. Pessimists tend to assess their health more accurately than optimists. In our era of political correctness, negative remarks stand out and seem more authentic. People – even babies as young as six months old – are quick to spot an angry face in a crowd, but slower to pick out a happy one; in fact, no matter how many smiles we see in that crowd, we will always spot the angry face first. [Continue reading…]

Why do laughter, smiles and tears look so similar?

Michael Graziano writes: About four thousand years ago, somewhere in the Middle East — we don’t know where or when, exactly — a scribe drew a picture of an ox head. The picture was rather simple: just a face with two horns on top. It was used as part of an abjad, a set of characters that represent the consonants in a language. Over thousands of years, that ox-head icon gradually changed as it found its way into many different abjads and alphabets. It became more angular, then rotated to its side. Finally it turned upside down entirely, so that it was resting on its horns. Today it no longer represents an ox head or even a consonant. We know it as the capital letter A.

The moral of this story is that symbols evolve.

Long before written symbols, even before spoken language, our ancestors communicated by gesture. Even now, a lot of what we communicate to each other is non-verbal, partly hidden beneath the surface of awareness. We smile, laugh, cry, cringe, stand tall, shrug. These behaviours are natural, but they are also symbolic. Some of them, indeed, are pretty bizarre when you think about them. Why do we expose our teeth to express friendliness? Why do we leak lubricant from our eyes to communicate a need for help? Why do we laugh?

One of the first scientists to think about these questions was Charles Darwin. In his 1872 book, The Expression of the Emotions in Man and Animals, Darwin observed that all people express their feelings in more or less the same ways. He argued that we probably evolved these gestures from precursor actions in ancestral animals. A modern champion of the same idea is Paul Ekman, the American psychologist. Ekman categorised a basic set of human facial expressions — happy, frightened, disgusted, and so on — and found that they were the same across widely different cultures. People from tribal Papua New Guinea make the same smiles and frowns as people from the industrialised USA.

Our emotional expressions seem to be inborn, in other words: they are part of our evolutionary heritage. And yet their etymology, if I can put it that way, remains a mystery. Can we trace these social signals back to their evolutionary root, to some original behaviour of our ancestors? To explain them fully, we would have to follow the trail back until we left the symbolic realm altogether, until we came face to face with something that had nothing to do with communication. We would have to find the ox head in the letter A.

I think we can do that. [Continue reading…]

Humanity at the crossroads: Sheldon Solomon on the work of Ernest Becker

Why bystanders are reluctant to help those in need

Dwyer Gunn writes: Bethesda in the state of Maryland is the kind of safe, upscale Washington DC suburb that well-educated, high-earning professionals retreat to when it’s time to raise a family. Some 80 per cent of the city’s adult residents have college degrees. Bethesda’s posh Bradley Manor-Longwood neighbourhood was recently ranked the second richest in the country. And yet, on 11 March 2011, a young woman was brutally murdered by a fellow employee at a local Lululemon store (where yoga pants retail for about $100 each). Two employees of the Apple store next door heard the murder as it occurred, debated, and ultimately decided not to call the police.

If the attack had occurred in poor, crowded, crime-ridden Rio de Janeiro, the outcome might have been different: in one series of experiments, researchers found bystanders in the Brazilian city to be extraordinarily helpful, stepping in to offer a hand to a blind person and aiding a stranger who dropped a pen nearly 100 per cent of the time. This apparent paradox reflects a nuanced understanding of ‘bystander apathy’, the term coined by the US psychologists John Darley and Bibb Latané in the 1960s to describe the puzzling, and often horrifying, inaction of witnesses to intervene in violent crimes or other tragedies.

The phenomenon first received widespread attention in 1964, when the New York bar manager Kitty Genovese was sexually assaulted and murdered outside her apartment building in the borough of Queens. Media coverage focused on the alleged inaction of her neighbours – The New York Times’s defining story opened with the chilling assertion that: ‘For more than half an hour, 38 respectable, law-abiding citizens in Queens watched a killer stalk and stab a woman in three separate attacks.’ Over the years, that media account has been largely debunked, but the incident served to establish a narrative that persists today: society has changed irrevocably for the worse, and the days of neighbour helping neighbour are a nicety of the past. True or not, the Genovese story became a cultural meme for callousness and man’s inhumanity to man, a trend said to signify our modern age.

It also launched a whole new field of study. [Continue reading…]

The value of intuition in an uncertain world

Harvard Business Review: Researchers have confronted us in recent years with example after example of how we humans get things wrong when it comes to making decisions. We misunderstand probability, we’re myopic, we pay attention to the wrong things, and we just generally mess up. This popular triumph of the “heuristics and biases” literature pioneered by psychologists Daniel Kahneman and Amos Tversky has made us aware of flaws that economics long glossed over, and led to interesting innovations in retirement planning and government policy.

It is not, however, the only lens through which to view decision-making. Psychologist Gerd Gigerenzer has spent his career focusing on the ways in which we get things right, or could at least learn to. In Gigerenzer’s view, using heuristics, rules of thumb, and other shortcuts often leads to better decisions than the models of “rational” decision-making developed by mathematicians and statisticians.

Gerd Gigerenzer: Gut feelings are tools for an uncertain world. They’re not caprice. They are not a sixth sense or God’s voice. They are based on lots of experience, an unconscious form of intelligence.

I’ve worked with large companies and asked decision makers how often they base an important professional decision on that gut feeling. In the companies I’ve worked with, which are large international companies, about 50% of all decisions are at the end a gut decision.

But the same managers would never admit this in public. There’s fear of being made responsible if something goes wrong, so they have developed a few strategies to deal with this fear. One is to find reasons after the fact. A top manager may have a gut feeling, but then he asks an employee to find facts the next two weeks, and thereafter the decision is presented as a fact-based, big-data-based decision. That’s a waste of time, intelligence, and money. The more expensive version is to hire a consulting company, which will provide a 200-page document to justify the gut feeling. And then there is the most expensive version, namely defensive decision making. Here, a manager feels he should go with option A, but if something goes wrong, he can’t explain it, so that’s not good. So he recommends option B, something of a secondary or third-class choice. Defensive decision-making hurts the company and protects the decision maker. In the studies I’ve done with large companies, it happens in about a third to half of all important decisions. You can imagine how much these companies lose.

HBR: But there is a move in business towards using data more intelligently. There’s exploding amounts of it in certain industries, and definitely in the pages of HBR, it’s all about “Gee, how do I automate more of these decisions?”

GG: That’s a good strategy if you have a business in a very stable world. Big data has a long tradition in astronomy. For thousands of years, people have collected amazing data, and the heavenly bodies up there are fairly stable, relative to our short time of lives. But if you deal with an uncertain world, big data will provide an illusion of certainty. For instance, in Risk Savvy I’ve analyzed the predictions of the top investment banks worldwide on exchange rates. If you look at that, then you know that big data fails. In an uncertain world you need something else. Good intuitions, smart heuristics. [Continue reading…]

How the brain creates personality: A new theory

Stephen M. Kosslyn and G. Wayne Miller write: It is possible to examine any object — including a brain — at different levels. Take the example of a building. If we want to know whether the house will have enough space for a family of five, we want to focus on the architectural level; if we want to know how easily it could catch fire, we want to focus on the materials level; and if we want to engineer a product for a brick manufacturer, we focus on molecular structure.

Similarly, if we want to know how the brain gives rise to thoughts, feelings, and behaviors, we want to focus on the bigger picture of how its structure allows it to store and process information — the architecture, as it were. To understand the brain at this level, we don’t have to know everything about the individual connections among brain cells or about any other biochemical process. We use a relatively high level of analysis, akin to architecture in buildings, to characterize relatively large parts of the brain.

To explain the Theory of Cognitive Modes, which specifies general ways of thinking that underlie how a person approaches the world and interacts with other people, we need to provide you with a lot of information. We want you to understand where this theory came from — that we didn’t just pull it out of a hat or make it up out of whole cloth. But there’s no need to lose the forest for the trees: there are only three key points that you will really need to keep in mind.

First, the top parts and the bottom parts of the brain have different functions. The top brain formulates and executes plans (which often involve deciding where to move objects or how to move the body in space), whereas the bottom brain classifies and interprets incoming information about the world. The two halves always work together; most important, the top brain uses information from the bottom brain to formulate its plans (and to reformulate them, as they unfold over time).

Second, according to the theory, people vary in the degree that they tend to rely on each of the two brain systems for functions that are optional (i.e., not dictated by the immediate situation): Some people tend to rely heavily on both brain systems, some rely heavily on the bottom brain system but not the top, some rely heavily on the top but not the bottom, and some don’t rely heavily on either system.

Third, these four scenarios define four basic cognitive modes — general ways of thinking that underlie how a person approaches the world and interacts with other people. According to the Theory of Cognitive Modes, each of us has a particular dominant cognitive mode, which affects how we respond to situations we encounter and how we relate to others. The possible modes are: Mover Mode, Perceiver Mode, Stimulator Mode, and Adaptor Mode. [Continue reading…]
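
Read as a simple two-by-two classification, the theory can be sketched in a few lines of code. The sketch below is only an illustration: it assumes the four scenarios pair with the four mode names in the order the excerpt lists them (both systems, bottom only, top only, neither), and it reduces “relies heavily” to a yes-or-no flag, a simplification rather than the book’s actual assessment.

```python
def cognitive_mode(relies_on_top: bool, relies_on_bottom: bool) -> str:
    """Map heavy reliance on the top/bottom brain systems to a mode name.

    The scenario-to-mode pairing follows the order in which the excerpt
    lists them; it is an assumption of this sketch, not a quotation.
    """
    if relies_on_top and relies_on_bottom:
        return "Mover Mode"       # relies heavily on both systems
    if relies_on_bottom:
        return "Perceiver Mode"   # bottom (interpreting) but not top (planning)
    if relies_on_top:
        return "Stimulator Mode"  # top (planning) but not bottom (interpreting)
    return "Adaptor Mode"         # relies heavily on neither optional system


if __name__ == "__main__":
    # Enumerate the four scenarios described in the excerpt.
    for top in (True, False):
        for bottom in (True, False):
            print(f"top={top}, bottom={bottom} -> {cognitive_mode(top, bottom)}")
```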
