Gemma Tarlach writes: Hey dog owners, you’re not imagining it: Researchers think your pooch may be trying to say something with a pout or pleading eyes.
Everyone who lives with dogs may be rolling their eyes right about now and saying “Of course Boopsie/Rex/Potato is smiling/frowning/expressing wide-eyed existential dread,” but heaps of anecdotal evidence don’t mean much in terms of scientific cred. A study out today, however, is a big step toward confirming that dogs use facial expressions in an attempt to communicate with humans.
Within our extended primate clan, particularly orangutans and gibbons, there is evidence that individuals modulate their facial expressions based on whether there’s an audience, which suggests they’re using the expressions as a form of communication. But there’s been no evidence that’s the case among non-primates — their facial expressions have generally been considered involuntary and reflexive displays of emotion.
Interested in testing that notion, researchers from the University of Portsmouth designed an experiment to determine whether the facial expressions of dogs change in the presence of a human audience. [Continue reading…]
Julie Sedivy writes: “[[[When in the course of human events it becomes necessary for one people [to dissolve the political bands [which have connected them with another]] and [to assume among the powers of the earth, the separate and equal station [to which the laws of Nature and of Nature’s God entitle them]]], a decent respect to the opinions of mankind requires [that they should declare the causes [which impel them to the separation]]].”
— Declaration of Independence, opening sentence
An iconic sentence, this. But how did it ever make its way into the world? At 71 words, it is composed of eight separate clauses, each anchored by its own verb, nested within one another in various arrangements. The main clause (a decent respect to the opinions of mankind requires …) hangs suspended above a 50-word subordinate clause that must first be unfurled. Like an intricate equation, the sentence exudes a mathematical sophistication, turning its face toward infinitude.
To some linguists, Noam Chomsky among them, sentences like these illustrate an essential property of human language. These scientists have argued that recursion, a technique that allows chunks of language such as sentences to be embedded inside each other (with no hard limit on the number of nestings), is a universal human ability, perhaps even the one uniquely human ability that supports language. It’s what allows us to create—literally—an infinite variety of novel sentences out of a limited inventory of words.
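Recursion in this grammatical sense maps directly onto recursion in programming: a rule that invokes itself to build arbitrarily deep structure. As a rough illustration (the toy grammar, word lists, and function name below are my own invention, not drawn from any linguistic study), a few lines of Python can generate sentences with clauses embedded inside clauses:

```python
import random

def clause(depth):
    """Build a clause; with some probability, embed another clause
    inside it via 'that'. Nothing in the grammar itself limits nesting;
    only the depth argument keeps the output finite."""
    subject = random.choice(["the dog", "the linguist", "Mary"])
    if depth > 0 and random.random() < 0.7:
        verb = random.choice(["thinks", "says", "claims"])
        # Recursive case: a whole clause sits inside the larger clause.
        return f"{subject} {verb} that {clause(depth - 1)}"
    # Base case: a simple, unembedded clause.
    return f"{subject} {random.choice(['sleeps', 'left', 'smiled'])}"

print(clause(3))  # e.g. "Mary claims that the dog thinks that the linguist left"
```

Because the recursive rule can fire again and again, a tiny lexicon yields an unbounded set of distinct sentences, which is exactly the "infinite variety from a limited inventory" point above.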
But that leads to a curious puzzle: Complex sentences are not ubiquitous among the world’s languages. Many languages have little use for them. They prefer to string together simple clauses. They may even lack certain words such as relative pronouns that and which or connectors like if, despite, and although—these words make it possible to link clauses together into larger sentences. Allegedly, the Pirahã language, spoken along the Maici River of Brazil, lacks recursion altogether. According to linguist Dan Everett, Pirahã speakers avoid linguistic nesting of all kinds, even in structures such as John’s brother’s house. (Instead, they would say something like: Brother’s house. John has a brother. It is the same one.)
This can’t be pinned on biological evolution. All evidence suggests that humans around the world are born with more or less the same brains. Abundant childhood exposure to a language with layered sentences practically guarantees their mastery. Even adult Pirahã speakers, who have remained unusually isolated from European languages, pick up the trick of complex syntax, provided that they spend enough time interacting with speakers of Brazilian Portuguese, a language that offers an adequate diet of embedded structures.
More useful is the notion of linguistic evolution. It’s the languages themselves, rather than the brains, that have evolved along different paths. And just as different species are shaped by adaptations to specific ecological niches, certain linguistic features—like sentence complexity—survive and thrive under some circumstances, whereas other features take hold and spread within very different niches. [Continue reading…]
By Jay Schwartz
A chimpanzee is strolling along a trail through the lush Budongo Forest in Uganda when he spots a deadly Gaboon viper. Chimps have an alarm call for scenarios like these: a soft “hoo” grunt that alerts others to potential danger. But there’s no point in alerting his group mates if they’re already aware of the threat. So what does he do?
This is the question that Catherine Crockford, a primatologist at the Max Planck Institute for Evolutionary Anthropology, and her colleagues were keen to answer. They are the ones who’d put the viper—a convincing model made out of wire mesh and plaster—in the chimp’s path. It sounds like a silly prank, trying to surprise a chimp with a model snake. But the researchers were trying to get at an elusive and profound question: How much of what a chimp “says” is intentional communication?
Their findings, published in 2012, along with those of a 2013 follow-up study by University of York psychologist Katie Slocombe and colleagues, challenged long-held assumptions about what makes humans unique among our primate relatives.
Researchers have spent decades endeavoring to unravel the depth of communication that nonhuman primates can achieve. Do they have words as we would think of them? Do they have grammar? Since language is so integral to our identity as humans, these questions get to the heart of what it means to be human. While the public tends to imbue every cat meow and dog bark with meaning, scientists have traditionally taken a much more conservative approach, favoring the least cognitive explanations and assuming that animal vocalizations are involuntary and emotional. “Conservatism is essential if animal cognition work is to be taken seriously,” says Slocombe.
We can’t see inside primate brains (at least not without a lot of practical and ethical difficulty), or ask primates what they mean or why they vocalize. So primate-communication researchers have been forced to devise clever studies to work out what’s going on in their subjects’ minds.
Stephen T Asma writes: Richard Klein, Maurice Bloch and other prominent paleoanthropologists place the imagination quite late in the history of our species, thousands of years after the emergence of anatomically modern humans. In part, this theory reflects a bias that artistic faculties are a kind of evolutionary cheesecake – sweet desserts that emerge as byproducts of more serious cognitive adaptations such as language and logic. More importantly, it is premised on the relatively late appearance of cave art in the Upper Paleolithic period (c. 38,000 years ago). It is common for archaeologists to assume that imagination evolves late, after language, and the cave paintings are a sign of modern minds at work, thinking and creating just as we do today.
Contrary to this interpretation, I want to suggest that imagination, properly understood, is one of the earliest human abilities, not a recent arrival. Thinking and communicating are vastly improved by language, it is true. But ‘thinking with imagery’ and even ‘thinking with the body’ must have preceded language by hundreds of thousands of years. It is part of our mammalian inheritance to read, store and retrieve emotionally coded representations of the world, and we do this via conditioned associations, not propositional coding.
Lions on the savanna, for example, learn and make predictions because experience forges strong associations between perception and feeling. Animals appear to use images (visual, auditory, olfactory memories) to navigate novel territories and problems. For early humans, a kind of cognitive gap opened up between stimulus and response – a gap that created the possibility of having multiple responses to a perception, rather than one immediate response. This gap was crucial for the imagination: it created an inner space in our minds. The next step was that early human brains began to generate information, rather than merely record and process it – we began to create representations of things that never were but might be. On this view, imagination extends back into the Pleistocene, at least, and likely emerged slowly in our Homo erectus cousins. [Continue reading…]
Gabriel Popkin writes: Bacteria have an unfortunate — and inaccurate — public image as isolated cells twiddling about on microscope slides. The more that scientists learn about bacteria, however, the more they see that this hermitlike reputation is deeply misleading, like trying to understand human behavior without referring to cities, laws or speech. “People were treating bacteria as … solitary organisms that live by themselves,” said Gürol Süel, a biophysicist at the University of California, San Diego. “In fact, most bacteria in nature appear to reside in very dense communities.”
The preferred form of community for bacteria seems to be the biofilm. On teeth, on pipes, on rocks and in the ocean, microbes glom together by the billions and build sticky organic superstructures around themselves. In these films, bacteria can divide labor: Exterior cells may fend off threats, while interior cells produce food. And like humans, who have succeeded in large part by cooperating with each other, bacteria thrive in communities. Antibiotics that easily dispatch free-swimming cells often prove useless against the same types of cells when they’ve hunkered down in a film.
As in all communities, cohabiting bacteria need ways to exchange messages. Biologists have known for decades that bacteria can use chemical cues to coordinate their behavior. The best-known example, elucidated by Bonnie Bassler of Princeton University and others, is quorum sensing, a process by which bacteria extrude signaling molecules until a high enough concentration triggers cells to form a biofilm or initiate some other collective behavior. [Continue reading…]
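The threshold logic of quorum sensing (cells secrete a signaling molecule that accumulates until a tipping point triggers collective behavior) can be sketched in a few lines. This is a toy illustration with invented parameters, not a model from Bassler's research:

```python
def quorum_sense(num_cells, secretion_rate, decay, threshold, steps):
    """Toy quorum-sensing model: every step, each cell secretes a fixed
    amount of autoinducer, and the accumulated signal partially degrades.
    Once concentration crosses the threshold, the population 'commits'
    to collective behavior (e.g., forming a biofilm)."""
    concentration = 0.0
    for t in range(steps):
        concentration += num_cells * secretion_rate  # all cells secrete
        concentration *= (1 - decay)                 # signal degrades
        if concentration >= threshold:
            return t  # step at which quorum is reached
    return None  # population too sparse; quorum never reached

# A sparse population's signal decays as fast as it accumulates,
# so it never commits; a dense population crosses the threshold quickly.
print(quorum_sense(num_cells=10, secretion_rate=0.1, decay=0.1,
                   threshold=20, steps=100))   # None
print(quorum_sense(num_cells=100, secretion_rate=0.1, decay=0.1,
                   threshold=20, steps=100))   # 2
```

The key feature, mirrored in the biology, is that no individual cell measures the population directly; each one responds only to local signal concentration, and density is inferred from it.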
Arika Okrent writes: Science is a messy business, but just like everything with loose ends and ragged edges, we tend to understand it by resorting to ideal types. On the one hand, there’s the archetype of the scientific method: a means of accounting for observations, generating precise, testable predictions, and yielding new discoveries about the natural consequences of natural laws. On the other, there’s our ever-replenishing font of story archetypes: the accidental event that results in a sudden clarifying insight; the hero who pursues the truth in the face of resistance or even danger; the surprising fact that challenges the dominant theory and brings it toppling to the ground.
The interplay of these archetypes has produced a spirited, long-running controversy about the nature and origins of language. Recently, it’s been flung back into public awareness following the publication of Tom Wolfe’s book The Kingdom of Speech (2016).
In Wolfe’s breathless re-telling, the dominant scientific theory is Noam Chomsky’s concept of a ‘universal grammar’ – the idea that all languages share a deep underlying structure that’s almost certainly baked into our biology by evolution. The crucial hypothesis is that its core, essential feature is recursion, the capacity to embed phrases within phrases ad infinitum, and so express complex relations between ideas (such as ‘Tom says that Dan claims that Noam believes that…’). And the challenging fact is the discovery of an Amazonian language, Pirahã, that does not have recursion. The scientific debate plays out as a classic David-and-Goliath story, with Chomsky as a famous, ivory-tower intellectual whose grand armchair proclamations are challenged by a rugged, lowly field linguist and former Christian missionary named Daniel Everett.
Stories this ripe for dramatisation come along rarely in any branch of science, much less the relatively obscure field of theoretical linguistics. But the truth will always be more complicated than the idealisations we use to understand it. In this case, the details lend themselves so well to juicy, edifice-crumbling story arcs that a deeper, more consequential point tends to be overlooked. It concerns not Everett’s challenge to Chomsky’s theory, but Chomsky’s challenge to the scientific method itself.
This counter-attack takes the form of the Chomskyans’ response to Everett. They say that even if Pirahã has no recursion, it matters not one bit for the theory of universal grammar. The capacity is intrinsic, even if it’s not always exploited. As Chomsky and his colleagues put it in a co-authored paper, ‘our language faculty provides us with a toolkit for building languages, but not all languages use all the tools’. This looks suspiciously like defiance of a central feature of the scientific archetype, one first put forward by the philosopher Karl Popper: theories are not scientific unless they have the potential to be falsified. If you claim that recursion is the essential feature of language, and if the existence of a recursionless language does not debunk your claim, then what could possibly invalidate it?
In an interview with Edge.org in 2007, Everett said he emailed Chomsky: ‘What is a single prediction that universal grammar makes that I could falsify? How could I test it?’ According to Everett, Chomsky replied to say that universal grammar doesn’t make any predictions; it’s a field of study, like biology.
The nub of the disagreement here boils down to what exactly linguistics says about the world, and the appropriate archetypes we should apply to make it effective. So just what kinds of questions does linguistics want to answer? What counts as evidence? Is universal grammar in particular – and theoretical linguistics in general – a science at all? [Continue reading…]
Joshua Roebke writes: We connect to each other through particles. Calls and texts ride flecks of light, Web sites and photographs load on electrons. All communication is, essentially, physical. Information is recorded and broadcast on actual objects, even those we cannot see.
Physicists also connect to the world when they communicate with it. They dispatch glints of light toward particles or atoms, and wait for this light to report back. The light interacts with the bits of matter, and how this interaction changes the light reveals a property or two of the bits—although this interaction often changes the bits, too. The term of art for such a candid affair is a measurement.
Particles even connect to each other using other particles. The force of electromagnetism between two electrons is conveyed by particles of light, and quarks huddle inside a proton because they exchange gluons. Physics is, essentially, the study of interactions.
Information is always conveyed through interactions, whether between particles or ourselves. We are compositions of particles who communicate with each other, and we learn about our surroundings by interacting with them. The better we understand such interactions, the better we understand the world and ourselves.
Physicists already know that interactions are local. As with city politics, the influence of particles is confined to their immediate precincts. Yet interactions remain difficult to describe. Physicists have to treat particles as individuals and add complex terms to their solitary existence to model their intimacies with other particles. The resulting equations are usually impossible to solve. So physicists have to approximate even for single particles, which can interact with themselves as a boat rolls in its own wake. Although physicists are meticulous, it is a wonder they ever succeed. Still, their contentions are the most accurate theories we have.
Quantum mechanics is the consummate theory of particles, so it naturally describes measurements and interactions. During the past few decades, as computers have nudged the quantum, the theory has been reframed to encompass information, too. What quantum mechanics implies for measurements and interactions is notoriously bizarre. Its implications for information are stranger still.
One of the strangest of these implications refutes the material basis of communication as well as common sense. Some physicists believe that we may be able to communicate without transmitting particles. In 2013 an amateur physicist named Hatim Salih even devised a protocol, alongside professionals, in which information is obtained from a place where particles never travel. Information can be disembodied. Communication may not be so physical after all. [Continue reading…]
Quartz reports: A new study shows that the words we use to talk about time also shape our view of its passage. This, say the researchers, indicates that abstract concepts like duration are relative rather than universal, and that they are shaped by language rather than being solely innate.
The work, published in the American Psychological Association’s Journal of Experimental Psychology: General on April 27, examined how Spanish- and Swedish-speaking bilinguals conceived of time. The researchers, from Stockholm University in Sweden and Lancaster University in the UK, found that their subjects (40 of whom were native Swedish speakers and 40 of whom were native Spanish speakers) tended to think about time in terms that correspond to each language’s descriptors when linguistically prompted in that particular language, but otherwise moved fluidly between the two concepts of time. This was true regardless of their native language.
Different languages describe time differently. For example, Swedish and English generally refer to time according to physical distance (“a long time,” “a short break”). Meanwhile, languages like Spanish or Greek, say, refer to time in volume generally (“a big chunk of time,” “a small moment”). [Continue reading…]
Deborah Tannen writes: At Thursday’s Senate hearing, Sen. James E. Risch (R-Idaho) sought former FBI director James B. Comey’s agreement that President Trump did not tell him to drop his investigation of fired national security adviser Michael Flynn: “He did not direct you to let it go.” Comey agreed, “Not in his words, no.” Risch pressed his point: “He did not order you to let it go?” Comey concurred: “Again, those words are not an order.” Yet later in the hearing, in response to Sen. Angus King (I-Maine) asking whether the president’s words were “a directive,” Comey said, “Yes.”
Was Comey contradicting himself? Based on decades of studying indirectness in conversation — and a lifetime of using language to communicate — I’d say no. Risch was talking about the message: the literal meaning of words spoken. King, and later Sen. Kamala D. Harris (D-Calif.), were referring to the metamessage: what it means to say those words in that way in that context. When people talk to each other, they glean meaning from metamessages. But messages come in handy when someone wants to deny a meaning that was obvious when the words were spoken.
The president’s “exact words,” according to Comey’s notes, were: “I hope you can see your way clear to letting this go, to letting Flynn go. He is a good guy. I hope you can let this go.” Risch cried literal meaning. Zeroing in on the word “hope,” he asked Comey if he knew of anyone being charged with a criminal offense because “they hoped for an outcome.” Though he confessed that he didn’t, Comey said, “I took it as, this is what he wants me to do.” Risch rested his case: “You may have taken it as a direction but that’s not what he said.” Donald Trump Jr., the president’s son, later made the same point in a tweet: “Hoping and telling are two very different things.”
Actually, they aren’t, when the speaker is in a position of power, as Harris noted. Referring to her experience as a prosecutor, she said, “When a robber held a gun to somebody’s head and said, ‘I hope you will give me your wallet,’ the word ‘hope’ was not the most operative word at that moment.” The gun gives the robber power to encourage another to make his hope a reality.
Trump Jr. also tweeted, “Knowing my father for 39 years when he ‘orders or tells’ you to do something there is no ambiguity, you will know exactly what he means.” He’s right. Comey knew exactly what he meant. [Continue reading…]
Masha Gessen writes: In the early 1990s, Russian journalists were engaged in the project of reinventing journalism—which itself had been used to perform the opposite of conveying reliable information. Language was a problem. The language of politics had been pillaged, as had the language of values and even the language of feelings: after decades of performing revolutionary passion, people had become weary of the very idea of passion. So the new Russian journalists opted for language that was descriptive in the most direct way: we tried to stick to verbs and nouns, and only to things that could be directly observed. It was the journalistic equivalent of the hardware store: if the shape of a word could not be clearly described and its weight could not be measured, it could not be used. This kind of language is good for describing things that are in front of your eyes and terrible for conveying the contents of your mind or heart. It was constraining.
Writing in Russian was a challenging exercise akin to navigating a minefield: one misstep could discredit the entire enterprise. Compared to this, writing in English was freedom. But then things in Russia got worse. A new government came in, and did new damage to the language. Vladimir Putin declared a “dictatorship of the law.” His main ideologue advanced the idea of “managed democracy.” Temporary president Dmitry Medvedev said, “Freedom is better than unfreedom.” Now words did not mean their opposite anymore. They just meant nothing. The phrase “dictatorship of the law” is so incoherent as to render both “dictatorship” and “law” meaningless.
Donald Trump has an instinct for doing both of these kinds of violence to language. He is particularly adept at taking words and phrases that deal with power relationships and turning them into their opposite. This was, for example, how he used the phrase “safe space” when talking about vice-president-elect Mike Pence’s visit to the musical Hamilton. Pence, if you recall, was booed and then passionately—and respectfully—addressed by the cast of the show. Trump was tweeting that this should not have happened. Now, the phrase “safe space” was coined to describe a place where people who usually feel unsafe and powerless would feel exceptionally safe. Claiming that the second most powerful man in the world should be granted a “safe space” in public turns the concept precisely on its head.
Trump performed the exact same trick on the phrase “witch hunt,” which he claimed was being carried out by Democrats to avenge their electoral loss. Witch hunts cannot actually be carried out by losers, big or small: the agent of a witch hunt must have power. And, of course, he has seized and flipped the term “fake news” in much the same way. [Continue reading…]
Ferris Jabr writes: [Con] Slobodchikoff, an emeritus professor of biology at Northern Arizona University, has been analyzing the sounds of prairie dogs for more than 30 years. Not long after he started, he learned that prairie dogs had distinct alarm calls for different predators. Around the same time, separate researchers found that a few other species had similar vocabularies of danger. What Slobodchikoff claimed to discover in the following decades, however, was extraordinary: Beyond identifying the type of predator, prairie-dog calls also specified its size, shape, color and speed; the animals could even combine the structural elements of their calls in novel ways to describe something they had never seen before. No scientist had ever put forward such a thorough guide to the native tongue of a wild species or discovered one so intricate. Prairie-dog communication is so complex, Slobodchikoff says — so expressive and rich in information — that it constitutes nothing less than language.
That would be an audacious claim to make about even the most overtly intelligent species — say, a chimpanzee or a dolphin — let alone some kind of dirt hamster with a brain that barely weighs more than a grape. The majority of linguists and animal-communication experts maintain that language is restricted to a single species: ourselves. Perhaps because it is so ostensibly entwined with thought, with consciousness and our sense of self, language is the last bastion encircling human exceptionalism. To concede that we share language with other species is to finally and fully admit that we are different from other animals only in degree, not in kind. In many people’s minds, language is the “cardinal distinction between man and animal, a sheerly dividing line as abrupt and immovable as a cliff,” as Tom Wolfe argues in his book “The Kingdom of Speech,” published last year.
Slobodchikoff thinks that dividing line is an illusion. To him, the idea that a human might have a two-way conversation with another species, even a humble prairie dog, is not a pretense; it’s an inevitability. And the notion that animals of all kinds routinely engage in sophisticated discourse with one another — that the world’s ecosystems reverberate with elaborate animal idioms just waiting to be translated — is not Doctor Dolittle-inspired nonsense; it is fact. [Continue reading…]
Nicholas Carr writes: Welcome to the global village. It’s a nasty place.
On Easter Sunday, a man in Cleveland filmed himself murdering a random 74-year-old and posted the video on Facebook. The social network took the grisly clip down within two or three hours, but not before users shared it on other websites — where people around the world can still view it.
Surely incidents like this aren’t what Mark Zuckerberg had in mind. In 2012, as his company was preparing to go public, the Facebook founder wrote an earnest letter to would-be shareholders explaining that his company was more than just a business. It was pursuing a “social mission” to make the world a better place by encouraging self-expression and conversation. “People sharing more,” the young entrepreneur wrote, “creates a more open culture and leads to a better understanding of the lives and perspectives of others.”
Earlier this year, Zuckerberg penned another public letter, expressing even grander ambitions. Facebook, he announced, is expanding its mission from “connecting friends and family” to building “a global community that works for everyone.” The ultimate goal is to turn the already vast social network into a sort of supranational state “spanning cultures, nations and regions.”
But the murder in Cleveland, and any similar incidents that inevitably follow, reveal the hollowness of Silicon Valley’s promise that digital networks would bring us together in a more harmonious world.
Whether he knows it or not, Zuckerberg is part of a long tradition in Western thought. Ever since the building of the telegraph system in the 19th century, people have believed that advances in communication technology would promote social harmony. The more we learned about each other, the more we would recognize that we’re all one. In an 1899 article celebrating the laying of transatlantic Western Union cables, a New York Times columnist expressed the popular assumption well: “Nothing so fosters and promotes a mutual understanding and a community of sentiment and interests as cheap, speedy, and convenient communication.”
The great networks of the 20th century — radio, telephone, TV — reinforced this sunny notion. Spanning borders and erasing distances, they shrank the planet. Guglielmo Marconi declared in 1912 that his invention of radio would “make war impossible, because it will make war ridiculous.” AT&T’s top engineer, J.J. Carty, predicted in a 1923 interview that the telephone system would “join all the peoples of the earth in one brotherhood.” In his 1962 book “The Gutenberg Galaxy,” the media theorist Marshall McLuhan gave us the memorable term “global village” to describe the world’s “new electronic interdependence.” Most people took the phrase optimistically, as a prophecy of inevitable social progress. What, after all, could be nicer than a village?
If our assumption that communication brings people together were true, we should today be seeing a planetary outbreak of peace, love, and understanding. Thanks to the Internet and cellular networks, humanity is more connected than ever. Of the world’s 7 billion people, 6 billion have access to a mobile phone — a billion and a half more, the United Nations reports, than have access to a working toilet. Nearly 2 billion are on Facebook, more than a billion upload and download YouTube videos, and billions more converse through messaging apps like WhatsApp and WeChat. With smartphone in hand, everyone becomes a media hub, transmitting and receiving ceaselessly.
Yet we live in a fractious time, defined not by concord but by conflict. Xenophobia is on the rise. Political and social fissures are widening. From the White House down, public discourse is characterized by vitriol and insult. We probably shouldn’t be surprised. [Continue reading…]
Julie Sedivy writes: Reading medieval literature, it’s hard not to be impressed with how much the characters get done—as when we read about King Harold doing battle in one of the Sagas of the Icelanders, written in about 1230. The first sentence bristles with purposeful action: “King Harold proclaimed a general levy, and gathered a fleet, summoning his forces far and wide through the land.” By the end of the third paragraph, the king has launched his fleet against a rebel army, fought numerous battles involving “much slaughter in either host,” bound up the wounds of his men, dispensed rewards to the loyal, and “was supreme over all Norway.” What the saga doesn’t tell us is how Harold felt about any of this, whether his drive to conquer was fueled by a tyrannical father’s barely concealed contempt, or whether his legacy ultimately surpassed or fell short of his deepest hopes.
Jump ahead about 770 years in time, to the fiction of David Foster Wallace. In his short story “Forever Overhead,” the 13-year-old protagonist takes 12 pages to walk across the deck of a public swimming pool, wait in line at the high diving board, climb the ladder, and prepare to jump. But over these 12 pages, we are taken into the burgeoning, buzzing mind of a boy just erupting into puberty—our attention is riveted to his newly focused attention on female bodies in swimsuits, we register his awareness that others are watching him as he hesitates on the diving board, we follow his undulating thoughts about whether it’s best to do something scary without thinking about it or whether it’s foolishly dangerous not to think about it.
These examples illustrate Western literature’s gradual progression from narratives that relate actions and events to stories that portray minds in all their meandering, many-layered, self-contradictory complexities. I’d often wondered, when reading older texts: Weren’t people back then interested in what characters thought and felt? [Continue reading…]
Ben Panko writes: Thirteen years ago, a deadly strain of avian flu known as H5N1 was tearing through Asia’s bird populations. In January 2004, Chinese scientists reported that pigs too had become infected with the virus—an alarming development, since pigs are susceptible to human viruses and could potentially act as a “mixing vessel” that would allow the virus to jump to humans. “Urgent attention should be paid to the pandemic preparedness of these two subtypes of influenza,” the scientists wrote in their study.
Yet at the time, little attention was paid outside of China—because the study was published only in Chinese, in a small Chinese journal of veterinary medicine.
It wasn’t until August of that year that the World Health Organization and the United Nations learned of the study’s results and rushed to have it translated. Those scientists and policy makers ran headlong into one of science’s biggest unsolved dilemmas: language. A new study in the journal PLOS Biology sheds light on how wide the gulf can be between English-language science and science in any other language, and how that gap can lead to situations like the avian flu case, or worse. [Continue reading…]
After one of the most divisive presidential elections in American history, many of us may be anxious about dinner-table dialogue with family and friends this Thanksgiving. There is no denying that the way we communicate about politics has fundamentally changed with the proliferation of technology and social media. Twitter bots, fake news and echo chambers are just a few of the highlights from this election season. Much of how we’re conversing online can’t – and shouldn’t – be replicated around the family table. We are getting out of practice at conducting meaningful, respectful conversation.
There’s not a quick fix. We need more empathic communication – the slow, deep (inter)personal discourse that can nurture identity and build and strengthen relationships. Yet contemporary communication platforms can make it harder to build empathy with conversational partners. Even the phrase “conversational partners” seems unfitting in the world of 140-character limits, followers, likes and shares. In many ways, our devices help us talk at (@?) instead of with one another.
Literally meaning “in-feeling,” empathy is a process of internalizing another person’s perspective. Empathy-building is unselfish; you suspend your own sensibilities and try to fully imagine and embrace those of someone else. You can gain empathy by learning about other cultures from different media, by experiencing what others have gone through personally, or by having deep conversations with others.
My research into cross-cultural communications has taught me that empathy is not only the key to feeling connected – “I understand you” – but also the foundation for changing our narratives about one another – “now I see we are not so different.” That’s an important point to remember after such a difficult political experience. Building empathy requires communication, specifically talking to one another. But, not just any talking will suffice – especially not the type of talking promoted by today’s highly popular communication technologies.
Financial Times reports: UK-based insurer Admiral has come up with a way to crunch through social media posts to work out who deserves a lower premium. People who seem cautious and deliberate in their choice of words are likely to pay a lot less than those with overconfident remarks. [Continue reading…]
Ivan Krastev writes: In our increasingly Anglophone world, Americans have become nakedly transparent to English speakers everywhere, yet the world remains bafflingly and often frighteningly opaque to monolingual Americans. While the world devours America’s movies and follows its politics closely, Americans know precious little about how non-Americans think and live. Americans have never heard of other countries’ movie stars and have only the vaguest notions of what their political conflicts are about.
This gross epistemic asymmetry is a real weakness. When WikiLeaks revealed the secret cables of the American State Department or leaked the emails of the Clinton campaign, it became a global news sensation and a major embarrassment for American diplomacy. Leaking Chinese diplomatic cables or Russian officials’ emails could never become a worldwide human-interest story, simply because only a relative handful of non-Chinese or non-Russians could read them, let alone make sense of them. [Continue reading…]
Although I’m pessimistic about the prospects of the meek inheriting the earth, the bilingual are in a very promising position. And Anglo-Americans should never forget that this is, after all, a country with a Spanish name. As for where I stand personally, I’m with the bilingual camp in spirit, even if my own claim to be bilingual is a bit tenuous: an English speaker who understands American English but speaks British English; does that count?