Author Archives: Attention to the Unseen

The surprising forces influencing the complexity of the language we speak and write

Julie Sedivy writes: “[[[When in the course of human events it becomes necessary for one people [to dissolve the political bands [which have connected them with another]] and [to assume among the powers of the earth, the separate and equal station [to which the laws of Nature and of Nature’s God entitle them]]], a decent respect to the opinions of mankind requires [that they should declare the causes [which impel them to the separation]]].”
— Declaration of Independence, opening sentence

An iconic sentence, this. But how did it ever make its way into the world? At 71 words, it is composed of eight separate clauses, each anchored by its own verb, nested within one another in various arrangements. The main clause (a decent respect to the opinions of mankind requires …) hangs suspended above a 50-word subordinate clause that must first be unfurled. Like an intricate equation, the sentence exudes a mathematical sophistication, turning its face toward infinitude.

To some linguists, Noam Chomsky among them, sentences like these illustrate an essential property of human language. These scientists have argued that recursion, a technique that allows chunks of language such as sentences to be embedded inside each other (with no hard limit on the number of nestings), is a universal human ability, perhaps even the one uniquely human ability that supports language. It’s what allows us to create—literally—an infinite variety of novel sentences out of a limited inventory of words.

But that leads to a curious puzzle: Complex sentences are not ubiquitous among the world’s languages. Many languages have little use for them. They prefer to string together simple clauses. They may even lack the words that make it possible to link clauses into larger sentences, such as the relative pronouns that and which, or connectors like if, despite, and although. The Pirahã language, spoken along the Maici River of Brazil, allegedly lacks recursion altogether. According to linguist Dan Everett, Pirahã speakers avoid linguistic nesting of all kinds, even in structures such as John’s brother’s house. (Instead, they would say something like: Brother’s house. John has a brother. It is the same one.)

This can’t be pinned on biological evolution. All evidence suggests that humans around the world are born with more or less the same brains. Abundant childhood exposure to a language with layered sentences practically guarantees their mastery. Even adult Pirahã speakers, who have remained unusually isolated from European languages, pick up the trick of complex syntax, provided that they spend enough time interacting with speakers of Brazilian Portuguese, a language that offers an adequate diet of embedded structures.

More useful is the notion of linguistic evolution. It’s the languages themselves, rather than the brains, that have evolved along different paths. And just as different species are shaped by adaptations to specific ecological niches, certain linguistic features—like sentence complexity—survive and thrive under some circumstances, whereas other features take hold and spread within very different niches. [Continue reading…]
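
As an aside, the recursion Sedivy describes is the same device programmers lean on every day: a rule that refers to itself. As a rough illustration only (the toy vocabulary, the clause template, and the clause() function below are my own invention, not anything drawn from the article or from linguistic fieldwork), a few lines of Python show how a single self-referential rule turns a tiny word list into clauses nested to any depth you ask for, and in principle an unbounded number of distinct sentences:

```python
import random

# A toy recursive grammar: a clause may embed a further clause introduced by a
# relative pronoun or a connector. The word lists are invented for illustration.
NOUNS = ["the dog", "the brother", "the house", "the river"]
VERBS = ["saw", "built", "remembered", "crossed"]
CONNECTORS = ["that", "which", "because", "although"]

def clause(depth: int) -> str:
    """Build a clause; while depth remains, nest another clause inside it."""
    core = f"{random.choice(NOUNS)} {random.choice(VERBS)} {random.choice(NOUNS)}"
    if depth > 0:
        # The self-referential call is the recursion: a clause inside a clause.
        return f"{core} {random.choice(CONNECTORS)} {clause(depth - 1)}"
    return core

if __name__ == "__main__":
    # Nothing in the rule itself caps the depth; only the argument we pass does.
    for d in range(4):
        print(f"depth {d}: {clause(d)}")
```

Each extra unit of depth embeds one more clause, which is all the infinitude in the quoted passage amounts to: the rule never runs out, even though the vocabulary is finite.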


Primate vocalizations are much more than gibberish

By Jay Schwartz

A chimpanzee is strolling along a trail through the lush Budongo Forest in Uganda when he spots a deadly Gaboon viper. Chimps have an alarm call for scenarios like these: a soft “hoo” grunt that alerts others to potential danger. But there’s no point in alerting his group mates if they’re already aware of the threat. So what does he do?

This is the question that Catherine Crockford, a primatologist at the Max Planck Institute for Evolutionary Anthropology, and her colleagues were keen to answer. They are the ones who’d put the viper—a convincing model made out of wire mesh and plaster—in the chimp’s path. It sounds like a silly prank, trying to surprise a chimp with a model snake. But the researchers were trying to get at an elusive and profound question: How much of what a chimp “says” is intentional communication?

Their findings, published in 2012, along with those of a 2013 follow-up study by University of York psychologist Katie Slocombe and colleagues, challenged long-held assumptions about what makes humans unique among our primate relatives.

Researchers have spent decades endeavoring to unravel the depth of communication that nonhuman primates can achieve. Do they have words as we would think of them? Do they have grammar? Since language is so integral to our identity as humans, these questions get to the heart of what it means to be human. While the public tends to imbue every cat meow and dog bark with meaning, scientists have traditionally taken a much more conservative approach, favoring the least cognitive explanations and assuming that animal vocalizations are involuntary and emotional. “Conservatism is essential if animal cognition work is to be taken seriously,” says Slocombe.

We can’t see inside primate brains (at least not without a lot of practical and ethical difficulty), or ask primates what they mean or why they vocalize. So primate-communication researchers have been forced to devise clever studies to work out what’s going on in their subjects’ minds.

Continue reading


What happens if China makes first contact?

Ross Andersen writes: Last January, the Chinese Academy of Sciences invited Liu Cixin, China’s preeminent science-fiction writer, to visit its new state-of-the-art radio dish in the country’s southwest. Almost twice as wide as the dish at America’s Arecibo Observatory, in the Puerto Rican jungle, the new Chinese dish is the largest in the world, if not the universe. Though it is sensitive enough to detect spy satellites even when they’re not broadcasting, its main uses will be scientific, including an unusual one: The dish is Earth’s first flagship observatory custom-built to listen for a message from an extraterrestrial intelligence. If such a sign comes down from the heavens during the next decade, China may well hear it first.

In some ways, it’s no surprise that Liu was invited to see the dish. He has an outsize voice on cosmic affairs in China, and the government’s aerospace agency sometimes asks him to consult on science missions. Liu is the patriarch of the country’s science-fiction scene. Other Chinese writers I met attached the honorific Da, meaning “Big,” to his surname. In years past, the academy’s engineers sent Liu illustrated updates on the dish’s construction, along with notes saying how he’d inspired their work.

But in other ways Liu is a strange choice to visit the dish. He has written a great deal about the risks of first contact. He has warned that the “appearance of this Other” might be imminent, and that it might result in our extinction. “Perhaps in ten thousand years, the starry sky that humankind gazes upon will remain empty and silent,” he writes in the postscript to one of his books. “But perhaps tomorrow we’ll wake up and find an alien spaceship the size of the Moon parked in orbit.”

In recent years, Liu has joined the ranks of the global literati. In 2015, his novel The Three-Body Problem became the first work in translation to win the Hugo Award, science fiction’s most prestigious prize. Barack Obama told The New York Times that the book—the first in a trilogy—gave him cosmic perspective during the frenzy of his presidency. Liu told me that Obama’s staff asked him for an advance copy of the third volume.

At the end of the second volume, one of the main characters lays out the trilogy’s animating philosophy. No civilization should ever announce its presence to the cosmos, he says. Any other civilization that learns of its existence will perceive it as a threat to expand—as all civilizations do, eliminating their competitors until they encounter one with superior technology and are themselves eliminated. This grim cosmic outlook is called “dark-forest theory,” because it conceives of every civilization in the universe as a hunter hiding in a moonless woodland, listening for the first rustlings of a rival. [Continue reading…]


Consciousness began when the gods stopped speaking

Veronique Greenwood writes: Julian Jaynes was living out of a couple of suitcases in a Princeton dorm in the early 1970s. He must have been an odd sight there among the undergraduates, some of whom knew him as a lecturer who taught psychology, holding forth in a deep baritone voice. He was in his early 50s, a fairly heavy drinker, untenured, and apparently uninterested in tenure. His position was marginal. “I don’t think the university was paying him on a regular basis,” recalls Roy Baumeister, then a student at Princeton and today a professor of psychology at Florida State University. But among the youthful inhabitants of the dorm, Jaynes was working on his masterpiece, and had been for years.

From the age of 6, Jaynes had been transfixed by the singularity of conscious experience. Gazing at a yellow forsythia flower, he’d wondered how he could be sure that others saw the same yellow as he did. As a young man, serving three years in a Pennsylvania prison for declining to support the war effort, he watched a worm in the grass of the prison yard one spring, wondering what separated the unthinking earth from the worm and the worm from himself. It was the kind of question that dogged him for the rest of his life, and the book he was working on would grip a generation beginning to ask themselves similar questions.

The Origin of Consciousness in the Breakdown of the Bicameral Mind, when it finally came out in 1976, did not look like a best-seller. But sell it did. It was reviewed in science magazines and psychology journals, Time, The New York Times, and the Los Angeles Times. It was nominated for a National Book Award in 1978. New editions continued to come out, as Jaynes went on the lecture circuit. Jaynes died of a stroke in 1997; his book lived on. In 2000, another new edition hit the shelves. It continues to sell today.

In the beginning of the book, Jaynes asks, “This consciousness that is myself of selves, that is everything, and yet nothing at all—what is it? And where did it come from? And why?” Jaynes answers by unfurling a version of history in which humans were not fully conscious until about 3,000 years ago, instead relying on a two-part, or bicameral, mind, with one half speaking to the other in the voice of the gods with guidance whenever a difficult situation presented itself. The bicameral mind eventually collapsed as human societies became more complex, and our forebears awoke with modern self-awareness, complete with an internal narrative, which Jaynes believed has its roots in language. [Continue reading…]


The case against civilization

John Lanchester writes: Science and technology: we tend to think of them as siblings, perhaps even as twins, as parts of STEM (for “science, technology, engineering, and mathematics”). When it comes to the shiniest wonders of the modern world—as the supercomputers in our pockets communicate with satellites—science and technology are indeed hand in glove. For much of human history, though, technology had nothing to do with science. Many of our most significant inventions are pure tools, with no scientific method behind them. Wheels and wells, cranks and mills and gears and ships’ masts, clocks and rudders and crop rotation: all have been crucial to human and economic development, and none historically had any connection with what we think of today as science. Some of the most important things we use every day were invented long before the adoption of the scientific method. I love my laptop and my iPhone and my Echo and my G.P.S., but the piece of technology I would be most reluctant to give up, the one that changed my life from the first day I used it, and that I’m still reliant on every waking hour—am reliant on right now, as I sit typing—dates from the thirteenth century: my glasses. Soap prevented more deaths than penicillin. That’s technology, not science.

In “Against the Grain: A Deep History of the Earliest States,” James C. Scott, a professor of political science at Yale, presents a plausible contender for the most important piece of technology in the history of man. It is a technology so old that it predates Homo sapiens and instead should be credited to our ancestor Homo erectus. That technology is fire. We have used it in two crucial, defining ways. The first and the most obvious of these is cooking. As Richard Wrangham has argued in his book “Catching Fire,” our ability to cook allows us to extract more energy from the food we eat, and also to eat a far wider range of foods. Our closest animal relative, the chimpanzee, has a colon three times as large as ours, because its diet of raw food is so much harder to digest. The extra caloric value we get from cooked food allowed us to develop our big brains, which absorb roughly a fifth of the energy we consume, as opposed to less than a tenth for most mammals’ brains. That difference is what has made us the dominant species on the planet.

The other reason fire was central to our history is less obvious to contemporary eyes: we used it to adapt the landscape around us to our purposes. Hunter-gatherers would set fires as they moved, to clear terrain and make it ready for fast-growing, prey-attracting new plants. They would also drive animals with fire. They used this technology so much that, Scott thinks, we should date the human-dominated phase of earth, the so-called Anthropocene, from the time our forebears mastered this new tool.

We don’t give the technology of fire enough credit, Scott suggests, because we don’t give our ancestors much credit for their ingenuity over the long period—ninety-five per cent of human history—during which most of our species were hunter-gatherers. “Why human fire as landscape architecture doesn’t register as it ought to in our historical accounts is perhaps that its effects were spread over hundreds of millennia and were accomplished by ‘precivilized’ peoples also known as ‘savages,’ ” Scott writes. To demonstrate the significance of fire, he points to what we’ve found in certain caves in southern Africa. The earliest, oldest strata of the caves contain whole skeletons of carnivores and many chewed-up bone fragments of the things they were eating, including us. Then comes the layer from when we discovered fire, and ownership of the caves switches: the human skeletons are whole, and the carnivores are bone fragments. Fire is the difference between eating lunch and being lunch.

Anatomically modern humans have been around for roughly two hundred thousand years. For most of that time, we lived as hunter-gatherers. Then, about twelve thousand years ago, came what is generally agreed to be the definitive before-and-after moment in our ascent to planetary dominance: the Neolithic Revolution. This was our adoption of, to use Scott’s word, a “package” of agricultural innovations, notably the domestication of animals such as the cow and the pig, and the transition from hunting and gathering to planting and cultivating crops. The most important of these crops have been the cereals—wheat, barley, rice, and maize—that remain the staples of humanity’s diet. Cereals allowed population growth and the birth of cities, and, hence, the development of states and the rise of complex societies.

The story told in “Against the Grain” heavily revises this widely held account. Scott’s specialty is not early human history. His work has focussed on a skeptical, peasant’s-eye view of state formation; the trajectory of his interests can be traced in the titles of his books, from “The Moral Economy of the Peasant” to “The Art of Not Being Governed.” His best-known book, “Seeing Like a State,” has become a touchstone for political scientists, and amounts to a blistering critique of central planning and “high modernism,” the idea that officials at the center of a state know better than the people they are governing. Scott argues that a state’s interests and the interests of subjects are often not just different but opposite. Stalin’s project of farm collectivization “served well enough as a means whereby the state could determine cropping patterns, fix real rural wages, appropriate a large share of whatever grain was produced, and politically emasculate the countryside”; it also killed many millions of peasants.

Scott’s new book extends these ideas into the deep past, and draws on existing research to argue that ours is not a story of linear progress, that the time line is much more complicated, and that the causal sequences of the standard version are wrong. He focusses his account on Mesopotamia—roughly speaking, modern-day Iraq—because it is “the heartland of the first ‘pristine’ states in the world,” the term “pristine” here meaning that these states bore no watermark from earlier settlements and were the first time any such social organizations had existed. They were the first states to have written records, and they became a template for other states in the Near East and in Egypt, making them doubly relevant to later history.

The big news to emerge from recent archeological research concerns the time lag between “sedentism,” or living in settled communities, and the adoption of agriculture. Previous scholarship held that the invention of agriculture made sedentism possible. The evidence shows that this isn’t true: there’s an enormous gap—four thousand years—separating the “two key domestications,” of animals and cereals, from the first agrarian economies based on them. Our ancestors evidently took a good, hard look at the possibility of agriculture before deciding to adopt this new way of life. They were able to think it over for so long because the life they lived was remarkably abundant. Like the early civilization of China in the Yellow River Valley, Mesopotamia was a wetland territory, as its name (“between the rivers”) suggests. In the Neolithic period, Mesopotamia was a delta wetland, where the sea came many miles inland from its current shore.

This was a generous landscape for humans, offering fish and the animals that preyed on them, fertile soil left behind by regular flooding, migratory birds, and migratory prey travelling near river routes. The first settled communities were established here because the land offered such a diverse web of food sources. If one year a food source failed, another would still be present. The archeology shows, then, that the “Neolithic package” of domestication and agriculture did not lead to settled communities, the ancestors of our modern towns and cities and states. Those communities had been around for thousands of years, living in the bountiful conditions of the wetlands, before humanity committed to intensive agriculture. Reliance on a single, densely planted cereal crop was much riskier, and it’s no wonder people took a few millennia to make the change.

So why did our ancestors switch from this complex web of food supplies to the concentrated production of single crops? We don’t know, although Scott speculates that climatic stress may have been involved. Two things, however, are clear. The first is that, for thousands of years, the agricultural revolution was, for most of the people living through it, a disaster. The fossil record shows that life for agriculturalists was harder than it had been for hunter-gatherers. Their bones show evidence of dietary stress: they were shorter, they were sicker, their mortality rates were higher. Living in close proximity to domesticated animals led to diseases that crossed the species barrier, wreaking havoc in the densely settled communities. Scott calls them not towns but “late-Neolithic multispecies resettlement camps.” Who would choose to live in one of those? Jared Diamond called the Neolithic Revolution “the worst mistake in human history.” The startling thing about this claim is that, among historians of the era, it isn’t very controversial. [Continue reading…]


Sheep learned to recognize photos of Obama and other celebrities, neuroscientists say

Ben Guarino writes: Of the roughly 1.1 billion sheep on Earth, roughly 1.1 billion have no idea who Barack Obama is. But there are at least eight sheep who can recognize the former president by his face. After a few days of training at the University of Cambridge in England, the animals learned to select the former president’s portrait out of a collection of photos.

Recognizing Obama meant the sheep won a snack. The scientists, in turn, were rewarded with better ways to measure sheep brain function.

Sheep are about as capable of recognizing faces as monkeys or humans, University of Cambridge researchers report Tuesday in the journal Royal Society Open Science. The Cambridge flock, eight female Welsh Mountain sheep, successfully learned the faces of four celebrities in a recent experiment: Obama, British newscaster Fiona Bruce and actors Emma Watson and Jake Gyllenhaal.

“Sheep are capable of sophisticated decision making,” said study author Jenny Morton, a neurobiologist at the University of Cambridge. Seven years ago, she said, she bought these sheep out of the back of a truck on its way to a slaughterhouse. Morton, who studies Huntington’s disease, uses them as a stand-in for humans, in part because “sheep have large brains with humanlike anatomy.” [Continue reading…]

Like many other research findings, this will garner the response that animals turn out not to be as stupid as humans are inclined to believe.

This experiment, however, invites a rather obvious follow-up: a test to measure the human capacity to recognize sheep’s faces.

I predict that on this score the human capacity would prove inferior to that of the sheep, which is to say that our anthropocentric habits have rendered us among the least perceptive of creatures.


Food is about far more than bodily sustenance

By Tina Moffat and Charlene Mohammed

Fatima,* a refugee from Somalia who is a newcomer to Canada, has been having trouble in her local supermarket. Back home, she was accustomed to milk fresh from the cow. “In Canada I don’t even know if it’s real milk or fake milk,” she said. “I don’t know the difference. Is there milk that has pork-related ingredients in it?”

Life for new immigrants is hard in many ways. But one thing that is rarely recognized is the dramatic shift for newcomers in what they eat. People who are used to eating freshly killed chickens and seasonal vegetables—and drinking milk from their cows—are suddenly faced with an unfamiliar selection of produce, a range of processed foods, and a plethora of nonperishable goods from the food bank (if they need them) that are in some cases so odd that they are perceived as “poison.”

Food is at the heart of culture: It is at the center of gatherings ranging from weddings to funerals, and it’s a critical part of everyday life. Not only are ingredients and recipes important but so are people’s foodways and customs. In many countries, it is common to cook a large pot of food in anticipation of uninvited guests; those who have extra food share it, and they expect to have food shared in return. Such social arrangements can increase food security in the community.

Some immigrants and refugees who settle in Western urban centers find that they do not have enough resources to meet their food needs. As defined by the Food and Agriculture Organization (FAO), “food security exists when all people, at all times, have physical and economic access to sufficient, safe, and nutritious food to meet their dietary needs and food preferences for an active and healthy life.” Many officials, however, approach this problem as one of “hunger” with a limited understanding of food insecurity that focuses on providing sufficient food for survival—and nothing more. Our research shows that newcomers’ experiences with food insecurity—based on the stories they share—are about much more than satisfying their physical needs; food consumption has many social and cultural dimensions as well.

Continue reading
