Moises Velasquez-Manoff writes: For the microbiologist Justin Sonnenburg, that career-defining moment — the discovery that changed the trajectory of his research, inspiring him to study how diet and native microbes shape our risk for disease — came from a village in the African hinterlands.
A group of Italian microbiologists had compared the intestinal microbes of young villagers in Burkina Faso with those of children in Florence, Italy. The villagers, who subsisted on a diet of mostly millet and sorghum, harbored far more microbial diversity than the Florentines, who ate a variant of the refined, Western diet. Where the Florentine microbial community was adapted to protein, fats, and simple sugars, the Burkina Faso microbiome was oriented toward degrading the complex plant carbohydrates we call fiber.
Scientists suspect our intestinal community of microbes, the human microbiota, calibrates our immune and metabolic function, and that its corruption or depletion can increase the risk of chronic diseases, ranging from asthma to obesity. One might think that if we coevolved with our microbes, they’d be more or less the same in healthy humans everywhere. But that’s not what the scientists observed. [Continue reading…]
For 95% of the history of modern humans we were exclusively hunter-gatherers. Then, suddenly, about 12,000 years ago, something happened that revolutionised the way humans lived and enabled the complex societies we have today: farming.
But what triggered this revolution? Understanding this is incredibly challenging – because this occurred so far in the past, there are many factors to consider. However, by simulating the past using a complex computational model, we found that the switch from foraging to farming most likely began with very small groups of people that were using the concept of property rights.
Farming: an unlikely choice
It may seem obvious why we switched from foraging to farming: it made it possible to stay in one place, feed larger populations, have greater food security and build increasingly complex societies, political structures, economies and technologies. However, these advantages took time to develop and our early farmer ancestors would not have seen these coming.
Indeed, archaeological research suggests that when farming began it was not a particularly attractive lifestyle. It involved more work, a decrease in the quality of nutrition and health, an increase in disease and infection, and greater challenges in defending resources. For a hunter-gatherer at the cusp of the “agricultural revolution”, a switch to farming wasn’t the obvious choice.
Suzanne Sadedin writes: By making a few alterations to the composition of the justice system, corrupt societies could be made to transition to a state called ‘righteousness’. In righteous societies, police were not a separate, elite order. They were everybody. When virtually all of society stood ready to defend the common good, corruption didn’t pay.
Among honeybees and several ant species, this seems to be the status quo: all the workers police one another, making corruption an unappealing choice. In fact, the study showed that even if power inequalities later re-appeared, corruption would not return. The righteous community was extraordinarily stable.
Not all societies could make the transition. But those that did would reap the benefits of true, lasting harmony. An early tribe that made the transition to righteousness might out-compete more corrupt rivals, allowing righteousness to spread throughout the species. Such tribal selection is uncommon among animals other than eusocial insects, but many researchers think it could have played a role in human evolution. Hunter-gatherer societies commonly tend toward egalitarianism, with social norms enforced by the whole group rather than any specially empowered individuals. [Continue reading…]
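The core logic here, that corruption stops paying once nearly everyone polices, can be captured in a few lines. This is a toy illustration of my own, with hypothetical payoff numbers, not the model from the study Sadedin describes:

```python
# Toy illustration of the policing argument: corruption pays only when
# the chance of being caught is low. All payoff numbers are hypothetical.

def cheating_payoff(gain: float, penalty: float, policing_fraction: float) -> float:
    """Expected payoff of a corrupt act: the gain, minus the penalty
    weighted by the fraction of the group that polices (a proxy for
    the chance of being caught)."""
    return gain - penalty * policing_fraction

# A separate elite police force: only 5% of the group enforces norms.
print(cheating_payoff(gain=20.0, penalty=100.0, policing_fraction=0.05))  # positive: corruption pays
# A 'righteous' society: everybody polices.
print(cheating_payoff(gain=20.0, penalty=100.0, policing_fraction=1.0))   # negative: corruption doesn't pay
```

Under these assumed numbers, the expected payoff flips sign as the policing fraction rises, which is why the righteous state is stable even if power inequalities later reappear.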
One of the big questions in anthropology is why humans, unlike most animals, cooperate with those we are not closely related to. Exactly what has driven this behaviour is not well understood. Anthropologists suspect it could be down to the fact that women have usually left their homes after marriage to go and live with their husband’s family. This creates links between distant families, which may explain our tendency to cooperate beyond our own households.
Now our study on the Tibetan borderlands of China, published in Nature Communications, shows that it is indeed the case that cooperation is greater in populations where females disperse for marriage.
A natural experiment in social structure
There are a lot of different theories about the link between dispersal, kinship and cooperation, which is what we wanted to test. Anthropologists believe that dispersal leads to cooperation through links between families, and some evolutionary models predict that when nobody moves this leads to residents competing for the same resources and greater conflict between kin. But there are also models that suggest the opposite is true – that if nobody moves, neighbours are more likely to be related, leading to more cooperation in the neighbourhood.
LiveScience reports: Teeth from a cave in China suggest that modern humans lived in Asia much earlier than previously thought, and tens of thousands of years before they reached Europe, researchers say.
This discovery yields new information about the dispersal of modern humans from Africa to the rest of the world, and could shed light on how modern humans and Neanderthals interacted, the scientists added.
Modern humans first appeared in Africa about 200,000 years ago. When and how the modern human lineage dispersed from Africa has long been controversial.
Previous research suggested the exodus from Africa began between 70,000 and 40,000 years ago. However, recent research hinted that modern humans might have begun their march across the globe as early as 130,000 years ago. [Continue reading…]
Robert Twigger writes: I travelled with Bedouin in the Western Desert of Egypt. When we got a puncture, they used tape and an old inner tube to suck air from three tyres to inflate a fourth. It was the cook who suggested the idea; maybe he was used to making food designed for a few go further. Far from expressing shame at having no pump, they told me that carrying too many tools is the sign of a weak man; it makes him lazy. The real master has no tools at all, only a limitless capacity to improvise with what is to hand. The more fields of knowledge you cover, the greater your resources for improvisation.
We hear the descriptive words psychopath and sociopath all the time, but here’s a new one: monopath. It means a person with a narrow mind, a one-track brain, a bore, a super-specialist, an expert with no other interests — in other words, the role-model of choice in the Western world. You think I jest? In June, I was invited on the Today programme on BBC Radio 4 to say a few words on the river Nile, because I had a new book about it. The producer called me ‘Dr Twigger’ several times. I was flattered, but I also felt a sense of panic. I have never sought or held a PhD. After the third ‘Dr’, I gently put the producer right. And of course, it was fine — he didn’t especially want me to be a doctor. The culture did. My Nile book was necessarily the work of a generalist. But the radio needs credible guests. It needs an expert — otherwise why would anyone listen?
The monopathic model derives some of its credibility from its success in business. In the late 18th century, Adam Smith (himself an early polymath who wrote not only on economics but also philosophy, astronomy, literature and law) noted that the division of labour was the engine of capitalism. His famous example was the way in which pin-making could be broken down into its component parts, greatly increasing the overall efficiency of the production process. But Smith also observed that ‘mental mutilation’ followed the too-strict division of labour. Or as Alexis de Tocqueville wrote: ‘Nothing tends to materialise man, and to deprive his work of the faintest trace of mind, more than extreme division of labour.’ [Continue reading…]
Yuval Noah Harari writes: This is the basic lesson of evolutionary psychology: a need shaped thousands of generations ago continues to be felt subjectively even if it is no longer necessary for survival and reproduction in the present. Tragically, the agricultural revolution gave humans the power to ensure the survival and reproduction of domesticated animals while ignoring their subjective needs. In consequence, domesticated animals are collectively the most successful animals in the world, and at the same time they are individually the most miserable animals that have ever existed.
The situation has only worsened over the last few centuries, during which time traditional agriculture gave way to industrial farming. In traditional societies such as ancient Egypt, the Roman empire or medieval China, humans had a very partial understanding of biochemistry, genetics, zoology and epidemiology. Consequently, their manipulative powers were limited. In medieval villages, chickens ran free between the houses, pecked seeds and worms from the garbage heap, and built nests in the barn. If an ambitious peasant tried to lock 1,000 chickens inside a crowded coop, a deadly bird-flu epidemic would probably have resulted, wiping out all the chickens, as well as many villagers. No priest, shaman or witch doctor could have prevented it. But once modern science had deciphered the secrets of birds, viruses and antibiotics, humans could begin to subject animals to extreme living conditions. With the help of vaccinations, medications, hormones, pesticides, central air-conditioning systems and automatic feeders, it is now possible to cram tens of thousands of chickens into tiny coops, and produce meat and eggs with unprecedented efficiency.
The fate of animals in such industrial installations has become one of the most pressing ethical issues of our time, certainly in terms of the numbers involved. These days, most big animals live on industrial farms. We imagine that our planet is populated by lions, elephants, whales and penguins. That may be true of the National Geographic channel, Disney movies and children’s fairytales, but it is no longer true of the real world. The world contains 40,000 lions but, by way of contrast, there are around 1 billion domesticated pigs; 500,000 elephants and 1.5 billion domesticated cows; 50 million penguins and 20 billion chickens. [Continue reading…]
From the earliest of times, philosophers and scientists have tried to understand the relationship between animate and inanimate matter. But the origin of life remains one of the major scientific riddles to be solved.
The building blocks of life as we know it essentially consist of four groups of chemicals: proteins, nucleic acids, lipids (fats) and carbohydrates. There was much excitement about the possibility of finding amino acids (the ingredients for proteins) on comets or distant planets because some scientists believe that life on Earth, or at least its building blocks, may have originally come from outer space and been deposited by meteorites.
But there are now extensive examples of how natural processes on Earth can convert simple molecules into these building blocks. Scientists have demonstrated in the lab how to make amino acids, simple sugars, lipids and even nucleotides – the basic units of DNA – from very simple chemicals, under conditions that could have existed on the early Earth. What still eludes them is the point in the process when a chemical stew becomes an organism. How did the first lifeforms become alive?
Paleogenetics is helping to solve the great mystery of prehistory: How did humans spread out over the earth?
Jacob Mikanowski writes: Most of human history is prehistory. Of the 200,000 or more years that humans have spent on Earth, only a tiny fraction have been recorded in writing. Even in our own little sliver of geologic time, the 12,000 years of the Holocene, whose warm weather and relatively stable climate incubated the birth of agriculture, cities, states, and most of the other hallmarks of civilisation, writing has been more the exception than the rule.
Professional historians can’t help but pity their colleagues on the prehistoric side of the fence. Historians are accustomed to drawing on vast archives, but archaeologists must assemble and interpret stories from scant material remains. In the annals of prehistory, cultures are designated according to modes of burial such as ‘Single Grave’, or after styles of arrowhead, such as ‘Western Stemmed Point’. Whole peoples are reduced to styles of pottery, such as Pitted Ware, Corded Ware or Funnel Beaker, all of them spread across the map in confusing, amoeba-like blobs.
In recent years, archaeologists have become reluctant to infer too much from assemblages of ceramics, weapons and grave goods. For at least a generation, they have been drilled on the mantra that ‘pots are not people’. Material culture is not a proxy for identity. Artefacts recovered from a dig can provide a wealth of information about a people’s mode of subsistence, funeral rites and trade contacts, but they are not a reliable guide to their language or ethnicity – or their patterns of migration.
Before the Second World War, prehistory was seen as a series of invasions, with proto-Celts and Indo-Aryans swooping down on unsuspecting swaths of Europe and Asia like so many Vikings, while megalith builders wandered between continents in indecisive meanders. After the Second World War, this view was replaced by the processual school, which attributed cultural changes to internal adaptations. Ideas and technologies might travel, but people by and large stayed put. Today, however, migration is making a comeback.
Much of this shift has to do with the introduction of powerful new techniques for studying ancient DNA. The past five years have seen a revolution in the availability and scope of genetic testing that can be performed on prehistoric human and animal remains. Ancient DNA is tricky to work with. Usually it’s degraded, chemically altered and cut into millions of short fragments. But recent advances in sequencing technology have made it possible to sequence whole genomes from samples reaching back thousands, and tens of thousands, of years. Whole-genome sequencing yields orders of magnitude more data than organelle-based testing, and allows geneticists to make detailed comparisons between individuals and populations. Those comparisons are now illuminating new branches of the human family tree. [Continue reading…]
Candida Moss writes: On Thursday morning The New York Times ran a high-profile story about the discovery of a new human ancestor species — Homo naledi — in the Rising Star cave in South Africa. The discovery, announced by professor Lee Berger, was monumental because the evidence for Homo naledi was discovered in a burial chamber. Concern for burial is usually seen as a distinctive characteristic of humankind, so the possibility that this new non-human hominid species was “deliberately disposing of its dead” was especially exciting.
To anthropologists the article was not only newsworthy but also humorous, for the Times illustrated the piece with a photograph of Australopithecus africanus, a species already well known. This howler of a mistake (at least to self-identified science nerds) was also somewhat understandable, because the differences between the two skulls are sufficiently subtle that a lay viewer can easily mistake one for the other. In fact, some have pointed to that similarity and wondered (while acknowledging the importance of the discovery) whether it is indeed a “new species.” And that gets to the deeper issue: What and who were our ancestors?
It might seem as if the answer to this question is simply a question of biology, but in his new book Tales of the Ex-Apes: How We Think About Human Evolution, anthropologist Jonathan Marks argues that the story we tell about our origins, the study of our evolutionary tree, has cultural roots. Evolution isn’t just a question of biology, he argues, it’s also a question of mythology. Our scientific facts, he says, are the product of bioculture and biopolitics. [Continue reading…]
We have all been raised to believe that civilization is, in large part, sustained by law and order. Without complex social institutions and some form of governance, we would be at the mercy of the law of the jungle — so the argument goes.
But there is a basic flaw in this Hobbesian view of a collective human need to tame the savagery in our nature.
For human beings to be vulnerable to the selfish drives of those around them, they generally need to possess things that are worth stealing. For things to be worth stealing, they must have durable value. People who own nothing have little need to worry about thieves.
While Jared Diamond has argued that civilization arose in regions where agrarian societies could accumulate food surpluses, new research suggests that the value of cereal crops did not derive simply from the fact that they could be stored, but rather from the fact that, having been stored, they could subsequently be stolen or confiscated.
Joram Mayshar, Omer Moav, Zvika Neeman, and Luigi Pascali write: In a recent paper (Mayshar et al. 2015), we contend that fiscal capacity and viable state institutions are conditioned to a major extent by geography. Thus, like Diamond, we argue that geography matters a great deal. But in contrast to Diamond, and against conventional opinion, we contend that it is not high farming productivity and the availability of food surplus that accounts for the economic success of Eurasia.
- We propose an alternative mechanism by which environmental factors imply the appropriability of crops and thereby the emergence of complex social institutions.
To understand why surplus is neither necessary nor sufficient for the emergence of hierarchy, consider a hypothetical community of farmers who cultivate cassava (a major source of calories in sub-Saharan Africa, and the main crop cultivated in Nigeria), and assume that the annual output is well above subsistence. Cassava is a perennial root that is highly perishable upon harvest. Since this crop rots shortly after harvest, it isn’t stored and it is thus difficult to steal or confiscate. As a result, the assumed available surplus would not facilitate the emergence of a non-food producing elite, and may be expected to lead to a population increase.
Consider now another hypothetical farming community that grows a cereal grain – such as wheat, rice or maize – yet with an annual produce that just meets each family’s subsistence needs, without any surplus. Since the grain has to be harvested within a short period and then stored until the next harvest, a visiting robber or tax collector could readily confiscate part of the stored produce. Such ongoing confiscation may be expected to lead to a downward adjustment in population density, but it will nevertheless facilitate the emergence of a non-producing elite, even though there is no surplus.
This simple scenario shows that surplus isn’t a precondition for taxation. It also illustrates our alternative theory that the transition to agriculture enabled hierarchy to emerge only where the cultivated crops were vulnerable to appropriation.
- In particular, we contend that the Neolithic emergence of fiscal capacity and hierarchy was conditioned on the cultivation of appropriable cereals as the staple crops, in contrast to less appropriable staples such as roots and tubers.
According to this theory, complex hierarchy did not emerge among hunter-gatherers because hunter-gatherers essentially live from hand-to-mouth, with little that can be expropriated from them to feed a would-be elite. [Continue reading…]
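The cassava-versus-cereal contrast above reduces to one question: can the harvest be seized after storage? As a toy sketch of that logic (my illustration, with made-up numbers, not the authors' formal model):

```python
# Toy sketch of the appropriability argument (illustrative numbers,
# not the formal model in Mayshar et al. 2015).

def confiscatable_output(harvest: float, storable: bool, tax_rate: float) -> float:
    """Output an elite can seize: stored grain can be taxed at the
    granary, while a perishable root crop rots shortly after harvest,
    leaving nothing to confiscate."""
    return harvest * tax_rate if storable else 0.0

# Two hypothetical communities with identical harvests.
cassava_take = confiscatable_output(harvest=100.0, storable=False, tax_rate=0.25)
cereal_take = confiscatable_output(harvest=100.0, storable=True, tax_rate=0.25)

print(cassava_take)  # 0.0  -> nothing to support a non-producing elite
print(cereal_take)   # 25.0 -> stored grain can sustain a hierarchy
```

Note that the size of the harvest never enters the comparison on its own: with these assumed numbers, only storability separates the community that can be taxed from the one that cannot, which is the authors' point that surplus is neither necessary nor sufficient.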
Ed Yong writes: Lee Berger put his ad up on Facebook on October 7th, 2013. He needed diggers for an exciting expedition. They had to have experience in palaeontology or archaeology, and they had to be willing to drop everything and fly to South Africa within the month. “The catch is this—the person must be skinny and preferably small,” he wrote. “They must not be claustrophobic, they must be fit, they should have some caving experience, climbing experience would be a bonus.”
“I thought maybe there were three or four people in the world who would fit that criteria,” Berger recalls. “Within a few days, I had 60 applicants, all qualified. I picked six.” They were all women and all skinny — fortunately so, given what happened next. Berger, a palaeoanthropologist at the University of the Witwatersrand, sent them into the Rising Star Cave, and asked them to squeeze themselves through a long vertical chute, which narrowed to a gap just 18 centimeters wide.
That gap was all that separated them from the bones of a new species of ancient human, or hominin, which the team named Homo naledi after a local word for “star.” We don’t know when it lived, or how it was related to us. But we do know that it was a creature with a baffling mosaic of features, some of which were remarkably similar to modern humans, and others of which were more ape-like in character.
This we know because the six women who entered the cave excavated one of the richest collections of hominin fossils ever discovered — some 1,550 fossil fragments, belonging to at least 15 individual skeletons. To find one complete skeleton of a new hominin would be hitting the paleoanthropological jackpot. To find 15, and perhaps more, is like nuking the jackpot from orbit. [Continue reading…]
See also the research article announcing the discovery: Homo naledi, a new species of the genus Homo from the Dinaledi Chamber, South Africa.
Sandra Newman writes: One of our most firmly entrenched ideas of masculinity is that men don’t cry. Although he might shed a discreet tear at a funeral, and it’s acceptable for him to well up when he slams his fingers in a car door, a real man is expected to quickly regain control. Sobbing openly is strictly for girls.
This isn’t just a social expectation; it’s a scientific fact. All the research to date finds that women cry significantly more than men. A meta-study by the German Society of Ophthalmology in 2009 found that women weep, on average, five times as often, and almost twice as long per episode. The discrepancy is such a commonplace that we tend to assume it’s biologically hard-wired; that, whether you like it or not, this is one gender difference that isn’t going away.
But actually, the gender gap in crying seems to be a recent development. Historical and literary evidence suggests that, in the past, not only did men cry in public, but no one saw it as feminine or shameful. In fact, male weeping was regarded as normal in almost every part of the world for most of recorded history. [Continue reading…]
Roc Morin writes: One of the first words that Koko used to describe herself was Queen. The gorilla was only a few years old when she first made the gesture — sweeping a paw diagonally across her chest as if tracing a royal sash.
“It was a sign we almost never used!” Koko’s head-caretaker Francine Patterson laughed. “Koko understands that she’s special because of all the attention she’s had from professors, and caregivers, and the media.”
The cause of the primate’s celebrity is her extraordinary aptitude for language. Over the past 43 years, since Patterson began teaching Koko at the age of 1, the gorilla has learned more than 1,000 words of modified American Sign Language—a vocabulary comparable to that of a 3-year-old human child. While there have been many attempts to teach human languages to animals, none have been more successful than Patterson’s achievement with Koko.
If Koko is a queen, then her kingdom is a sprawling research facility in the mountains outside Santa Cruz, California. It was there, under a canopy of stately redwoods, that I met research-assistant Lisa Holliday.
“You came on a good day,” Holliday smiled. “Koko’s in a good mood. She was playing the spoon game all morning! That’s when she takes the spoon and runs off with it so you can’t give her another bite. She’s an active girl. She’s always got her dolls, and in the afternoon, her kittens — or as we call them, her kids.”
It was a winding stroll up a sun-spangled trail toward the cabin where Patterson was busy preparing a lunch of diced apples and nuts for Koko. The gorilla’s two kitten playmates romped in a crate by her feet. We would go deliver the meal together shortly, but first I had some questions for the 68-year-old researcher. I wanted to understand more about her famous charge and the rest of our closest living relatives. [Continue reading…]
Claire Cameron writes: English speakers and others are highly egocentric when it comes to orienting themselves in the world. Objects and people exist to the left, right, in front, and to the back of you. You move forward and backward in relation to the direction you are facing. For an aboriginal tribe in north Queensland, Australia, called the Guugu Yimithirr, such a “me me me” approach to spatial information makes no sense. Instead, they use cardinal directions to express spatial information (pdf). So rather than “Can you move to my left?” they would say “Can you move to the west?”
Linguist Guy Deutscher says that Guugu Yimithirr speakers have a kind of “internal compass” that is imprinted from an extremely young age. In the same way that English-speaking infants learn to use different tenses when they speak, so do Guugu Yimithirr children learn to orient themselves along compass lines, not relative to themselves. In fact, says Deutscher, if a Guugu Yimithirr speaker wants to direct your attention to the direction behind him, he “points through himself, as if he were thin air and his own existence were irrelevant.” Whether that translates into less egocentric worldviews is a matter for further study and debate.
Other studies have shown that speakers of languages that use cardinal directions to express locations have fantastic spatial memory and navigation skills — perhaps because their experience of an event is so well-defined by the directions it took place in. [Continue reading…]
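The difference between the two systems is a simple change of reference frame: an egocentric instruction like “my left” only resolves to a place in the world once you know which way the speaker is facing, which is exactly the step the Queensland speakers described above skip. A minimal sketch of that translation (my illustration, not from the article):

```python
# Illustrative sketch: resolving an egocentric instruction ("left")
# into a cardinal one ("west"), given the speaker's facing direction.

CARDINALS = ["north", "east", "south", "west"]
# Body-relative directions expressed as quarter-turns clockwise from "ahead".
TURNS = {"ahead": 0, "right": 1, "behind": 2, "left": 3}

def egocentric_to_cardinal(facing: str, relative: str) -> str:
    """Convert a body-relative direction to a compass direction."""
    i = CARDINALS.index(facing)
    return CARDINALS[(i + TURNS[relative]) % 4]

# "Can you move to my left?" means different places for different speakers:
print(egocentric_to_cardinal("north", "left"))  # west
print(egocentric_to_cardinal("south", "left"))  # east
```

A cardinal-direction instruction needs no such conversion, since it is already anchored to the world rather than to the speaker's body, which may be why speakers of such languages maintain the constantly updated “internal compass” Deutscher describes.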