Nessa Carey writes: When President Obama delivered a speech at MIT in 2009, he used a common science metaphor: “We have always been about innovation,” he said. “We have always been about discovery. That’s in our DNA.” Deoxyribonucleic acid, the chemical in which our genes are encoded, has become the metaphor of choice for a whole constellation of ideas about essence and identity. A certain mystique surrounds it. As Evelyn Fox Keller argues in her book The Century of the Gene, the genome is, in the popular imagination at least, the secret of life, the holy grail. It is a master builder, the ultimate computer program, and a modern-day echo of the soul, all wrapped up in one. This fantasy does not sit easily, however, with geneticists who have grown more aware over the last several decades that the relationship between genes and biological traits is far from certain.
The popular understanding of DNA as a blueprint for organisms, with a one-to-one correspondence between genes and traits (called phenotypes), is the legacy of the early history of genetics. The term “gene” was coined in 1909 to refer to abstract units of inheritance, predating the discovery of DNA by forty years. Biologists came to think of genes like beads on a string that lined up neatly into chromosomes, with each gene determining a single phenotype. But, while some genes do correspond to traits in a straightforward way, as in eye color or blood group, most phenotypes are far more complex, set in motion by many different genes as well as by the environment in which the organism lives.
It turns out that the genetic code is less like a blueprint and more like a movie script, subject to revision and reinterpretation by a director. This process is called epigenetic modification (“epi” meaning “above” or “in addition to”). Just as a script can be altered with crossed-out words, sentences or scenes, epigenetic editing allows entire sections of DNA to be activated or de-activated. Genes can be as finely tuned as actors responding to stage directions to shout, whisper, or cackle. [Continue reading…]
Henry Nicholls writes: When the HMS Beagle dropped anchor on San Cristobal, the easternmost island in the Galapagos archipelago, in September 1835, the ship’s naturalist Charles Darwin eagerly went ashore to gather samples of the insects, birds, reptiles, and plants living there. At first, he didn’t think much of the arid landscape, which appeared to be “covered by stunted, sun-burnt brushwood…as leafless as our trees during winter.” But this did not put him off. By the time the Beagle left these islands some five weeks later, he had amassed a spectacular collection of Galapagos plants.
It is fortunate that he took such trouble. Most popular narratives of Darwin and the Galapagos concentrate on the far more celebrated finches or the giant tortoises. Yet when he finally published On the Origin of Species almost 25 years later, Darwin made no mention of these creatures. In his discussion of the Galapagos, he dwelt almost exclusively on the islands’ plants.
By the early 19th century, there was increasing interest in what we now refer to as biogeography, the study of the distribution of species around the globe. Many people still imagined that God had been involved in the creation of species, putting fully formed versions down on Earth that continued to reproduce themselves, dispersing from a divine “center of creation” to occupy their current habitats. To explain how the plants and animals reached far-flung places such as the isolated Galapagos, several naturalists imagined that there had to have been land bridges, long-since subsided, that had once connected them to a continent. But in the wake of the Beagle voyage, the collection of Galapagos plants suggested an alternate scenario.
Even if there had once been a land bridge to the islands, it could not account for the fact that half of the plant species Darwin collected were unique to the Galapagos, and that most of them were particular to just one island. “I never dreamed that islands, about fifty or sixty miles apart, and most of them in sight of each other, formed of precisely the same rocks, placed under a quite similar climate, rising to a nearly equal height, would have been differently tenanted,” wrote Darwin in his Journal of Researches. His observations could be best explained if species were not fixed in nature but somehow changed as the seeds traveled to different locations. [Continue reading…]
Mark Oppenheimer writes: On Wednesday, in an interview in London, Gov. Scott Walker of Wisconsin, a potential Republican presidential candidate, sidestepped the question of whether he believed in evolution.
“I’m going to punt on that one,” he said to an audience at a research organization in London, which he was visiting for a trade mission. “I’m here to talk about trade, not to pontificate on other issues. I love the evolution of trade in Wisconsin.”
Mr. Walker’s response was not all that surprising — evolution is a sensitive issue for the evangelical Christian base of the Republican Party, and presidential candidates have had to tread carefully around it.
The theory of evolution may be supported by a consensus of scientists, but none of the likely Republican candidates for 2016 seem to be convinced. Former Gov. Jeb Bush of Florida said it should not be taught in schools. Former Gov. Mike Huckabee of Arkansas is an outright skeptic. Senator Ted Cruz of Texas will not talk about it. When asked, in 2001, what he thought of the theory, Gov. Chris Christie of New Jersey said, “None of your business.”
After Mr. Walker’s response, the interviewer in London, an incredulous Justin Webb of the BBC, said to the governor: “Any British politician, right or left wing, would laugh and say, ‘Yes, of course evolution is true.’ ” [Continue reading…]
Adam Gopnik writes: Darwin’s Delay is by now nearly as famous as Hamlet’s, and involves a similar cast of characters: a family ghost, an unhappy lover, and a lot of men digging up old bones. Although it ends with vindication and fame, rather than with slaughter and self-knowledge, it was resolved by language, too — by inner soliloquy forcing itself out into the world, except that in this case the inner voice had the certainties and the outer one the hesitations.
The delay set in between Darwin’s first intimations of his Great Idea, the idea of evolution by natural selection, in the eighteen-thirties (he was already toying with it during his famous voyage on the H.M.S. Beagle), and the publication of “On the Origin of Species,” in 1859. By legend, the two events were in the long run one: Darwin saw the adapted beaks of his many finches, brooded on what they meant, came up with a theory, sought evidence for it, and was prodded into print at last by an unwelcome letter from an obscure naturalist named Alfred Russel Wallace, who had managed to arrive at the same idea.
It seems to have been more complicated than that. One reason Darwin spent so long getting ready to write his masterpiece without getting it written was that he knew what it would mean for faith and life, and, as Janet Browne’s now standard biography makes plain, he was frightened about being attacked by the powerful and the bigoted. Darwin was not a brave man — had the Inquisition been in place in Britain, he never would have published — but he wasn’t a humble man or a cautious thinker, either. He sensed that his account would end any intellectually credible idea of divine creation, and he wanted to break belief without harming the believer, particularly his wife, Emma, whom he loved devotedly and with whom he had shared, before he sat down to write, a private tragedy that seemed tolerable to her only through faith. The problem he faced was also a rhetorical one: how to say something that had never been said before in a way that made it sound like something everybody had always known — how to make an idea potentially scary and subversive sound as sane and straightforward as he believed it to be.
He did it, and doing it was, in some part, a triumph of style. Darwin is the one indisputably great scientist whose scientific work is still read by amateurs. [Continue reading…]
Ivan Semeniuk reports: Francesco Berna still remembers his first visit to Manot Cave, accidentally discovered in 2008 on a ridge in northern Israel. A narrow passage steeply descends into darkness. It then opens onto a 60-metre-long cavern with side chambers, all dramatically ornamented with stalactites and stalagmites. “It’s a spectacular cave,” said Dr. Berna, a geoarcheologist at Simon Fraser University in Burnaby, B.C. “It’s basically untouched.”
Now Manot Cave has yielded a tantalizing sign of humanity’s initial emergence out of Africa and a possible forerunner of the first modern humans in Europe, an international team of researchers that includes Dr. Berna said on Wednesday.
The find also establishes the Levant region (including Israel, Lebanon and part of Syria) as a plausible setting where our species interbred with its Neanderthal cousins.
The team’s key piece of evidence is a partial human skull found during the initial reconnaissance of the cave.
Based on its features and dimensions, the skull is unquestionably that of an anatomically modern human, the first such find in the region. The individual would probably have looked like the first Homo sapiens that appeared in Africa about 200,000 years ago and been physically indistinguishable from humans today.
“He or she would look very modern. With a tie on, you would not be able to tell the difference,” said Israel Hershkovitz, a biological anthropologist at Tel Aviv University and lead author of a paper published this week in the journal Nature that documents the Manot Cave find.
The age of the fossil is the crucial detail. The team’s analysis shows it is about 55,000 years old. That is more recent than the fragmentary remains of some not-so-modern-looking humans that drifted into the region at an earlier stage. But it coincides exactly with a period when a wetter climate may have opened the door to the first modern human migration out of Africa.
Fossils of modern humans that are only slightly less old than the Manot Cave skull have been found in the Czech Republic and Romania, making the new find a potential forerunner of the first Europeans. [Continue reading…]
Much of the reporting on these findings makes reference to “the first Europeans,” and even though anthropologists might be clear about what they mean when they use the term Europe, they might consider avoiding it, given the common meaning that is usually attached to the word.
Indeed, the lead researcher cited above, Israel Hershkovitz, illustrates the problem, reinforcing cultural stereotypes by implying that a human is fully evolved once he dons the symbol of European, masculine power: a necktie. The irony is compounded by the fact that he and his team were trumpeting the significance of their discovery of a woman’s skull.
(No doubt many Europeans and others with European affectations have been disturbed this week to see Greece’s new prime minister, in the birthplace of democracy, assuming power without a necktie.)
The Oxford archaeologist Barry Cunliffe has referred to the region of land that recently got dubbed “Europe” as “the westerly excrescence of the continent of Asia.”
Europeans might object to the suggestion that they inhabit an excrescence — especially since the term suggests an abnormality — but in terms of continental topography, it points to Europe’s unique feature: its eastern boundaries have always been elastic and somewhat arbitrary.
More importantly, when it comes to human evolution, to frame this in terms of the advance into Europe revives so many echoes of nineteenth-century racism.
It cannot be overstated that the first Europeans were not European.
Europe is an idea that has only been around for a few hundred years during which time it has been under constant revision.
Migration is also a misleading term since it evokes images of migrants: people who travel vast distances to inhabit new lands.
Human dispersal most likely involved rather short hops, one generation at a time, interspersed with occasional actual migrations driven by events like floods or famine.
Philip Ball writes: Is the natural world creative? Just take a look around it. Look at the brilliant plumage of tropical birds, the diverse pattern and shape of leaves, the cunning stratagems of microbes, the dazzling profusion of climbing, crawling, flying, swimming things. Look at the “grandeur” of life, the “endless forms most beautiful and most wonderful,” as Darwin put it. Isn’t that enough to persuade you?
Ah, but isn’t all this wonder simply the product of the blind fumbling of Darwinian evolution, that mindless machine which takes random variation and sieves it by natural selection? Well, not quite. You don’t have to be a benighted creationist, nor even a believer in divine providence, to argue that Darwin’s astonishing theory doesn’t fully explain why nature is so marvelously, endlessly inventive. “Darwin’s theory surely is the most important intellectual achievement of his time, perhaps of all time,” says evolutionary biologist Andreas Wagner of the University of Zurich. “But the biggest mystery about evolution eluded his theory. And he couldn’t even get close to solving it.”
What Wagner is talking about is how evolution innovates: as he puts it, “how the living world creates.” Natural selection supplies an incredibly powerful way of pruning variation into effective solutions to the challenges of the environment. But it can’t explain where all that variation came from. As the biologist Hugo de Vries wrote in 1905, “natural selection may explain the survival of the fittest, but it cannot explain the arrival of the fittest.” Over the past several years, Wagner and a handful of others have been starting to understand the origins of evolutionary innovation. Thanks to their findings so far, we can now see not only how Darwinian evolution works but why it works: what makes it possible. [Continue reading…]
Scientific American reports: Songbirds stutter, babble when young, become mute if parts of their brains are damaged, learn how to sing from their elders and can even be “bilingual” — in other words, songbirds’ vocalizations share a lot of traits with human speech. However, that similarity goes beyond behavior, researchers have found. Even though humans and birds are separated by millions of years of evolution, the genes that give us our ability to learn speech have much in common with those that lend birds their warble.
A four-year-long effort involving more than 100 researchers around the world put the power of nine supercomputers into analyzing the genomes of 48 species of birds. The results, published this week in a package of eight articles in Science and 20 papers in other journals, provide the most complete picture of the bird family tree thus far. The project has also uncovered genetic signatures in song-learning bird brains that have surprising similarities to the genetics of speech in humans, a finding that could help scientists study human speech.
The analysis suggests that most modern birds arose in an impressive speciation event, a “big bang” of avian diversification, in the 10 million years immediately following the extinction of dinosaurs. This period is more recent than posited in previous genetic analyses, but it lines up with the fossil record. By delving deeper into the rich data set, research groups identified when birds lost their teeth, investigated the relatively slow evolution of crocodiles and outlined the similarities between birds’ and humans’ vocal learning ability, among other findings. [Continue reading…]
Patrick Bateson writes: I am disturbed by the way we have created a social environment in which so much emphasis is laid on competition – on forging ahead while trampling on others. The ideal of social cooperation has come to be treated as high-sounding flabbiness, while individual selfishness is regarded as the natural and sole basis for a realistic approach to life. The image of the struggle for existence lies at the back of it, seriously distorting the view we have of ourselves and wrecking mutual trust.
The fashionable philosophy of individualism draws its respectability in part from an appeal to biology and specifically to the Darwinian theory of evolution by natural selection. Now, Darwin’s theory remains the most powerful explanation for the way that each plant and animal evolved so that it is exquisitely adapted to its environment. The theory works just as well for behaviour as it does for anatomy. Individual animals differ in the way they behave. Those that behave in a manner that is better suited to the conditions in which they live are more likely to survive. Finally, if their descendants resemble them in terms of behaviour, then in the course of evolution, the better adapted forms of behaviour will replace those that are not so effective in keeping the individual alive.
It is the Darwinian concept of differential survival that has been picked up and used so insistently in political rhetoric. Biology is thought to be all about competition – and that supposedly means constant struggle. This emphasis has had an insidious effect on the public mind and has encouraged the belief in individual selfishness and in confrontation. Competition is now widely seen as the mainspring of human activity, at least in Western countries. Excellence in the universities and in the arts is thought to be driven by the same ruthless process that supposedly works so well on the sports field or in the marketplace, and they all have a lot in common with what supposedly happens in the jungle. The image of selfish genes competing with each other in the course of evolution has fused imperceptibly with the notion of selfish individuals competing with each other in the course of their lifetimes. Individuals only thrive by winning. The argument has become so much a part of conventional belief that it is hard at first to see what is wrong with it.
To put it bluntly, thought has been led seriously astray by the rhetoric. [Continue reading…]
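Bateson’s capsule summary of natural selection in the excerpt above (individual variation, differential survival, and heritable resemblance) has the shape of a simple loop, and can be sketched as a toy simulation. This is purely illustrative: the function name, parameters, and numbers are invented for the sketch and model no real population.

```python
import random

def evolve(generations=200, pop_size=100, optimum=5.0, mutation_sd=0.1, seed=0):
    """Toy illustration of variation + differential survival + heredity."""
    rng = random.Random(seed)
    # Individual variation: each individual carries a heritable trait value.
    population = [rng.uniform(0.0, 1.0) for _ in range(pop_size)]
    for _ in range(generations):
        # Differential survival: individuals better suited to the
        # "conditions" (closer to the optimum) are the ones that live on.
        population.sort(key=lambda trait: abs(trait - optimum))
        survivors = population[: pop_size // 2]
        # Heredity: offspring resemble their parents, with small random variation.
        population = [
            parent + rng.gauss(0.0, mutation_sd)
            for parent in survivors
            for _ in range(2)
        ]
    return sum(population) / len(population)

mean_trait = evolve()
# The population mean climbs toward the optimum over the generations.
```

Running the sketch shows the mean trait value settling near the optimum: a bare-bones picture of adaptation, with no claim about how selection plays out in real organisms, let alone in human societies.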
The discovery of what appear to have been deliberately etched markings made by a human ancestor, Homo erectus, on the surface of a shell, calls for a reconsideration of assumptions that have been made about the origins of abstract thought.
While the meaning of these zigzag markings will most likely remain forever unknown, it can reasonably be inferred that for the individual who created them, the marks had some significance.
In a report in Nature, Josephine Joordens, a biologist at Leiden University whose team discovered the markings, says:
“We’ve looked at all possibilities, but in the end we are really certain that this must have been made by an agent who did a very deliberate action with a very sharp implement,” says Joordens. Her team tried replicating the pattern on fresh and fossilized shells, “and that made us realize how difficult it really was”, she says.
Saying much more about the engraving is tricky. “If you don’t know the intention of the person who made it, it’s impossible to call it art,” says Joordens.
“But on the other hand, it is an ancient drawing. It is a way of expressing yourself. What was meant by the person who did this, we simply don’t know,” she adds. “It could have been to impress his girlfriend, or to doodle a bit, or to mark the shell as his own property.”
Clive Finlayson, a zoologist at the Gibraltar Museum who was part of the team that described cross-hatch patterns linked to Neanderthals, is also agnostic about whether to call the H. erectus doodles art. What is more important, he says, is the growing realization that abilities such as abstract thinking, once ascribed only to H. sapiens, were present in other archaic humans, including, now, their ancestors.
“I’ve been suggesting increasingly strongly that a lot of these things that are meant to be modern human we’re finding in other hominids,” he says. “We really need to revisit these concepts and take stock.”
Palaeoanthropology, by necessity, is a highly speculative discipline — therein lies both its strength and its weakness.
The conservatism of hard science recoils at the idea that some scratches on a single shell amount to sufficient evidence to prompt a reconsideration about the origins of the human mind, and yet to refrain from such speculation seems like an effort to restrain the powers of the very thing we are trying to understand.
Rationally, there is as much reason to assume that abstract thinking long predates modern humans as there is to assume that at some juncture it was born; on the former assumption, searching for evidence of it and finding none would simply leave us agnostic about its presence or absence.
My inclination is to believe that any living creature that has some capacity to construct a neurological representation of its surroundings is by that very capacity employing something akin to abstract thinking.
This ability for the inner to mirror the outer has no doubt evolved, becoming progressively more complex and more deeply abstract, and yet mind, if defined as world-mirroring, seems to have been born when life first moved.
Live Science: Human ancestors may have begun evolving the knack for consuming alcohol about 10 million years ago, long before modern humans began brewing booze, researchers say.
The ability to break down alcohol likely helped human ancestors make the most out of rotting, fermented fruit that fell onto the forest floor, the researchers said. Therefore, knowing when this ability developed could help researchers figure out when these human ancestors began moving to life on the ground, as opposed to mostly in trees, as earlier human ancestors had lived.
“A lot of aspects about the modern human condition — everything from back pain to ingesting too much salt, sugar and fat — goes back to our evolutionary history,” said lead study author Matthew Carrigan, a paleogeneticist at Santa Fe College in Gainesville, Florida. “We wanted to understand more about the modern human condition with regards to ethanol,” he said, referring to the kind of alcohol found in rotting fruit and that’s also used in liquor and fuel. [Continue reading…]
Ed Yong writes: In the late 17th century, the Dutch naturalist Anton van Leeuwenhoek looked at his own dental plaque through a microscope and saw a world of tiny cells “very prettily a-moving.” He could not have predicted that a few centuries later, the trillions of microbes that share our lives — collectively known as the microbiome — would rank among the hottest areas of biology.
These microscopic partners help us by digesting our food, training our immune systems and crowding out other harmful microbes that could cause disease. In return, everything from the food we eat to the medicines we take can shape our microbial communities — with important implications for our health. Studies have found that changes in our microbiome accompany medical problems from obesity to diabetes to colon cancer.
As these correlations have unfurled, so has the hope that we might fix these ailments by shunting our bugs toward healthier states. The gigantic probiotics industry certainly wants you to think that, although there is little evidence that swallowing a few billion yogurt-borne bacteria has more than a small impact on the trillions in our guts. The booming genre of microbiome diet books — self-help manuals for the bacterial self — peddles a similar line, even though our knowledge of microbe-manipulating menus is still in its infancy.
This quest for a healthy microbiome has led some people to take measures that are far more extreme than simply spooning up yogurt. [Continue reading…]
Kat McGowan writes: Tales about the origins of our species always start off like this: A small band of hunter-gatherers roams the savannah, loving, warring, and struggling for survival under the African sun. They do not start like this: A fat guy falls off a New York City subway platform onto the tracks.
But what happens next is a quintessential story of who we are as human beings.
On Feb. 17, 2013, around 2:30 a.m., Garrett O’Hanlon, a U.S. Air Force Academy cadet third class, was out celebrating his 22nd birthday in New York City. He and his sister were in the subway waiting for a train when a sudden silence came over the platform, followed by a shriek. People pointed down to the tracks.
O’Hanlon turned and saw a man sprawled facedown on the tracks. “The next thing that happened, I was on the tracks, running toward him,” he says. “I honestly didn’t have a thought process.”
O’Hanlon grabbed the unconscious man by the shoulders, lifting his upper body off the tracks, but struggled to move him. He was deadweight. According to the station clock, the train would arrive in less than two minutes. From the platform, O’Hanlon’s sister was screaming at him to save himself.
Suddenly other arms were there: Personal trainer Dennis Codrington Jr. and his friend Matt Foley had also jumped down to help. “We grabbed him, one by the legs, one by the shoulders, one by the chest,” O’Hanlon says. They got the man to the edge of the platform, where a dozen or more people muscled him up and over. More hands seized the rescuers’ arms and shoulders, helping them up to safety as well.
In the aftermath of the rescue, O’Hanlon says he has been surprised that so many people have asked him why he did it. “I get stunned by the question,” he says. In his view, anybody else would’ve done the same thing. “I feel like it’s a normal reaction,” he says. “To me that’s just what people do.”
More precisely, it is something only people do, according to developmental psychologist Michael Tomasello, codirector of the Max Planck Institute for Evolutionary Anthropology.
For decades Tomasello has explored what makes humans distinctive. His conclusion? We cooperate. Many species, from ants to orcas to our primate cousins, cooperate in the wild. But Tomasello has identified a special form of cooperation. In his view, humans alone are capable of shared intentionality—they intuitively grasp what another person is thinking and act toward a common goal, as the subway rescuers did. This supremely human cognitive ability, Tomasello says, launched our species on its extraordinary trajectory. It forged language, tools, and cultures—stepping-stones to our colonization of every corner of the planet. [Continue reading…]
The Guardian reports: Paintings of wild animals and hand markings left by adults and children on cave walls in Indonesia are at least 35,000 years old, making them some of the oldest artworks known.
The rock art was originally discovered in caves on the island of Sulawesi in the 1950s, but dismissed as younger than 10,000 years old because scientists thought older paintings could not possibly survive in a tropical climate.
But fresh analysis of the pictures by an Australian-Indonesian team has stunned researchers by dating one hand marking to at least 39,900 years old, and two paintings of animals, one a pig-deer or babirusa and the other probably a wild pig, to at least 35,400 and 35,700 years ago respectively.
The work reveals that rather than Europe being at the heart of an explosion of creative brilliance when modern humans arrived from Africa, the early settlers of Asia were creating their own artworks at the same time or even earlier.
Archaeologists have not ruled out that the different groups of colonising humans developed their artistic skills independently of one another, but an enticing alternative is that the modern human ancestors of both were artists before they left the African continent.
“Our discovery on Sulawesi shows that cave art was made at opposite ends of the Pleistocene Eurasian world at about the same time, suggesting these practices have deeper origins, perhaps in Africa before our species left this continent and spread across the globe,” said Dr Maxime Aubert, an archaeologist at the University of Wollongong. [Continue reading…]
Quanta Magazine: In his fourth-floor lab at Harvard University, Michael Desai has created hundreds of identical worlds in order to watch evolution at work. Each of his meticulously controlled environments is home to a separate strain of baker’s yeast. Every 12 hours, Desai’s robot assistants pluck out the fastest-growing yeast in each world — selecting the fittest to live on — and discard the rest. Desai then monitors the strains as they evolve over the course of 500 generations. His experiment, which other scientists say is unprecedented in scale, seeks to gain insight into a question that has long bedeviled biologists: If we could start the world over again, would life evolve the same way?
Many biologists argue that it would not, that chance mutations early in the evolutionary journey of a species will profoundly influence its fate. “If you replay the tape of life, you might have one initial mutation that takes you in a totally different direction,” Desai said, paraphrasing an idea first put forth by the biologist Stephen Jay Gould in the 1980s.
Desai’s yeast cells call this belief into question. According to results published in Science in June, all of Desai’s yeast varieties arrived at roughly the same evolutionary endpoint (as measured by their ability to grow under specific lab conditions) regardless of which precise genetic path each strain took. It’s as if 100 New York City taxis agreed to take separate highways in a race to the Pacific Ocean, and 50 hours later they all converged at the Santa Monica pier.
The findings also suggest a disconnect between evolution at the genetic level and at the level of the whole organism. [Continue reading…]
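The replicate-worlds design described above has the flavor of a simple computational experiment: many identical starting populations, independent random mutations in each, and repeated selection of the fastest growers. A toy sketch of that idea follows; the function name, fitness model, and all numbers are invented for illustration and are not drawn from Desai’s actual protocol.

```python
import random

def replay_evolution(n_replicates=20, generations=300, pop_size=50, seed=1):
    """Toy sketch of replicate evolution experiments: identical starting
    populations, each subject to its own random mutations plus selection."""
    final_fitness = []
    for replicate in range(n_replicates):
        rng = random.Random(seed + replicate)  # each "world" gets its own chance events
        population = [0.0] * pop_size          # identical starting fitness
        for _ in range(generations):
            # Random variation: mutations nudge each individual's fitness.
            population = [f + rng.gauss(0.0, 0.05) for f in population]
            # Selection: only the fitter half survives and reproduces.
            population.sort(reverse=True)
            population = population[: pop_size // 2] * 2
        final_fitness.append(sum(population) / len(population))
    return final_fitness

fitness = replay_evolution()
spread = max(fitness) - min(fitness)
# Each replicate follows a different random path, yet the final mean
# fitness values cluster tightly around a similar endpoint.
```

In this toy, the chance events differ in every replicate, yet selection drags all of them to a similar final fitness: the qualitative pattern the yeast experiment reports, without any claim about the genetics underneath.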
Noah Berlatsky writes: Chance is an uncomfortable thing. So Curtis Johnson argues in Darwin’s Dice: The Idea of Chance in the Thought of Charles Darwin, and he makes a compelling case. The central controversy, and the central innovation, in Darwin’s work is not the theory of natural selection itself, according to Johnson, but Darwin’s more basic, and more innovative, turn to randomness as a way to explain natural phenomena. This application of randomness was so controversial, Johnson argues, that Darwin tried to cover it up, replacing words like “accident” and “chance” with terms like “spontaneous variation” in later editions of his work. Nonetheless, the terminological shift was cosmetic: Randomness remained, and still remains, the disturbing center of Darwin’s theories.
Johnson, a political theorist at Lewis & Clark College, explains that there are two basic kinds of chance in Darwin’s thought. The first—most familiar and least disconcerting—is chance as probability. According to the theory of natural selection, individuals with advantageous adaptations are most likely to survive. A giraffe with a longer neck has a better shot of reaching those lofty leaves and living to munch another day; a polar bear blessed with a warmer coat has a higher probability of surviving a frigid winter than one with less hair. The long-necked giraffe may not always win—it may, for example, be pulverized by a meteor before it can pass on its long-necked genes. But over time, the odds will go its way. There is randomness here, but it is controlled and predictable: It works in accordance with a rule. Natural selection makes sense.
The second kind of chance in Darwin’s work, though, is more mysterious. For natural selection to work, you need to have a range of traits to select among. That range is provided by individual variation, the fact that two different animals (whether giraffe or bear) are different from each other. Some giraffes have longer necks than others. Some bears have thicker fur than others. Why should this be? Darwin’s answer was chance. [Continue reading…]
Michael Graziano writes: About four thousand years ago, somewhere in the Middle East — we don’t know where or when, exactly — a scribe drew a picture of an ox head. The picture was rather simple: just a face with two horns on top. It was used as part of an abjad, a set of characters that represent the consonants in a language. Over thousands of years, that ox-head icon gradually changed as it found its way into many different abjads and alphabets. It became more angular, then rotated to its side. Finally it turned upside down entirely, so that it was resting on its horns. Today it no longer represents an ox head or even a consonant. We know it as the capital letter A.
The moral of this story is that symbols evolve.
Long before written symbols, even before spoken language, our ancestors communicated by gesture. Even now, a lot of what we communicate to each other is non-verbal, partly hidden beneath the surface of awareness. We smile, laugh, cry, cringe, stand tall, shrug. These behaviours are natural, but they are also symbolic. Some of them, indeed, are pretty bizarre when you think about them. Why do we expose our teeth to express friendliness? Why do we leak lubricant from our eyes to communicate a need for help? Why do we laugh?
One of the first scientists to think about these questions was Charles Darwin. In his 1872 book, The Expression of the Emotions in Man and Animals, Darwin observed that all people express their feelings in more or less the same ways. He argued that we probably evolved these gestures from precursor actions in ancestral animals. A modern champion of the same idea is Paul Ekman, the American psychologist. Ekman categorised a basic set of human facial expressions — happy, frightened, disgusted, and so on — and found that they were the same across widely different cultures. People from tribal Papua New Guinea make the same smiles and frowns as people from the industrialised USA.
Our emotional expressions seem to be inborn, in other words: they are part of our evolutionary heritage. And yet their etymology, if I can put it that way, remains a mystery. Can we trace these social signals back to their evolutionary root, to some original behaviour of our ancestors? To explain them fully, we would have to follow the trail back until we left the symbolic realm altogether, until we came face to face with something that had nothing to do with communication. We would have to find the ox head in the letter A.
I think we can do that. [Continue reading…]
Embedded in the mud, glistening green and gold and black, was a butterfly, very beautiful and very dead.
“Not a little thing like that! Not a butterfly!” cried Eckels.
It fell to the floor, an exquisite thing, a small thing that could upset balances and knock down a line of small dominoes and then big dominoes and then gigantic dominoes, all down the years across Time. Eckels’ mind whirled. It couldn’t change things. Killing one butterfly couldn’t be that important! Could it? — Ray Bradbury, A Sound of Thunder, 1952
As one of the massive and probably irreversible consequences of climate change, the melting of the Northern Hemisphere’s permafrost is not an example of the butterfly effect. Yet the discovery of a giant virus that has come back to life after 30,000 years of frozen dormancy suggests many possibilities, including some akin to those envisaged by Ray Bradbury in his famous science fiction story.
Whereas his narrative required that the reader suspend disbelief by entertaining the idea of time travel, the thawing tundra may produce a very real kind of time travel if viruses or other microbes emerge from it as new invasive species.
Rather than being transported geographically as a result of human activity, these will spring suddenly from a distant past into an environment that may lack necessary evolutionary adaptations to accommodate their presence.
We are assured that Pithovirus sibericum poses no threat to humans — it just attacks amoebas. But our concern shouldn’t be limited to fears about the reemergence of something like an ancient strain of smallpox.
The rebirth of a pathogen that could strike phytoplankton — producers of half the world’s oxygen — would have a devastating impact on the planet.
BBC News reports: The ancient pathogen was discovered buried 30m (100ft) down in the frozen ground.
Called Pithovirus sibericum, it belongs to a class of giant viruses that were discovered 10 years ago.
These are all so large that, unlike other viruses, they can be seen under a microscope. And this one, measuring 1.5 micrometres in length, is the biggest that has ever been found.
The last time it infected anything was more than 30,000 years ago, but in the laboratory it has sprung to life once again.
Tests show that it attacks amoebas, which are single-celled organisms, but does not infect humans or other animals.
Co-author Dr Chantal Abergel, from the CNRS, said: “It comes into the cell, multiplies and finally kills the cell. It is able to kill the amoeba – but it won’t infect a human cell.”
However, the researchers believe that other more deadly pathogens could be locked in Siberia’s permafrost.
“We are addressing this issue by sequencing the DNA that is present in those layers,” said Dr Abergel.
“This would be the best way to work out what is dangerous in there.”
The researchers say this region is under threat. Since the 1970s, the permafrost has retreated and reduced in thickness, and climate change projections suggest it will decrease further.
It has also become more accessible, and is being eyed for its natural resources.
Study co-author Prof Jean-Michel Claverie warns that disturbing the deep layers could expose new viral threats.
He said: “It is a recipe for disaster. If you start having industrial explorations, people will start to move around the deep permafrost layers. Through mining and drilling, those old layers will be penetrated and this is where the danger is coming from.”
He told BBC News that ancient strains of the smallpox virus, which was declared eradicated 30 years ago, could pose a risk. [Continue reading…]
Helmholtz Centre for Environmental Research: Plants are also able to make complex decisions. At least this is what scientists from the Helmholtz Centre for Environmental Research (UFZ) and the University of Göttingen have concluded from their investigations on Barberry (Berberis vulgaris), which is able to abort its own seeds to prevent parasite infestation. The results are the first ecological evidence of complex behaviour in plants. They indicate that this species has a structural memory, is able to differentiate between inner and outer conditions, and can anticipate future risks, the scientists write in the renowned journal American Naturalist — the premier peer-reviewed American journal for theoretical ecology.
The European barberry or simply Barberry (Berberis vulgaris) is a species of shrub distributed throughout Europe. It is related to the Oregon grape (Mahonia aquifolium) that is native to North America and that has been spreading through Europe for years. Scientists compared both species and found a marked difference in parasite infestation: “a highly specialized species of tephritid fruit fly, whose larvae actually feed on the seeds of the native Barberry, was found to have a tenfold higher population density on its new host plant, the Oregon grape”, reports Dr. Harald Auge, a biologist at the UFZ.
This led scientists to examine the seeds of the Barberry more closely. Approximately 2000 berries were collected from different regions of Germany, examined for signs of piercing and then cut open to check for infestation by the larvae of the tephritid fruit fly (Rhagoletis meigenii). This parasite punctures the berries in order to lay its eggs inside them. If the larva is able to develop, it will often feed on all of the seeds in the berry. A special characteristic of the Barberry is that each berry usually has two seeds and that the plant is able to stop the development of its seeds in order to save its resources. This mechanism is also employed to defend against the tephritid fruit fly. If a seed is infested with the parasite, the developing larva will later feed on both seeds. If however the plant aborts the infested seed, then the parasite in that seed will also die and the second seed in the berry is saved. [Read more…]
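The trade-off described above amounts to a small expected-value calculation: keeping an infested seed gambles both seeds on the larva failing to develop, while aborting it sacrifices one seed to guarantee the other. The sketch below is not from the study; the function names and probability values are illustrative assumptions, included only to make the arithmetic of the plant's "decision" concrete.

```python
# Illustrative model (not from the study) of the Barberry's
# seed-abortion trade-off in a two-seed berry that has been pierced.

def expected_seeds(abort_infested, p_larva_develops, seeds_per_berry=2):
    """Expected number of surviving seeds in an infested berry.

    abort_infested   -- the plant's strategy for the pierced seed.
    p_larva_develops -- assumed chance the egg yields a larva that
                        eats every seed in the berry.
    """
    if abort_infested:
        # Aborting the pierced seed kills the parasite with it,
        # sacrificing one seed but securing the rest.
        return seeds_per_berry - 1
    # Keeping the seed is a gamble: if the larva develops it eats
    # all the seeds; otherwise every seed survives.
    return (1 - p_larva_develops) * seeds_per_berry

# Compare the two strategies across a range of assumed larval odds.
for p in (0.25, 0.50, 0.75):
    keep = expected_seeds(False, p)
    abort = expected_seeds(True, p)
    print(f"larva develops with p={p}: keep -> {keep:.2f} seeds, "
          f"abort -> {abort:.2f} seeds")
```

Under these assumptions, abortion pays off whenever the larva's chance of developing exceeds one half (1 guaranteed seed beats an expected 2 × (1 − p) seeds), which is one way to see why sacrificing an infested seed can be the better bet.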