Category Archives: Life

The secret language of plants

Kat McGowan writes: Up in the northern Sierra Nevada, the ecologist Richard Karban is trying to learn an alien language. The sagebrush plants that dot these slopes speak to one another, using words no human knows. Karban, who teaches at the University of California, Davis, is listening in, and he’s beginning to understand what they say.

The evidence for plant communication is only a few decades old, but in that short time it has leapfrogged from electrifying discovery to decisive debunking to resurrection. Two studies published in 1983 demonstrated that willow trees, poplars and sugar maples can warn each other about insect attacks: Intact, undamaged trees near ones that are infested with hungry bugs begin pumping out bug-repelling chemicals to ward off attack. They somehow know what their neighbors are experiencing, and react to it. The mind-bending implication was that brainless trees could send, receive and interpret messages.

The first few “talking tree” papers quickly were shot down as statistically flawed or too artificial, irrelevant to the real-world war between plants and bugs. Research ground to a halt. But the science of plant communication is now staging a comeback. Rigorous, carefully controlled experiments are overcoming those early criticisms with repeated testing in labs, forests and fields. It’s now well established that when bugs chew leaves, plants respond by releasing volatile organic compounds into the air. By Karban’s last count, 40 out of 48 studies of plant communication confirm that other plants detect these airborne signals and ramp up their production of chemical weapons or other defense mechanisms in response. “The evidence that plants release volatiles when damaged by herbivores is as sure as something in science can be,” said Martin Heil, an ecologist at the Mexican research institute Cinvestav Irapuato. “The evidence that plants can somehow perceive these volatiles and respond with a defense response is also very good.”

Plant communication may still be a tiny field, but the people who study it are no longer seen as a lunatic fringe. “It used to be that people wouldn’t even talk to you: ‘Why are you wasting my time with something we’ve already debunked?’” said Karban. “That’s now better for sure.” The debate is no longer whether plants can sense one another’s biochemical messages — they can — but about why and how they do it. [Continue reading…]


Spike in carbon dioxide linked to world’s largest, most sudden mass extinction


MIT News reports: The largest mass extinction in the history of animal life occurred some 252 million years ago, wiping out more than 96 percent of marine species and 70 percent of life on land — including the largest insects known to have inhabited the Earth. Multiple theories have aimed to explain the cause of what’s now known as the end-Permian extinction, including an asteroid impact, massive volcanic eruptions, or a cataclysmic cascade of environmental events. But pinpointing the cause of the extinction requires better measurements of how long the extinction period lasted.

Now researchers at MIT have determined that the end-Permian extinction occurred over 60,000 years, give or take 48,000 years — practically instantaneous, from a geologic perspective. The new timescale is based on more precise dating techniques, and indicates that the most severe extinction in history may have happened more than 10 times faster than scientists had previously thought.

“We’ve got the extinction nailed in absolute time and duration,” says Sam Bowring, the Robert R. Shrock Professor of Earth and Planetary Sciences at MIT. “How do you kill 96 percent of everything that lived in the oceans in tens of thousands of years? It could be that an exceptional extinction requires an exceptional explanation.”

In addition to establishing the extinction’s duration, Bowring, graduate student Seth Burgess, and a colleague from the Nanjing Institute of Geology and Paleontology also found that, 10,000 years before the die-off, the oceans experienced a pulse of light carbon, which likely reflects a massive addition of carbon dioxide to the atmosphere. This dramatic change may have led to widespread ocean acidification and increased sea temperatures by 10 degrees Celsius or more, killing the majority of sea life.

But what originally triggered the spike in carbon dioxide? The leading theory among geologists and paleontologists has to do with widespread, long-lasting volcanic eruptions from the Siberian Traps, a region of Russia whose steplike hills are the result of repeated eruptions of magma. To determine whether eruptions from the Siberian Traps triggered a massive increase in oceanic carbon dioxide, Burgess and Bowring are using similar dating techniques to establish a timescale for the Permian period’s volcanic eruptions, which are estimated to have released more than five million cubic kilometers of magma.

“It is clear that whatever triggered extinction must have acted very quickly,” says Burgess, the lead author of a paper that reports the results in this week’s Proceedings of the National Academy of Sciences, “fast enough to destabilize the biosphere before the majority of plant and animal life had time to adapt in an effort to survive.” [Continue reading…]


“The Earth is experiencing climate change now due to changes in the composition of the atmosphere. We think the atmosphere at the end-Permian was changed significantly by output from these volcanoes, and that the chemicals were similar to those going into the atmosphere today.” (Siberia Project)


Technological narcissism and the illusion of self-knowledge offered by the Quantified Self

Know thyself has been a maxim throughout the ages, rooted in the belief that wisdom and wise living demand we acquire self-knowledge.

As Shakespeare wrote:

This above all: to thine own self be true,
And it must follow, as the night the day,
Thou canst not then be false to any man.

New research on human feelings, however, seems to have the absurd implication that if you really want to know your inner being, you should probably carry around a mirror and pay close attention to your facial expressions. The researchers clearly believe that monitoring muscle contractions is a more reliable way of knowing what someone is feeling than using any kind of subjective measure. Reduced to this muscular view, it turns out — according to the research — that we only have four basic feelings.

Likewise, devotees of the Quantified Self seem to believe that it’s not really possible to know what it means to be alive unless one can be hooked up to and study the output from one or several digital devices.

In each of these cases we are witnessing a trend driven by technological development through which the self is externalized.

Thoreau warned that we have “become the tools of our tools,” but the danger lying beyond that is that we become our tools; that our sense of who we are becomes so pervasively mediated by devices that without these devices we conclude we are nothing.

Josh Cohen writes: With January over, the spirit of self-improvement in which you began the year can start to evaporate. Except now your feeble excuses are under assault from a glut of “self-tracking” devices and apps. Your weakness for saturated fats and alcohol, your troubled sleep and mood swings, your tendencies to procrastination, indecision and disorganisation — all your quirks and flaws can now be monitored and remedied with the help of mobile technology.

Technology offers solutions not only to familiar problems of diet, exercise and sleep, but to anxieties you weren’t even aware of. If you can’t resolve a moral dilemma, there’s an app that will solicit your friends’ advice. If you’re concerned about your toddler’s language development, there’s a small device that will measure the number and range of words she’s using against those of her young peers.

Quantified Self (QS) is a growing global movement selling a new form of wisdom, encapsulated in the slogan “self-knowledge through numbers”. Rooted in the American tech scene, it encourages people to monitor all aspects of their physical, emotional, cognitive, social, domestic and working lives. The wearable cameras that enable you to broadcast your life minute by minute; the nano-sensors that can be installed in any region of the body to track vital functions from blood pressure to cholesterol intake; the voice recorders that pick up the sound of your sleeping self or your baby’s babble — together, these devices can provide you with the means to regain control over your fugitive life.

This vision has traction at a time when our daily lives, as the Snowden leaks have revealed, are being lived in the shadow of state agencies, private corporations and terrorist networks — overwhelming yet invisible forces that leave us feeling powerless to maintain boundaries around our private selves. In a world where our personal data appears vulnerable to intrusion and exploitation, a movement that effectively encourages you to become your own spy is bound to resonate. Surveillance technologies will put us back in the centre of the lives from which they’d displaced us. Our authoritative command of our physiological and behavioural “numbers” can assure us that after all, no one knows us better than we do. [Continue reading…]


Too much to remember?

Benedict Carey writes: People of a certain age (and we know who we are) don’t spend much leisure time reviewing the research into cognitive performance and aging. The story is grim, for one thing: Memory’s speed and accuracy begin to slip around age 25 and keep on slipping.

The story is familiar, too, for anyone who is over 50 and, having finally learned to live fully in the moment, discovers it’s a senior moment. The finding that the brain slows with age is one of the strongest in all of psychology.

Over the years, some scientists have questioned this dotage curve. But these challenges have had an ornery-old-person slant: that the tests were biased toward the young, for example. Or that older people have learned not to care about clearly trivial things, like memory tests. Or that an older mind must organize information differently from one attached to some 22-year-old who records his every Ultimate Frisbee move on Instagram.

Now comes a new kind of challenge to the evidence of a cognitive decline, from a decidedly digital quarter: data mining, based on theories of information processing. In a paper published in Topics in Cognitive Science, a team of linguistic researchers from the University of Tübingen in Germany used advanced learning models to search enormous databases of words and phrases.

Since educated older people generally know more words than younger people, simply by virtue of having been around longer, the experiment simulates what an older brain has to do to retrieve a word. And when the researchers incorporated that difference into the models, the aging “deficits” largely disappeared.
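As a crude illustration of that retrieval argument, here is a toy sketch. To be clear, this is my own invention rather than the Tübingen team’s model, and every constant in it is made up; it shows only how a measured slowdown can fall out of a bigger lexicon with no decline in the machinery doing the searching.

```python
# Toy sketch of the retrieval argument -- NOT the Tübingen team's model.
# Vocabulary sizes and timing constants are invented for illustration.
import math
import random

random.seed(42)

def retrieval_time_ms(vocab_size, base_ms=200.0, cost_ms=55.0):
    """Assume lookup cost grows with the log of lexicon size, plus noise.
    A deliberately simple stand-in for a discrimination-learning model."""
    return base_ms + cost_ms * math.log(vocab_size) + random.gauss(0, 10)

young_vocab = 30_000   # hypothetical lexicon of a 20-year-old
older_vocab = 60_000   # hypothetical lexicon of a 70-year-old

young = sum(retrieval_time_ms(young_vocab) for _ in range(1000)) / 1000
older = sum(retrieval_time_ms(older_vocab) for _ in range(1000)) / 1000

print(f"young: {young:.0f} ms   older: {older:.0f} ms")
# The ~40 ms gap comes entirely from knowing more words, not from any
# simulated decline in processing speed.
```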

“What shocked me, to be honest, is that for the first half of the time we were doing this project, I totally bought into the idea of age-related cognitive decline in healthy adults,” the lead author, Michael Ramscar, said by email. But the simulations, he added, “fit so well to human data that it slowly forced me to entertain this idea that I didn’t need to invoke decline at all.” [Continue reading…]


Rosetta the comet-chasing spacecraft wakes up

The Guardian reports: In 2004, the European Space Agency launched the Rosetta probe on an audacious mission to chase down a comet and place a robot on its surface. For nearly three years Rosetta had been hurtling through space in a state of hibernation. On Monday, it awoke.

The radio signal from Rosetta came from 800m kilometres away, a distance made hardly more conceivable by its proximity to Jupiter. The signal appeared on a computer screen as a tremulous green spike, but it meant the world – perhaps the solar system – to the scientists and engineers gathered at European Space Operations Centre in Darmstadt.
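For a sense of that distance, a quick back-of-the-envelope calculation (mine, not the Guardian’s) gives the one-way travel time of the radio signal itself:

```python
# One-way light travel time across roughly 800 million km.
distance_km = 800e6
c_km_s = 299_792.458   # speed of light, km/s

delay_min = distance_km / c_km_s / 60
print(f"one-way signal delay: ~{delay_min:.0f} minutes")   # ~44 minutes
```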

In a time when every spacecraft worth its salt has a Twitter account, the inevitable message followed from @Esa_Rosetta. It was brief and joyful: “Hello, world!”

Speaking to the assembled crowd at Darmstadt, Matt Taylor, project scientist on the Rosetta mission, said: “Now it’s up to us to do the work we’ve promised to do.”

Just 10 minutes before, he’d been facing an uncertain future. If the spacecraft had not woken up, there would be no science to do and the role of project scientist would have been redundant.

The comet hunter had been woken by an internal alarm clock at 10am UK time, but only after several hours of warming up its instruments and orientating towards Earth could it send a message home.

In the event, the missive was late. Taylor had been hiding his nerves well, even joking about the wait on Twitter, but when the clock passed 19:00 CET, making the signal at least 15 minutes late, the mood changed. ESA scientists and engineers started rocking on their heels, clutching their arms around themselves, and stopping the banter that had helped pass the time. Taylor himself sat down, and seemed to withdraw.

Then came the flood of relief when the blip on the graph appeared. “I told you it would work,” said Taylor with a grin.

The successful rousing of the distant probe marks a crucial milestone in a mission that is more spectacular and ambitious than any the European Space Agency has conceived. The €1bn, car-sized spacecraft will now close in on a comet, orbit around it, and send down a lander, called Philae, the first time such a feat has been attempted.

The comet, 67P/Churyumov-Gerasimenko, is 4km wide, or roughly the size of Mont Blanc. That is big enough to study, but too measly to have a gravitational field strong enough to hold the lander in place. Instead, the box of sensors on legs will latch on to the comet by firing an explosive harpoon the moment it lands, and twisting ice screws into its surface.
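A rough calculation makes plain why gravity alone won’t hold the lander down. The inputs below are my own assumptions (a spherical nucleus and a density of 500 kg/m³, a commonly cited ballpark for comet nuclei), not mission data:

```python
# Why a 4 km comet can't hold a lander by gravity alone.
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
radius_m = 2_000.0   # half of the 4 km diameter
density = 500.0      # kg/m^3 -- assumed, not measured

mass_kg = density * (4.0 / 3.0) * math.pi * radius_m**3
g_surface = G * mass_kg / radius_m**2
v_escape = math.sqrt(2 * G * mass_kg / radius_m)

print(f"mass            ~ {mass_kg:.1e} kg")
print(f"surface gravity ~ {g_surface:.1e} m/s^2")   # ~3e-4 m/s^2
print(f"escape velocity ~ {v_escape:.1f} m/s")      # ~1 m/s
```

With an escape velocity of roughly walking pace, an unanchored lander could drift off after the slightest bounce, hence the harpoon and the ice screws.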

To my mind, the long-standing belief that comets may have had a role in the origin of life on Earth seems even more speculative than Jeremy England’s idea that it could have arisen as a result of physical processes originating on this planet’s surface.

As much as anything, what seems so extraordinary about a project such as Rosetta is its accomplishment as a feat of navigation (assuming that it does in fact make its planned rendezvous with the comet). It’s like aiming an arrow at a speck of dust on a moving target in the dark.


A new theory about the origin of life


Evolution explains how life changes, but it doesn’t explain how it came into existence. A young physicist at MIT has now come up with a mathematical formula which suggests that, given the right set of conditions, the emergence of living forms is not merely possible; it seems almost inevitable.

Let there be light, shining on atoms, and there will eventually be life.

Quanta magazine: Why does life exist?

Popular hypotheses credit a primordial soup, a bolt of lightning and a colossal stroke of luck. But if a provocative new theory is correct, luck may have little to do with it. Instead, according to the physicist proposing the idea, the origin and subsequent evolution of life follow from the fundamental laws of nature and “should be as unsurprising as rocks rolling downhill.”

From the standpoint of physics, there is one essential difference between living things and inanimate clumps of carbon atoms: The former tend to be much better at capturing energy from their environment and dissipating that energy as heat. Jeremy England, a 31-year-old assistant professor at the Massachusetts Institute of Technology, has derived a mathematical formula that he believes explains this capacity. The formula, based on established physics, indicates that when a group of atoms is driven by an external source of energy (like the sun or chemical fuel) and surrounded by a heat bath (like the ocean or atmosphere), it will often gradually restructure itself in order to dissipate increasingly more energy. This could mean that under certain conditions, matter inexorably acquires the key physical attribute associated with life.
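The article doesn’t reproduce the formula itself. For readers who want a taste of it, the central bound in England’s 2013 paper on the statistical physics of self-replication takes roughly the following form (my paraphrase of its general shape; consult the paper for the precise statement):

\[
\beta\,\langle \Delta Q \rangle_{\mathrm{I}\to\mathrm{II}}
\;+\; \ln\!\frac{\pi(\mathrm{II}\to\mathrm{I})}{\pi(\mathrm{I}\to\mathrm{II})}
\;+\; \Delta S_{\mathrm{int}} \;\ge\; 0
\]

Here β is the inverse temperature of the surrounding bath, ⟨ΔQ⟩ is the average heat released during the transition from macrostate I to II, the π terms are the forward and reverse transition probabilities, and ΔS_int is the change in the system’s internal entropy. Read loosely: the harder a transition is to reverse, the more heat it must dissipate, so structures that reliably persist and replicate tend to be good dissipators.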

“You start with a random clump of atoms, and if you shine light on it for long enough, it should not be so surprising that you get a plant,” England said.

England’s theory is meant to underlie, rather than replace, Darwin’s theory of evolution by natural selection, which provides a powerful description of life at the level of genes and populations. “I am certainly not saying that Darwinian ideas are wrong,” he explained. “On the contrary, I am just saying that from the perspective of the physics, you might call Darwinian evolution a special case of a more general phenomenon.”

His idea, detailed in a recent paper and further elaborated in a talk he is delivering at universities around the world, has sparked controversy among his colleagues, who see it as either tenuous or a potential breakthrough, or both.

England has taken “a very brave and very important step,” said Alexander Grosberg, a professor of physics at New York University who has followed England’s work since its early stages. The “big hope” is that he has identified the underlying physical principle driving the origin and evolution of life, Grosberg said.

“Jeremy is just about the brightest young scientist I ever came across,” said Attila Szabo, a biophysicist in the Laboratory of Chemical Physics at the National Institutes of Health who corresponded with England about his theory after meeting him at a conference. “I was struck by the originality of the ideas.”

Others, such as Eugene Shakhnovich, a professor of chemistry, chemical biology and biophysics at Harvard University, are not convinced. “Jeremy’s ideas are interesting and potentially promising, but at this point are extremely speculative, especially as applied to life phenomena,” Shakhnovich said.

England’s theoretical results are generally considered valid. It is his interpretation — that his formula represents the driving force behind a class of phenomena in nature that includes life — that remains unproven. But already, there are ideas about how to test that interpretation in the lab. [Continue reading…]


Why we find it difficult to face the future

Alisa Opar writes: The British philosopher Derek Parfit espoused a severely reductionist view of personal identity in his seminal book, Reasons and Persons: It does not exist, at least not in the way we usually consider it. We humans, Parfit argued, are not a consistent identity moving through time, but a chain of successive selves, each tangentially linked to, and yet distinct from, the previous and subsequent ones. The boy who begins to smoke despite knowing that he may suffer from the habit decades later should not be judged harshly: “This boy does not identify with his future self,” Parfit wrote. “His attitude towards this future self is in some ways like his attitude to other people.”

Parfit’s view was controversial even among philosophers. But psychologists are beginning to understand that it may accurately describe our attitudes towards our own decision-making: It turns out that we see our future selves as strangers. Though we will inevitably share their fates, the people we will become in a decade, quarter century, or more, are unknown to us. This impedes our ability to make good choices on their—which of course is our own—behalf. That bright, shiny New Year’s resolution? If you feel perfectly justified in breaking it, it may be because it feels like it was a promise someone else made.

“It’s kind of a weird notion,” says Hal Hershfield, an assistant professor at New York University’s Stern School of Business. “On a psychological and emotional level we really consider that future self as if it’s another person.”

Using fMRI, Hershfield and colleagues studied brain activity changes when people imagine their future and consider their present. They homed in on two areas of the brain called the medial prefrontal cortex and the rostral anterior cingulate cortex, which are more active when a subject thinks about himself than when he thinks of someone else. They found these same areas were more strongly activated when subjects thought of themselves today, than of themselves in the future. Their future self “felt” like somebody else. In fact, their neural activity when they described themselves in a decade was similar to that when they described Matt Damon or Natalie Portman. [Continue reading…]


Rapid loss of top predators ‘a major environmental threat’

The Guardian reports: The rapid loss of top predators such as dingoes, leopards and lions is causing an environmental threat comparable to climate change, an international group of scientists has warned.

A study by researchers from Australia, the US and Europe found that removing large carnivores, which has happened worldwide in the past 200 years, causes a raft of harmful reactions to cascade through food chains and landscapes.

Small animals are picked off by feral pests, land is denuded of vegetation as herbivore numbers increase and streams and rivers are even diverted as a result of this loss of carnivores, the ecologists found.

“There is now a substantial body of research demonstrating that, alongside climate change, eliminating large carnivores is one of the most significant anthropogenic impacts on nature,” the study states.

The research looked at the ecological impact of the world’s 31 largest mammalian carnivores, with the largest body of information gathered on seven key species – the dingo, grey wolf, lion, leopard, sea otter, lynx and puma. [Continue reading…]


The intelligent plant

Michael Pollan writes: In 1973, a book claiming that plants were sentient beings that feel emotions, prefer classical music to rock and roll, and can respond to the unspoken thoughts of humans hundreds of miles away landed on the New York Times best-seller list for nonfiction. “The Secret Life of Plants,” by Peter Tompkins and Christopher Bird, presented a beguiling mashup of legitimate plant science, quack experiments, and mystical nature worship that captured the public imagination at a time when New Age thinking was seeping into the mainstream. The most memorable passages described the experiments of a former C.I.A. polygraph expert named Cleve Backster, who, in 1966, on a whim, hooked up a galvanometer to the leaf of a dracaena, a houseplant that he kept in his office. To his astonishment, Backster found that simply by imagining the dracaena being set on fire he could make it rouse the needle of the polygraph machine, registering a surge of electrical activity suggesting that the plant felt stress. “Could the plant have been reading his mind?” the authors ask. “Backster felt like running into the street and shouting to the world, ‘Plants can think!’ ”

Backster and his collaborators went on to hook up polygraph machines to dozens of plants, including lettuces, onions, oranges, and bananas. He claimed that plants reacted to the thoughts (good or ill) of humans in close proximity and, in the case of humans familiar to them, over a great distance. In one experiment designed to test plant memory, Backster found that a plant that had witnessed the murder (by stomping) of another plant could pick out the killer from a lineup of six suspects, registering a surge of electrical activity when the murderer was brought before it. Backster’s plants also displayed a strong aversion to interspecies violence. Some had a stressful response when an egg was cracked in their presence, or when live shrimp were dropped into boiling water, an experiment that Backster wrote up for the International Journal of Parapsychology, in 1968.

In the ensuing years, several legitimate plant scientists tried to reproduce the “Backster effect” without success. Much of the science in “The Secret Life of Plants” has been discredited. But the book had made its mark on the culture. Americans began talking to their plants and playing Mozart for them, and no doubt many still do. This might seem harmless enough; there will probably always be a strain of romanticism running through our thinking about plants. (Luther Burbank and George Washington Carver both reputedly talked to, and listened to, the plants they did such brilliant work with.) But in the view of many plant scientists “The Secret Life of Plants” has done lasting damage to their field. According to Daniel Chamovitz, an Israeli biologist who is the author of the recent book “What a Plant Knows,” Tompkins and Bird “stymied important research on plant behavior as scientists became wary of any studies that hinted at parallels between animal senses and plant senses.” Others contend that “The Secret Life of Plants” led to “self-censorship” among researchers seeking to explore the “possible homologies between neurobiology and phytobiology”; that is, the possibility that plants are much more intelligent and much more like us than most people think—capable of cognition, communication, information processing, computation, learning, and memory.

The quotation about self-censorship appeared in a controversial 2006 article in Trends in Plant Science proposing a new field of inquiry that the authors, perhaps somewhat recklessly, elected to call “plant neurobiology.” The six authors—among them Eric D. Brenner, an American plant molecular biologist; Stefano Mancuso, an Italian plant physiologist; František Baluška, a Slovak cell biologist; and Elizabeth Van Volkenburgh, an American plant biologist—argued that the sophisticated behaviors observed in plants cannot at present be completely explained by familiar genetic and biochemical mechanisms. Plants are able to sense and optimally respond to so many environmental variables—light, water, gravity, temperature, soil structure, nutrients, toxins, microbes, herbivores, chemical signals from other plants—that there may exist some brainlike information-processing system to integrate the data and coördinate a plant’s behavioral response. The authors pointed out that electrical and chemical signalling systems have been identified in plants which are homologous to those found in the nervous systems of animals. They also noted that neurotransmitters such as serotonin, dopamine, and glutamate have been found in plants, though their role remains unclear.

Hence the need for plant neurobiology, a new field “aimed at understanding how plants perceive their circumstances and respond to environmental input in an integrated fashion.” The article argued that plants exhibit intelligence, defined by the authors as “an intrinsic ability to process information from both abiotic and biotic stimuli that allows optimal decisions about future activities in a given environment.” Shortly before the article’s publication, the Society for Plant Neurobiology held its first meeting, in Florence, in 2005. A new scientific journal, with the less tendentious title Plant Signaling & Behavior, appeared the following year.

Depending on whom you talk to in the plant sciences today, the field of plant neurobiology represents either a radical new paradigm in our understanding of life or a slide back down into the murky scientific waters last stirred up by “The Secret Life of Plants.” Its proponents believe that we must stop regarding plants as passive objects—the mute, immobile furniture of our world—and begin to treat them as protagonists in their own dramas, highly skilled in the ways of contending in nature. They would challenge contemporary biology’s reductive focus on cells and genes and return our attention to the organism and its behavior in the environment. It is only human arrogance, and the fact that the lives of plants unfold in what amounts to a much slower dimension of time, that keep us from appreciating their intelligence and consequent success. Plants dominate every terrestrial environment, composing ninety-nine per cent of the biomass on earth. By comparison, humans and all the other animals are, in the words of one plant neurobiologist, “just traces.” [Continue reading…]


Stories made present

Richard Hamilton writes: My first job was as a lawyer. I was not a very happy or inspired lawyer. One night I was driving home listening to a radio report, and there is something very intimate about radio: a voice comes out of a machine and into the listener’s ear. With rain pounding the windscreen and only the dashboard lights and the stereo for company, I thought to myself, ‘This is what I want to do.’ So I became a radio journalist.

As broadcasters, we are told to imagine speaking to just one person. My tutor at journalism college told me that there is nothing as captivating as the human voice saying something of interest (he added that radio is better than TV because it has the best pictures). We remember where we were when we heard a particular story. Even now when I drive in my car, the memory of a scene from a radio play can be ignited by a bend in a country road or a set of traffic lights in the city.

But potent as radio seems, can a recording device ever fully replicate the experience of listening to a live storyteller? The folklorist Joseph Bruchac thinks not. ‘The presence of teller and audience, and the immediacy of the moment, are not fully captured by any form of technology,’ he wrote in a comment piece for The Guardian in 2010. ‘Unlike the insect frozen in amber, a told story is alive… The story breathes with the teller’s breath.’ And as devoted as I am to radio, my recent research into oral storytelling makes me think that Bruchac may be right. [Continue reading…]


Talking to animals


As a child, I was once taken to a small sad zoo near the Yorkshire seaside town of Scarborough. There were only a handful of animals and my attention was quickly drawn by a solitary chimpanzee.

We soon sat face-to-face within arm’s reach, exchanging calls, and became absorbed in what seemed like communication — even if there were no words. In the eyes of another primate we see mirrors of inquiry. Just as much as I wanted to talk to the chimp, it seemed like he wanted to talk to me. His sorrow, like that of all captives, could not be held tight by silence.

The rest of my family eventually tired of my interest in learning how to speak chimpanzee. After all, talking to animals is something that only small children are willing to take seriously. Supposedly, it is just another childish exercise of the imagination — the kind of behavior that as we grow older we grow out of.

This notion of outgrowing a sense of kinship with other creatures implies an upward advance, yet in truth we don’t outgrow these experiences of connection; we simply move away from them. We imagine we are leaving behind something we no longer need, whereas in fact we are losing something we have forgotten how to appreciate.

Like so many other aspects of maturation, the process through which adults forget their connections to the non-human world involves a dulling of the senses. As we age, we become less alive, less attuned and less receptive to life’s boundless expressions. The insatiable curiosity we had as children slowly withers as the mental constructs that form a known world cut away and displace our passion for exploration.

Within this known and ordered world, the idea that an adult would describe herself as an animal communicator naturally provokes skepticism. Is this a person living in a fantasy world? Or is she engaged in a hoax, cynically exploiting the longings of others, such as the desire to rediscover a lost childhood?

Whether Anna Breytenbach (who features in the video below) can see inside the minds of animals, I have no way of knowing, but that animals have minds and that they can have what we might regard as intensely human experiences — such as the feeling of loss — I have no doubt.

The cultural impact of science, often more colored by belief than by reason, is to suggest that whenever we reflect on the experience of animals we are perpetually at risk of falling into the trap of anthropomorphization. The greater risk, however, is that we unquestioningly accept this assumption: that even if as humans we are the culmination of an evolutionary process that goes all the way back to the formation of amino acids, at the apex of this process we somehow stand apart. We can observe the animal kingdom, and yet as humans we have risen above it.

What actually sets us apart in the most significant way, however, is not the collection of attributes that define human uniqueness, but this very idea of our separateness — the idea that we are here and nature is out there.


Neanderthals and the dead

The New York Times reports: Early in the 20th century, two brothers discovered a nearly complete Neanderthal skeleton in a pit inside a cave at La Chapelle-aux-Saints, in southwestern France. The discovery raised the possibility that these evolutionary relatives of ours intentionally buried their dead — at least 50,000 years ago, before the arrival of anatomically modern humans in Europe.

These and at least 40 subsequent discoveries, a few as far from Europe as Israel and Iraq, appeared to suggest that Neanderthals, long thought of as brutish cave dwellers, actually had complex funeral practices. Yet a significant number of researchers have since objected that the burials were misinterpreted, and might not represent any advance in cognitive and symbolic behavior.

Now an international team of scientists is reporting that a 13-year re-examination of the burials at La Chapelle-aux-Saints supports the earlier claims that the burials were intentional.

The researchers — archaeologists, geologists and paleoanthropologists — not only studied the skeleton from the original excavations, but found more Neanderthal remains, from two children and an adult. They also studied the bones of other animals in the cave, mainly bison and reindeer, and the geology of the burial pits.

The findings, in this week’s issue of Proceedings of the National Academy of Sciences, “buttress claims for complex symbolic behavior among Western European Neanderthals,” the scientists reported.

William Rendu, the paper’s lead author and a researcher at the Center for International Research in the Humanities and Social Sciences in New York, said in an interview that the geology of the burial pits “cannot be explained by natural events” and that “there is no sign of weathering and scavenging by animals,” which means the bodies were covered soon after death.

“While we cannot know if this practice was part of a ritual or merely pragmatic,” Dr. Rendu said in a statement issued by New York University, “the discovery reduces the behavioral distance between them and us.” [Continue reading…]


The most arrogant creatures on Earth

Dominique Mosbergen writes: Researchers from the University of Adelaide in Australia argue in an upcoming book, The Dynamic Human, that humans really aren’t much smarter than other creatures — and that some animals may actually be brighter than we are.

“For millennia, all kinds of authorities — from religion to eminent scholars — have been repeating the same idea ad nauseam, that humans are exceptional by virtue that they are the smartest in the animal kingdom,” the book’s co-author Dr. Arthur Saniotis, a visiting research fellow with the university’s School of Medical Sciences, said in a written statement. “However, science tells us that animals can have cognitive faculties that are superior to human beings.”

Not to mention, ongoing research on intelligence and primate brain evolution backs the idea that humans aren’t the cleverest creatures on Earth, co-author Dr. Maciej Henneberg, a professor also at the School of Medical Sciences, told The Huffington Post in an email.

The researchers said the belief in the superiority of human intelligence can be traced back around 10,000 years to the Agricultural Revolution, when humans began domesticating animals. The idea was reinforced with the advent of organized religion, which emphasized human beings’ superiority over other creatures. [Continue reading…]

At various times in my life, I’ve crossed paths with people possessing immense wealth and power, providing me with glimpses of the mindset of those who regard themselves as the most important people on this planet.

From what I can tell, the concentration of great power does not coincide with the expression of great intelligence. What is far more evident is a great sense of entitlement, which is to say a self-validating sense that power rests where power belongs and that the inequality in its distribution is a reflection of some kind of natural order.

Since this self-serving perception of hierarchical order operates among humans and since humans as a species wield so much more power than any other, it’s perhaps not surprising that we exhibit the same kind of hubris collectively that we see individually in the most dominant among us.

Nevertheless, it is becoming increasingly clear that our sense of superiority is rooted in ignorance.

Amit Majmudar writes: There may come a time when we cease to regard animals as inferior, preliminary iterations of the human—with the human thought of as the pinnacle of evolution so far—and instead regard all forms of life as fugue-like elaborations of a single musical theme.

Animals are routinely superhuman in one way or another. They outstrip us in this or that perceptual or physical ability, and we think nothing of it. It is only our kind of superiority (in the use of tools, basically) that we select as the marker of “real” superiority. A human being with an elephant’s hippocampus would end up like Funes the Memorious in the story by Borges; a human being with a dog’s olfactory bulb would become a Vermeer of scent, but his art would be lost on the rest of us, with our visually dominated brains. The poetry of the orcas is yet to be translated; I suspect that the whale sagas will have much more interesting things in them than the tablets and inscriptions of Sumer and Akkad.

If science should ever persuade people of this biological unity, it would be of far greater benefit to the species than penicillin or cardiopulmonary bypass; of far greater benefit to the planet than the piecemeal successes of environmental activism. We will have arrived, by study and reasoning, at the intuitive, mystical insights of poets.


The misleading metaphor of the selfish gene

David Dobbs writes: A couple of years ago, at a massive conference of neuroscientists — 35,000 attendees, scores of sessions going at any given time — I wandered into a talk that I thought would be about consciousness but proved (wrong room) to be about grasshoppers and locusts. At the front of the room, a bug-obsessed neuroscientist named Steve Rogers was describing these two creatures — one elegant, modest, and well-mannered, the other a soccer hooligan.

The grasshopper, he noted, sports long legs and wings, walks low and slow, and dines discreetly in solitude. The locust scurries hurriedly and hoggishly on short, crooked legs and joins hungrily with others to form swarms that darken the sky and descend to chew the farmer’s fields bare.

Related, yes, just as grasshoppers and crickets are. But even someone as insect-ignorant as I could see that the hopper and the locust were wildly different animals — different species, doubtless, possibly different genera. So I was quite amazed when Rogers told us that grasshopper and locust are in fact the same species, even the same animal, and that, as Jekyll is Hyde, one can morph into the other at alarmingly short notice.

Not all grasshopper species, he explained (there are some 11,000), possess this morphing power; some always remain grasshoppers. But every locust was, and technically still is, a grasshopper — not a different species or subspecies, but a sort of hopper gone mad. If faced with clues that food might be scarce, such as hunger or crowding, certain grasshopper species can transform within days or even hours from their solitudinous hopper states to become part of a maniacally social locust scourge. They can also return quickly to their original form.

In the most infamous species, Schistocerca gregaria, the desert locust of Africa, the Middle East and Asia, these phase changes (as this morphing process is called) occur when crowding spurs a temporary spike in serotonin levels, which causes changes in gene expression so widespread and powerful they alter not just the hopper’s behaviour but its appearance and form. Legs and wings shrink. Subtle camo colouring turns conspicuously garish. The brain grows to manage the animal’s newly complicated social world, which includes the fact that, if a locust moves too slowly amid its million cousins, the cousins directly behind might eat it.

How does this happen? Does something happen to their genes? Yes, but — and here was the point of Rogers’s talk — their genes don’t actually change. That is, they don’t mutate or in any way alter the genetic sequence or DNA. Nothing gets rewritten. Instead, this bug’s DNA — the genetic book with millions of letters that form the instructions for building and operating a grasshopper — gets reread so that the very same book becomes the instructions for operating a locust. Even as one animal becomes the other, as Jekyll becomes Hyde, its genome stays unchanged. Same genome, same individual, but, I think we can all agree, quite a different beast.

Why?

Transforming the hopper is gene expression — a change in how the hopper’s genes are ‘expressed’, or read out. Gene expression is what makes a gene meaningful, and it’s vital for distinguishing one species from another. We humans, for instance, share more than half our genomes with flatworms; about 60 per cent with fruit flies and chickens; 80 per cent with cows; and 99 per cent with chimps. Those genetic distinctions aren’t enough to create all our differences from those animals — what biologists call our particular phenotype, which is essentially the recognisable thing a genotype builds. This means that we are human, rather than wormlike, flylike, chickenlike, feline, bovine, or excessively simian, less because we carry different genes from those other species than because our cells read differently our remarkably similar genomes as we develop from zygote to adult. The writing varies — but hardly as much as the reading.

This raises a question: if merely reading a genome differently can change organisms so wildly, why bother rewriting the genome to evolve? How vital, really, are actual changes in the genetic code? Do we even need DNA changes to adapt to new environments? Is the importance of the gene as the driver of evolution being overplayed?

You’ve probably noticed that these questions are not gracing the cover of Time or haunting Oprah, Letterman, or even TED talks. Yet for more than two decades they have been stirring a heated argument among geneticists and evolutionary theorists. As evidence of the power of rapid gene expression mounts, these questions might (or might not, for pesky reasons we’ll get to) begin to change not only mainstream evolutionary theory but our more everyday understanding of evolution. [Continue reading…]


Google’s plan to prolong human suffering

The idiots in Silicon Valley — many of whom aren’t yet old enough to have experienced adventures of aging like getting a colonoscopy — seem to picture “life-extension” as though it means more time to improve one’s tennis strokes in a future turned into a never-ending vacation. But what life extension is much more likely to mean is prolonged infirmity.

If it were possible to build an economy around “health care” — which should more accurately be called disease management — then the prospect of a perpetually expanding population of the infirm might look like a golden business opportunity, but what we’re really looking at is an economy built on false promises.

Daniel Callahan writes: This fall Google announced that it would venture into territory far removed from Internet search. Through a new company, Calico, it will be “tackling” the “challenge” of aging.

The announcement, though, was vague about what exactly the challenge is and how exactly Google means to tackle it. Calico may, with the aid of Big Data, simply intensify present efforts to treat the usual chronic diseases that afflict the elderly, like cancer, heart disease and Alzheimer’s. But there is a more ambitious possibility: to “treat” the aging process itself, in an attempt to slow it.

Of course, the dream of beating back time is an old one. Shakespeare had King Lear lament the tortures of aging, while the myth of Ponce de Leon’s Fountain of Youth in Florida and the eternal life of the Struldbrugs in “Gulliver’s Travels” both fed the notion of overcoming aging.

For some scientists, recent anti-aging research — on gene therapy, body-part replacement by regeneration and nanotechnology for repairing aging cells — has breathed new life into this dream. Optimists about average life expectancy’s surpassing 100 years in the coming century, like James W. Vaupel, the founder and director of the Max Planck Institute for Demographic Research in Germany, cite promising animal studies in which the lives of mice have been extended through genetic manipulation and low-calorie diets. They also point to the many life-extending medical advances of the past century as precedents, with no end in sight, and note that average life expectancy in the United States has long been rising, from 47.3 in 1900 to 78.7 in 2010.

Others are less sanguine. S. Jay Olshansky, a research associate at the Center on Aging at the University of Chicago, has pointed out that sharp reductions in infant mortality explain most of that rise. Even if some people lived well into old age, the death of 50 percent or more of infants and children for most of history kept the average life expectancy down. As those deaths fell drastically over the past century, life expectancy increased, helped by improvements in nutrition, a decline in infectious disease and advances in medicine. But there is no reason to think another sharp drop of that sort is in the cards.
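Olshansky’s arithmetic is easy to verify with toy numbers (mine, purely illustrative):

```python
# Toy check of Olshansky's point -- numbers invented for illustration.
def life_expectancy(infant_mortality, infant_age=1.0, adult_age=65.0):
    """Expected age at death in a two-outcome population:
    die in infancy, or live a typical adult lifespan."""
    return infant_mortality * infant_age + (1 - infant_mortality) * adult_age

print(life_expectancy(0.30))   # ~45.8 years with 30% infant mortality
print(life_expectancy(0.01))   # ~64.4 years with  1% infant mortality
# Life expectancy jumps ~19 years without a single adult living
# longer -- and that one-off gain cannot be repeated.
```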

Even if anti-aging research could give us radically longer lives someday, though, should we even be seeking them? Regardless of what science makes possible, or what individual people want, aging is a public issue with social consequences, and these must be thought through.

Consider how dire the cost projections for Medicare already are. In 2010 more than 40 million Americans were over 65. In 2030 there will be slightly more than 72 million, and in 2050 more than 83 million. The Congressional Budget Office has projected a rise of Medicare expenditures to 5.8 percent of gross domestic product in 2038 from 3.5 percent today, a burden often declared unsustainable.

Modern medicine is very good at keeping elderly people with chronic diseases expensively alive. At 83, I’m a good example. I’m on oxygen at night for emphysema, and three years ago I needed a seven-hour emergency heart operation to save my life. Just 10 percent of the population — mainly the elderly — consumes about 80 percent of health care expenditures, primarily on expensive chronic illnesses and end-of-life costs. Historically, the longer lives that medical advances have given us have run exactly parallel to the increase in chronic illness and the explosion in costs. Can we possibly afford to live even longer — much less radically longer?

At the heart of the idiocy that Silicon Valley cultivates lies a host of profound misconceptions about the nature of time.

Having become slaves of technology, we take it as given that time is measured by clocks and calendars. Life extension is thus conceived in purely numerical terms. Yet the time that matters is not the time that can be measured by any device.

That’s why, in an age in which technology was supposed to reward us all with extra time, we instead experience time as perpetually compressed.

Having been provided with the means to do more and more things at the same time — text, tweet, talk etc. — our attention gets sliced into narrower and narrower slivers, and the more time gets filled, the more time-impoverished we become.

For the narcissist, there can be no greater fear than the prospect of the termination of individual existence, yet death is truly intrinsic to life. What it enables is not simply annihilation but, more importantly, renewal.


Are we alone in the Universe?

Paul Davies writes: The recent announcement by a team of astronomers that there could be as many as 40 billion habitable planets in our galaxy has further fueled the speculation, popular even among many distinguished scientists, that the universe is teeming with life.

The astronomer Geoffrey W. Marcy of the University of California, Berkeley, an experienced planet hunter and co-author of the study that generated the finding, said that it “represents one great leap toward the possibility of life, including intelligent life, in the universe.”

But “possibility” is not the same as likelihood. If a planet is to be inhabited rather than merely habitable, two basic requirements must be met: the planet must first be suitable and then life must emerge on it at some stage.

What can be said about the chances of life starting up on a habitable planet? Darwin gave us a powerful explanation of how life on Earth evolved over billions of years, but he would not be drawn out on the question of how life got going in the first place. “One might as well speculate about the origin of matter,” he quipped. In spite of intensive research, scientists are still very much in the dark about the mechanism that transformed a nonliving chemical soup into a living cell. But without knowing the process that produced life, the odds of its happening can’t be estimated.

When I was a student in the 1960s, the prevailing view among scientists was that life on Earth was a freak phenomenon, the result of a sequence of chemical accidents so rare that they would be unlikely to have happened twice in the observable universe. “Man at last knows he is alone in the unfeeling immensity of the universe, out of which he has emerged only by chance,” wrote the biologist Jacques Monod. Today the pendulum has swung dramatically, and many distinguished scientists claim that life will almost inevitably arise in Earthlike conditions. Yet this decisive shift in view is based on little more than a hunch, rather than an improved understanding of life’s origin. [Continue reading…]


Learning from nature

Human beings have great skill and ingenuity in building machines, yet to the extent that we see ourselves as machine-builders and tool-users, we easily lose touch with the reality that we are organisms that can only exist because we coexist in an incredibly complex set of relations with constellations of other organisms.

Through a fixation on our capacities as agents of change, we see ourselves as distinct, individual, and set apart, yet in fact each of our bodies is really a society in which the cells we claim as our own are vastly outnumbered by bacteria that are not only essential for the assimilation of nutrients but also regulate our immune systems and even affect neurotransmitters in the brain. Our sense of autonomy is pure fiction.

When scientists re-engineer bacteria (see “Redesigning nature”), they are not simply making alterations to the DNA. They are also imposing the machine-builder’s mentality on the natural world. They are assuming that if nature can be shaped in accordance with human designs, it can be improved.

Patrick Blanc is a French botanist and creator of vertical gardens.

I just stumbled across Blanc’s work, so I actually have no idea what he thinks, yet his vertical gardens seem to be an expression of the opposite of the bioengineers’ orientation.

Turning the stark face of a building into a vibrant garden seems like a good way of showing that nature offers vastly more to the human world than we can produce by “enhancing” nature.

Instead of figuring out how we can redesign nature — as though we are its masters — we need to be informed by nature, that we might become better students.

[Image: L'Oasis d'Aboukir, Patrick Blanc's vertical garden in Paris]


Random acts of secret generosity

Kate Murphy writes: If you place an order at the Chick-fil-A drive-through off Highway 46 in New Braunfels, Tex., it’s not unusual for the driver of the car in front of you to pay for your meal in the time it takes you to holler into the intercom and pull around for pickup.

“The people ahead of you paid it forward,” the cashier will chirp as she passes your food through the window.

Confused, you look ahead at the car — it could be a mud-splashed monster truck, Mercedes or minivan — which at this point is turning onto the highway. The cashier giggles, you take your food and unless your heart is irreparably rotted from cynicism and snark, you feel touched.

You could chalk it up to Southern hospitality or small town charm. But it’s just as likely the preceding car will pick up your tab at a Dunkin’ Donuts drive-through in Detroit or a McDonald’s drive-through in Fargo, N.D. Drive-through generosity is happening across America and parts of Canada, sometimes resulting in unbroken chains of hundreds of cars paying in turn for the person behind them.

This is taking place at a time when the nation’s legislators can’t speak a civil word unless reading from Dr. Seuss. “We really don’t know why it’s happening but if I had to guess, I’d say there is just a lot of stuff going on in the country that people find discouraging,” said Mark Moraitakis, director of hospitality at Chick-fil-A, which is based in Atlanta. “Paying it forward is a way to counteract that.” [Continue reading…]
