Ed Yong writes: Evolution works on a strict energy budget. Each adaptation burns through a certain number of calories, and each individual can only acquire so many calories in the course of a day. You can’t have flapping wings and a huge body and venom and fast legs and a big brain. If you want to expand some departments, you need to make cuts in others. That’s why, for example, animals that reproduce faster tend to die earlier. They divert energy towards making new bodies, and away from maintaining their own.
But humans, on the face of it, are exceptional. Compared to other apes, we reproduce more often (or, at least, those of us in traditional societies do) and our babies are bigger when they’re born and we live longer. And, as if to show off, our brains are much larger, and these huge organs sap some 20 percent of our total energy.
“We tend to have our cake and eat it too,” says Herman Pontzer from Hunter College. “These traits that make us human are all energetically costly. And until now, we didn’t really understand how we were fueling them.” [Continue reading…]
Veronique Greenwood writes: The millimeter-long roundworm Caenorhabditis elegans has about 20,000 genes — and so do you. Of course, only the human in this comparison is capable of creating either a circulatory system or a sonnet, a state of affairs that made this genetic equivalence one of the most confusing insights to come out of the Human Genome Project. But there are ways of accounting for some of our complexity beyond the level of genes, and as one new study shows, they may matter far more than people have assumed.
For a long time, one thing seemed fairly solid in biologists’ minds: Each gene in the genome made one protein. The gene’s code was the recipe for one molecule that would go forth into the cell and do the work that needed doing, whether that was generating energy, disposing of waste, or any other necessary task. The idea, which dates to a 1941 paper by two geneticists who later won the Nobel Prize in medicine for their work, even has a pithy name: “one gene, one protein.”
Over the years, biologists realized that the rules weren’t quite that simple. Some genes, it turned out, were being used to make multiple products. In the process of going from gene to protein, the recipe was not always interpreted the same way. Some of the resulting proteins looked a little different from others. And sometimes those changes mattered a great deal. There is one gene, famous in certain biologists’ circles, whose two proteins do completely opposite things. One will force a cell to commit suicide, while the other will stop the process. And in one of the most extreme examples known to science, a single fruit fly gene provides the recipe for more than 38,000 different proteins.
But these are dramatic cases. It was never clear just how common it is for genes to make multiple proteins and how much those differences matter to the daily functioning of the cell. Many researchers have assumed that the proteins made by a given gene probably do not differ greatly in their duties. It’s a reasonable assumption — many small-scale tests of sibling proteins haven’t suggested that they should be wildly different.
It is still an assumption, however, and testing it is quite an endeavor. Researchers would have to take a technically tricky inventory of the proteins in a cell and run numerous tests to see what each one does. In a recent paper in Cell, however, researchers at the Dana-Farber Cancer Institute in Boston and their collaborators reveal the results of just such an effort. They found that in many cases, proteins made by a single gene are no more alike in their behavior than proteins made by completely different genes. Sibling proteins often act like strangers. It’s an insight that opens up an interesting new set of possibilities for thinking about how the cell — and the human body — functions. [Continue reading…]
Ed Yong writes: There are tens of trillions of bacteria in my gut and they are different from those in yours. Why?
This is a really basic question about the human microbiome and, rather vexingly, we still don’t have a good answer. Sure, we know some of the things that influence the roll call of species — diet and antibiotics, to name a few — but their relative importance is unclear and the list is far from complete. That bodes poorly for any attempt to work out whether these microbes are involved in diseases, and whether they can be tweaked to improve our health.
Two new studies have tried to address the problem. They’re the largest microbiome studies thus far published, looking at 1,135 Dutch adults and 1,106 Belgians respectively. Both looked at how hundreds of factors affect the microbiome, including age, height, weight, sleep, medical history, smoking, allergies, blood levels of various molecules, and a long list of foods. Both found dozens of factors that affect either the overall diversity of microbial species, or the abundance of particular ones. And encouragingly, their respective lists overlap considerably.
But here’s the important thing: Collectively, the factors they identified explain a tiny proportion of the variation between people’s microbiomes — 19 percent in the Dutch study, and just 8 percent in the Belgian. Which means we’re still largely in the dark about what makes my microbiome different from yours, let alone whether one is healthier than the other. [Continue reading…]
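The "variation explained" figures above come from models that relate microbiome composition to measured host factors and ask what fraction of between-person differences those factors account for. A minimal sketch of that idea, using made-up data and ordinary least squares (the variable names, effect sizes, and number of factors are all hypothetical, chosen only to mimic the studies' headline finding that most variation goes unexplained):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 1,000 people, 5 measured host factors
# (think age, weight, diet scores), and a microbiome diversity score.
n_people, n_factors = 1000, 5
factors = rng.normal(size=(n_people, n_factors))

# Give the factors only a small influence on diversity; most of the
# variation is noise the model cannot see, as in the two studies.
true_effects = np.array([0.3, 0.2, 0.1, 0.05, 0.05])
diversity = factors @ true_effects + rng.normal(scale=1.0, size=n_people)

# Ordinary least squares fit, with an intercept column.
X = np.column_stack([np.ones(n_people), factors])
coef, *_ = np.linalg.lstsq(X, diversity, rcond=None)
predicted = X @ coef

# R^2: the fraction of variance in diversity the factors explain.
ss_res = np.sum((diversity - predicted) ** 2)
ss_tot = np.sum((diversity - diversity.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
print(f"variance explained: {r_squared:.0%}")
```

With these made-up numbers the model recovers only a small R², roughly in the range the Dutch and Belgian studies report; everything else is, from the model's point of view, unexplained.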
In case you’ve forgotten the section on the food web from high school biology, here’s a quick refresher.
Plants form the base of every food chain in the food web (also called the food cycle). Plants use available sunlight to convert water from the soil and carbon dioxide from the air into glucose, which gives them the energy they need to live. Unlike plants, animals can’t synthesize their own food. They survive by eating plants or other animals.
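The conversion described above is the familiar net photosynthesis reaction, which can be written as:

```latex
6\,\mathrm{CO_2} + 6\,\mathrm{H_2O}
\;\xrightarrow{\text{light}}\;
\mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}
```

Six molecules of carbon dioxide and six of water, driven by light energy, yield one molecule of glucose and six of oxygen.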
Clearly, animals eat plants. What’s less obvious is that plants also eat animals. They thrive on them, in fact (just Google “fish emulsion”). In my new book, “A Critique of the Moral Defense of Vegetarianism,” I call it the transitivity of eating. And I argue that this means one can’t be a vegetarian.
Becca Cudmore writes: In his Oscar acceptance speech, Leonardo DiCaprio said, “Making The Revenant was about man’s relationship to the natural world.” Perhaps the film’s most gripping illustration of this came in the scene where a grizzly bear nearly mauls DiCaprio’s character, an American fur trapper, to death. To be eaten by a predator, after all, may be the most apt display of man’s vulnerable state in nature. Onstage, DiCaprio evoked that vulnerable state, and made a forceful plea for global climate change action.
It turns out this isn’t the first time a near-fatal mauling has emboldened an environmentalist’s perspective. In 1985, the late philosopher Val Plumwood was nearly eaten by a saltwater crocodile. The harrowing experience inspired her to begin writing The Eye of the Crocodile, a series of essays posthumously published in 2012. In its first and most riveting piece, “Being Prey,” she explains how her critique of anthropocentrism — the idea that humans stand apart from nature — became palpable.
Plumwood was paddling through Australia’s Kakadu National Park in a 14-foot canoe in search of an Aboriginal rock art site. The hours passed, rain mounted, and she found herself deep in a channel surrounded by steep mud banks and snags. When a sandy bar caused her to stop completely, she stepped out of the canoe and recalled how park owners had warned her of crocodiles hunting at the water’s edge. She paddled back into the main current and, rounding a bend, “saw in midstream what looked like a floating stick.” As the current moved Plumwood farther forward, “the stick developed eyes.” As the animal struck the canoe, she instinctively leapt toward the bank, into the lower branches of a paperbark tree. “But before my foot even tripped the first branch, I had a blurred, incredulous vision of great toothed jaws bursting from the water,” she writes. [Continue reading…]
When we’re born, our lungs are thought to be sterile. But from the moment we take our first breath, our pristine lungs are exposed to all the bugs that are in the air. It has become clear in the last 10 years that the lungs rapidly acquire a population of many different microorganisms (mostly bacteria and viruses) that colonise the lungs and remain with us for the rest of our lives. This population of bugs is called the lung microbiome.
We now know more about the lung microbiome thanks to genetics. In the past, identifying the types of bugs present in the lungs depended on being able to grow them in a laboratory, and for many types of bug this was difficult. The big change that happened recently is our ability to recognise both the different bug species, and their relative abundance, by using DNA sequencing. This can be done either from a sample taken from the lungs or from sputum (the mucus we cough up when we have an infection).
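At its simplest, the "relative abundance" measure described here is a matter of counting: each sequencing read is assigned to a species, and the per-species counts are normalised into fractions of the whole sample. A minimal sketch with made-up read assignments (the species names and counts are purely illustrative, though the genera chosen are ones commonly reported in airway samples):

```python
from collections import Counter

# Hypothetical species assignments for 100 sequencing reads
# from a single sputum sample.
reads = (
    ["Streptococcus"] * 50
    + ["Prevotella"] * 30
    + ["Veillonella"] * 15
    + ["Haemophilus"] * 5
)

counts = Counter(reads)
total = sum(counts.values())

# Relative abundance: each species' share of all classified reads.
abundance = {species: n / total for species, n in counts.items()}

for species, frac in sorted(abundance.items(), key=lambda kv: -kv[1]):
    print(f"{species:14s} {frac:.0%}")
```

Real pipelines add error correction, reference databases, and statistical normalisation on top of this, but the output has the same shape: a list of taxa and their fractional shares of the sample.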
Is the lung microbiome a good or a bad thing?
We all know that bacteria in the lungs can be harmful. When harmful bacteria multiply, they cause pneumonia, which, despite the existence of antibiotics, can still be deadly. However, it seems that the lung microbiome usually exists in a balanced state, such that harmful types of bugs do not increase in number sufficiently to cause pneumonia. In fact, it’s possible that the very presence of such a diverse range of bugs in the lungs is one of the reasons it’s quite difficult for harmful bugs to multiply and cause disease.
Ken Ilgunas writes: A couple of years ago, I trespassed across America. I’d set out to hike the proposed route of the Keystone XL pipeline, which had been planned to stretch more than a thousand miles across the Great Plains, from Alberta, Canada, to the Gulf Coast. To walk the pipe’s route, roads wouldn’t do. I’d have to cross fields, hop barbed-wire fences and camp in cow pastures — much of it on private property.
I’d figured that walking across the heartland would probably be unlawful, unprecedented and a little bit crazy. We Americans, after all, are forbidden from entering most of our private lands. But in some European countries, walking almost wherever you want is not only ordinary but perfectly acceptable.
In Sweden, they call it “allemansrätt.” In Finland, it’s “jokamiehenoikeus.” In Scotland, it’s “the right to roam.” Germany allows walking through privately owned forests, unused meadows and fallow fields. In 2000, England and Wales passed the Countryside and Rights of Way Act, which gave people access to “mountain, moor, heath or down.” [Continue reading…]
Gene Tracy writes: The flow of time is certainly one of the most immediate aspects of our waking experience. It is essential to how we see ourselves and to how we think we should live our lives. Our memories help fix who we are; other thoughts reach forward to what we might become. Surely our modern scientific sense of time, as it grows ever more sophisticated, should provide meaningful insights here.
Yet today’s physicists rarely debate what time is and why we experience it the way we do, remembering the past but never the future. Instead, researchers build ever-more accurate clocks. The current record-holder, at the Joint Institute for Laboratory Astrophysics in Colorado, measures the vibration of strontium atoms; it is accurate to 1 second in 15 billion years, roughly the entire age of the known universe. Impressive, but it does not answer ‘What is time?’
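The quoted accuracy corresponds to a fractional uncertainty of roughly two parts in 10^18: one second of drift divided by 15 billion years expressed in seconds. A back-of-envelope check of that arithmetic:

```python
# One second of drift over 15 billion years, as a fractional error.
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # ≈ 3.156e7 seconds
age_span_s = 15e9 * SECONDS_PER_YEAR    # ≈ 4.7e17 seconds

fractional_error = 1.0 / age_span_s
print(f"fractional accuracy ≈ {fractional_error:.1e}")
```

That is the figure usually quoted for the JILA strontium lattice clock: a stability of order 10^-18, far beyond the cesium standards that define the SI second.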
To declare that question outside the pale of physical theory doesn’t make it meaningless. The flow of time could still be real as part of our internal experience, just real in a different way from a proton or a galaxy. Is our experience of time’s flow akin to watching a live play, where things occur in the moment but not before or after, a flickering in and out of existence around the ‘now’? Or, is it like watching a movie, where all eternity is already in the can, and we are watching a discrete sequence of static images, fooled by our limited perceptual apparatus into thinking the action flows smoothly?
The Newtonian and Einsteinian world theories offer little guidance. They are both eternalised ‘block’ universes, in which time is a dimension not unlike space, so everything exists all at once. Einstein’s equations allow different observers to disagree about the duration of time intervals, but the spacetime continuum itself, so beloved of Star Trek’s Mr Spock, is an invariant stage upon which the drama of the world takes place. In quantum mechanics, as in Newton’s mechanics and Einstein’s relativistic theories, the laws of physics that govern the microscopic world look the same going forward or backward in time. Even the innovative speculations of theorists such as Sean Carroll at Caltech in Pasadena – who conceives of time as an emergent phenomenon that arises out of a more primordial, timeless state – concern themselves more with what time does than what time feels like. Time’s flow appears nowhere in current theories of physics. [Continue reading…]
It’s a stereotype, but many of us have made the assumption that scientists are a bit rigid and less artistic than others. Artists, on the other hand, are often seen as being less rational than the rest of us. Sometimes described as the left side of the brain versus the right side – or simply logical thinking versus artistic creativity – the two are often seen as polar opposites.
Neuroscience has already shown that everyone uses both sides of the brain when performing any task. And while certain patterns of brain activity have sometimes been linked to artistic or logical thinking, it doesn’t really explain who is good at what – and why. That’s because the exact interplay of nature and nurture is notoriously difficult to tease out. But if we put the brain aside for a while and just focus on documented ability, is there any evidence to support the logic versus art stereotype?
Psychological research has approached this question by distinguishing between two styles of thinking: convergent and divergent. The emphasis in convergent thinking is on analytical and deductive reasoning, such as that measured in IQ tests. Divergent thinking, however, is more spontaneous and free-flowing. It focuses on novelty and is measured by tasks requiring us to generate multiple solutions for a problem. An example may be thinking of new, innovative uses for familiar objects.
Studies conducted during the 1960s suggested that convergent thinkers were more likely to be good at science subjects at school. Divergent thinking was shown to be more common in the arts and humanities.
However, we are increasingly learning that convergent and divergent thinking styles need not be mutually exclusive. In 2011, researchers assessed 116 final-year UK arts and science undergraduates on measures of convergent and divergent thinking and creative problem solving. The study found no difference in ability between the arts and science groups on any of these measures. Another study reported no significant difference in measures of divergent thinking between arts, natural science and social science undergraduates. Both arts and natural sciences students, however, rated themselves as being more creative than social sciences students did.
Amanda Gefter writes: As we go about our daily lives, we tend to assume that our perceptions — sights, sounds, textures, tastes — are an accurate portrayal of the real world. Sure, when we stop and think about it — or when we find ourselves fooled by a perceptual illusion — we realize with a jolt that what we perceive is never the world directly, but rather our brain’s best guess at what that world is like, a kind of internal simulation of an external reality. Still, we bank on the fact that our simulation is a reasonably decent one. If it weren’t, wouldn’t evolution have weeded us out by now? The true reality might be forever beyond our reach, but surely our senses give us at least an inkling of what it’s really like.
Not so, says Donald D. Hoffman, a professor of cognitive science at the University of California, Irvine. Hoffman has spent the past three decades studying perception, artificial intelligence, evolutionary game theory and the brain, and his conclusion is a dramatic one: The world presented to us by our perceptions is nothing like reality. What’s more, he says, we have evolution itself to thank for this magnificent illusion, as it maximizes evolutionary fitness by driving truth to extinction.
Getting at questions about the nature of reality, and disentangling the observer from the observed, is an endeavor that straddles the boundaries of neuroscience and fundamental physics. On one side you’ll find researchers scratching their chins raw trying to understand how a three-pound lump of gray matter obeying nothing more than the ordinary laws of physics can give rise to first-person conscious experience. This is the aptly named “hard problem.” [Continue reading…]
Alison Gopnik writes: For 2,000 years, there was an intuitive, elegant, compelling picture of how the world worked. It was called “the ladder of nature.” In the canonical version, God was at the top, followed by angels, who were followed by humans. Then came the animals, starting with noble wild beasts and descending to domestic animals and insects. Human animals followed the scheme, too. Women ranked lower than men, and children were beneath them. The ladder of nature was a scientific picture, but it was also a moral and political one. It was only natural that creatures higher up would have dominion over those lower down.
Darwin’s theory of evolution by natural selection delivered a serious blow to this conception. Natural selection is a blind historical process, stripped of moral hierarchy. A cockroach is just as well adapted to its environment as I am to mine. In fact, the bug may be better adapted — cockroaches have been around a lot longer than humans have, and may well survive after we are gone. But the very word evolution can imply a progression — New Agers talk about becoming “more evolved” — and in the 19th century, it was still common to translate evolutionary ideas into ladder-of-nature terms.
Modern biological science has in principle rejected the ladder of nature. But the intuitive picture is still powerful. In particular, the idea that children and nonhuman animals are lesser beings has been surprisingly persistent. Even scientists often act as if children and animals are defective adult humans, defined by the abilities we have and they don’t. Neuroscientists, for example, sometimes compare brain-damaged adults to children and animals.
We always should have been suspicious of this picture, but now we have no excuse for continuing with it. In the past 30 years, research has explored the distinctive ways in which children as well as animals think, and the discoveries deal the coup de grâce to the ladder of nature. The primatologist Frans de Waal has been at the forefront of the animal research, and its most important public voice. In Are We Smart Enough to Know How Smart Animals Are?, he makes a passionate and convincing case for the sophistication of nonhuman minds. [Continue reading…]
Live Science reports: Auroras are produced when charged particles ejected from the sun and carried to Earth by the solar wind are guided by Earth’s magnetic field into the upper atmosphere, where they collide with atmospheric gases, triggering reactions that release light.
They are most commonly glimpsed on Earth at high latitudes, in the Northern and Southern hemispheres. While auroras are typically green, they can also appear violet, red, blue, white or pink, according to NASA.
To produce the auroras video, NASA partnered with media infrastructure experts at Harmonic, with whom they also launched a new UHD channel featuring 4K content — the first noncommercial UHD channel in North America — the agency said in a statement. [Continue reading…]
Antonia Malchik writes: The ranch my mother was born on was not built solely by her family’s labour. It relied on aquifers deep beneath the surface, on the health of soil on plains and hills beyond their borders, on hundreds – perhaps thousands – of years of care by the Blackfoot tribe whose land it should have remained, on weather over which they had no control, and on the sun, seeds, and a community who knew in their bones that nobody could do this alone. These things comprised an ecosystem that was vital to their survival, and the same holds true today. These are our shared natural resources, or what was once known as ‘the commons’.
We live on and in the commons, even if we don’t recognise it as such. Every time we take a breath, we’re drawing from the commons. Every time we walk down a road we’re using the commons. Every time we sit in the sunshine or shelter from the rain, listen to birdsong or shut our windows against the stench from a nearby oil refinery, we are engaging with the commons. But we have forgotten the critical role that the commons play in our existence. The commons make life possible. Beyond that, they make private property possible. When the commons become degraded or destroyed, enjoyment and use of private property become untenable. A Montana rancher could own ten thousand acres and still be dependent on the health of the commons. Neither a gated community nor a high-rise penthouse apartment can seal a human being off from the wider world that we all rely on. [Continue reading…]
Lee Vinsel & Andrew Russell write: Innovation is a dominant ideology of our era, embraced in America by Silicon Valley, Wall Street, and the Washington DC political elite. As the pursuit of innovation has inspired technologists and capitalists, it has also provoked critics who suspect that the peddlers of innovation radically overvalue innovation. What happens after innovation, they argue, is more important. Maintenance and repair, the building of infrastructures, the mundane labour that goes into sustaining functioning and efficient infrastructures, simply has more impact on people’s daily lives than the vast majority of technological innovations.
The fates of nations on opposing sides of the Iron Curtain illustrate the reasons innovation rose as a buzzword and organising concept. Over the course of the 20th century, open societies that celebrated diversity, novelty, and progress performed better than closed societies that defended uniformity and order.
In the late 1960s, in the face of the Vietnam War, environmental degradation, the Kennedy and King assassinations, and other social and technological disappointments, it grew more difficult for many to have faith in moral and social progress. In place of progress arose ‘innovation’, a smaller and morally neutral concept. Innovation provided a way to celebrate the accomplishments of a high-tech age without expecting too much from them in the way of moral and social improvement.
Before the dreams of the New Left had been dashed by massacres at My Lai and Altamont, economists had already turned to technology to explain the economic growth and high standards of living in capitalist democracies. Beginning in the late 1950s, the prominent economists Robert Solow and Kenneth Arrow found that traditional explanations – changes in education and capital, for example – could not account for significant portions of growth. They hypothesised that technological change was the hidden X factor. Their finding fit hand-in-glove with all of the technical marvels that had come out of the Second World War, the Cold War, the post-Sputnik craze for science and technology, and the post-war vision of material abundance. [Continue reading…]