Peter Brannen writes: At 5:30am I awoke to the sound of the diesel chug-chugging of a lone lobster boat carving into the glassy Atlantic. An audience of shrieking gulls hushed in the engine’s wake as it rumbled through the narrow strait that separates the United States from Canada. After the boat pushed out into the open ocean, the gulls resumed their gossip, and I began preparing for a day on the water, still groggy from a night spent over beers with a group of researchers. I had come to Lubec in Maine with a bizarre question: what was 9/11 like for whales?
I sleepwalked to the pier and helped pack a former Coast Guard patrol boat with boxes of underwater audio-visual equipment, as well as a crossbow built for daring, drive-by whale biopsies. A pod of 40 North Atlantic right whales had been spotted south of Nova Scotia the day before and, with only a few hundred of the animals left in existence, any such gathering meant a potential field research coup. ‘They even got a poop sample!’ one scientist excitedly told me. The boat roared to life and we slipped past postcard-ready lighthouses and crumbling, cedar-shingled herring smokehouses. Lisa Conger, a biologist at the US National Oceanic and Atmospheric Administration (NOAA), manned the wheel of our boat, dodging Canadian islands and fishing weirs. As the Bay of Fundy opened before us, a container ship lumbered by to our stern: a boxy, smoking juggernaut, as unstoppable as the tide.
‘After 9/11, we were the only ones out here,’ Conger said over the wind and waves. While this tucked-away corner of the Atlantic might seem far from the rattle of world affairs, the terrorist attacks on New York City and Washington DC of 11 September 2001 changed the marine world of the Bay of Fundy, too.
Conger leads the field team in Lubec for Susan Parks, a biology professor at Syracuse University. As a graduate student, Parks found that right whales were trying to adapt to a gradual crescendo of man-made noise in the oceans. In one study, she compared calls recorded off Martha’s Vineyard in 1956, and off Argentina in 1977, with those in the North Atlantic in 2000. Christopher Clark, her advisor, had recorded the Argentine whales and, when Parks first played back their calls, she thought there must be some sort of mistake.
‘It was older equipment – reel-to-reel tapes which I’d never used before – so I went to Chris to ask if I had the speed of the tape wrong because the whales sounded so much lower in frequency than the whales I had been working with.’
In fact, Parks discovered, modern North Atlantic right whales have shifted their calls up an entire octave over the past half century or so, in an attempt to be heard over the unending, and steadily growing, low-frequency drone of commercial shipping. Where right-whale song once carried 20 to 100 miles, today those calls travel only five miles before dissolving into the din. [Continue reading…]
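The scale of that loss is easier to see with a back-of-the-envelope sonar calculation. The sketch below is not Parks’s analysis: the function name, the 15 dB-per-decade spreading law, and the source and noise levels are all invented for illustration, chosen only so that the answers land near the article’s figures. It shows how a modest rise in background noise collapses the distance over which a call can be heard.

```python
def comm_range_miles(source_db, noise_db, detection_threshold_db=10.0,
                     spreading_db_per_decade=15.0):
    """Crude passive-sonar estimate: a call stays audible while
    source_db - spreading * log10(range_m) >= noise_db + detection_threshold_db.
    Uses a 1 m reference distance and ignores absorption, which is small
    at the low frequencies right whales use."""
    excess_db = source_db - noise_db - detection_threshold_db
    range_m = 10 ** (excess_db / spreading_db_per_decade)
    return range_m / 1609.34  # metres to miles

# Illustrative levels (dB re 1 uPa), chosen for the example rather than
# taken from Parks's measurements:
print(f"quiet ocean:        ~{comm_range_miles(172, 87):.0f} miles")
print(f"busy shipping lane: ~{comm_range_miles(172, 104):.0f} miles")
```

Under these assumptions a 17 dB rise in the noise floor shrinks the audible range from roughly 60 miles to about 5.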
The risks that GMOs may pose to the global ecosystem
Mark Spitznagel and Nassim Nicholas Taleb, who both anticipated the failure of the financial system in 2007, see eerie parallels in the reasoning being used by those who believed in stability then and those who insist now that there are no significant risks involved in the promotion of genetically modified organisms (GMOs).
Spitznagel and Taleb write: First, there has been a tendency to label anyone who dislikes G.M.O.s as anti-science — and put them in the anti-antibiotics, antivaccine, even Luddite category. There is, of course, nothing scientific about the comparison. Nor is the scholastic invocation of a “consensus” a valid scientific argument.
Interestingly, there are similarities between arguments that are pro-G.M.O. and snake oil, the latter having relied on a cosmetic definition of science. The charge of “therapeutic nihilism” was leveled at people who contested snake oil medicine at the turn of the 20th century. (At that time, anything with the appearance of sophistication was considered “progress.”)
Second, we are told that a modified tomato is not different from a naturally occurring tomato. That is wrong: The statistical mechanism by which a tomato was built by nature is bottom-up, by tinkering in small steps (as with the restaurant business, distinct from contagion-prone banks). In nature, errors stay confined and, critically, isolated.
Third, the technological salvation argument we faced in finance is also present with G.M.O.s, which are intended to “save children by providing them with vitamin-enriched rice.” The argument’s flaw is obvious: In a complex system, we do not know the causal chain, and it is better to solve a problem by the simplest method, and one that is unlikely to cause a bigger problem.
Fourth, by leading to monoculture — which is the same in finance, where all risks became systemic — G.M.O.s threaten more than they can potentially help. Ireland’s population was decimated by the effect of monoculture during the potato famine. Just consider that the same can happen at a planetary scale.
Fifth, and what is most worrisome, is that the risks of G.M.O.s are more severe than those of finance. They can lead to complex chains of unpredictable changes in the ecosystem, while the methods of risk management with G.M.O.s — unlike finance, where some effort was made — are not even primitive.
The G.M.O. experiment, carried out in real time and with our entire food and ecological system as its laboratory, is perhaps the greatest case of human hubris ever. It creates yet another systemic, “too big to fail” enterprise — but one for which no bailouts will be possible when it fails. [Continue reading…]
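The contrast Spitznagel and Taleb draw between confined, bottom-up errors and correlated, top-down ones can be made concrete with a toy simulation. This is a hypothetical sketch, not theirs; both scenarios assume the same 5 percent chance that any given crop variety harbours a fatal flaw, but in one a thousand plots grow independently tinkered varieties, and in the other every plot grows the same one.

```python
import random

random.seed(42)

def decentralized_loss(n_plots=1000, p_fail=0.05):
    """Each plot grows its own locally tinkered variety; failures are independent,
    so losses cluster near the average and never take out the whole harvest."""
    failures = sum(random.random() < p_fail for _ in range(n_plots))
    return failures / n_plots

def monoculture_loss(p_fail=0.05):
    """Every plot grows the same variety; a single hidden flaw ruins everything."""
    return 1.0 if random.random() < p_fail else 0.0

worst_decentralized = max(decentralized_loss() for _ in range(200))
worst_monoculture = max(monoculture_loss() for _ in range(200))
print(f"worst harvest lost, decentralized plots: {worst_decentralized:.0%}")
print(f"worst harvest lost, monoculture:         {worst_monoculture:.0%}")
```

With the same per-variety failure probability, the decentralized worst case typically sits only a few points above the 5 percent average, while the monoculture worst case is a total loss.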
Is mirror-touch synesthesia a superpower or a curse?
Erika Hayasaki writes: No one, it seemed, knew what the patient clutching the stuffed blue bunny was feeling. At 33, he looked like a bewildered boy, staring at the doctors who crowded into his room in Massachusetts General Hospital. Lumpy oyster-sized growths shrouded his face, the result of a genetic condition that causes benign tumors to develop on the skin, in the brain, and on organs, hindering the patient’s ability to walk, talk, and feel normally. He looked like he was grimacing in pain, but his mother explained that her son, Josh, did not have a clear threshold for pain or other sensations. If Josh felt any discomfort at all, he was nearly incapable of expressing it.
“Any numbness?” asked Joel Salinas, a soft-spoken doctor in the Harvard Neurology Residency Program, a red-tipped reflex hammer in his doctor’s coat pocket. “Like it feels funny?” Josh did not answer. Salinas pulled up a blanket, revealing Josh’s atrophied legs. He thumped Josh’s left leg with the reflex hammer. Again, Josh barely reacted. But Salinas felt something: The thump against Josh’s left knee registered on Salinas’s own left knee as a tingly tap. Not just a thought of what the thump might feel like, but a distinct physical sensation.
That’s because Salinas himself has a rare medical condition, one that stands in marked contrast to his patients’: While Josh appeared unresponsive even to his own sensations, Salinas is peculiarly attuned to the sensations of others. If he sees someone slapped across the cheek, Salinas feels a hint of the slap against his own cheek. A pinch on a stranger’s right arm might become a tickle on his own. “If a person is touched, I feel it, and then I recognize that it’s touch,” Salinas says.
The condition is called mirror-touch synesthesia, and it has aroused significant interest among neuroscientists in recent years because it appears to be an extreme form of a basic human trait. In all of us, mirror neurons in the premotor cortex and other areas of the brain activate when we watch someone else’s behaviors and actions. Our brains map the regions of the body where we see someone else caressed, jabbed, or whacked, and they mimic just a shade of that feeling on the same spots on our own bodies. For mirror-touch synesthetes like Salinas, that mental simulacrum is so strong that it crosses a threshold into near-tactile sensation, sometimes indistinguishable from one’s own. Neuroscientists regard the condition as a state of “heightened empathic ability.” [Continue reading…]
Is consciousness an engineering problem?
Michael Graziano writes: The brain is a machine: a device that processes information. That’s according to the last 100 years of neuroscience. And yet, somehow, it also has a subjective experience of at least some of that information. Whether we’re talking about the thoughts and memories swirling around on the inside, or awareness of the stuff entering through the senses, somehow the brain experiences its own data. It has consciousness. How can that be?
That question has been called the ‘hard problem’ of consciousness, where ‘hard’ is a euphemism for ‘impossible’. For decades, it was a disreputable topic among scientists: if you can’t study it or understand it or engineer it, then it isn’t science. On that view, neuroscientists should stick to the mechanics of how information is processed in the brain, not the spooky feeling that comes along with the information. And yet, one can’t deny that the phenomenon exists. What exactly is this consciousness stuff?
Here’s a more pointed way to pose the question: can we build it? Artificial intelligence is growing more intelligent every year, but we’ve never given our machines consciousness. People once thought that if you made a computer complicated enough it would just sort of ‘wake up’ on its own. But that hasn’t panned out (so far as anyone knows). Apparently, the vital spark has to be deliberately designed into the machine. And so the race is on to figure out what exactly consciousness is and how to build it.
I’ve made my own entry into that race, a framework for understanding consciousness called the Attention Schema theory. The theory suggests that consciousness is no bizarre byproduct – it’s a tool for regulating information in the brain. And it’s not as mysterious as most people think. As ambitious as it sounds, I believe we’re close to understanding consciousness well enough to build it.
In this article I’ll conduct a thought experiment. Let’s see if we can construct an artificial brain, piece by hypothetical piece, and make it conscious. The task could be slow and each step might seem incremental, but with a systematic approach we could find a path that engineers can follow. [Continue reading…]
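To make the flavour of that thought experiment concrete, here is a deliberately toy sketch. It is not Graziano’s Attention Schema theory or any published model, and the class and field names are invented for illustration; it only gestures at the general idea he describes: a system that attends to signals while maintaining a coarse, simplified model of its own attention, and whose ‘introspective report’ is read off that simplified model rather than off the detailed machinery.

```python
from dataclasses import dataclass, field

@dataclass
class ToyAgent:
    """Toy illustration only: 'attention' holds the detailed state,
    'schema' holds a lossy self-model of that state."""
    attention: dict = field(default_factory=dict)
    schema: dict = field(default_factory=dict)

    def attend(self, signals: dict) -> None:
        # Detailed process: select the strongest incoming signal.
        target = max(signals, key=signals.get)
        self.attention = {"target": target, "weight": signals[target]}
        # Coarse self-model: a simplified description of what just happened.
        self.schema = {"attending_to": target,
                       "intensity": "strong" if signals[target] > 0.5 else "weak"}

    def report(self) -> str:
        # The report draws on the schema, not on the detailed attention state.
        return (f"I am aware of {self.schema['attending_to']} "
                f"({self.schema['intensity']})")

agent = ToyAgent()
agent.attend({"light": 0.3, "sound": 0.8, "touch": 0.4})
print(agent.report())   # -> I am aware of sound (strong)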
Does Earth have a ‘shadow biosphere’?
Sarah Scoles writes: In the late 1670s, the Dutch scientist Antonie van Leeuwenhoek looked through a microscope at a drop of water and found a whole world. It was tiny; it was squirmy; it was full of weird body types; and it lived, invisibly, all around us. Humans were supposed to be the centre and purpose of the world, and these microscale ‘animalcules’ seemed to have no effect – visible or otherwise – on our existence, so why were they here? Now, we know that those animalcules are microbes and they actually rule our world. They make us sick, keep us healthy, decompose our waste, feed the bottom of our food chain, and make our oxygen. Human ignorance of them had no bearing on their significance, just as gravity was important before an apple dropped on Isaac Newton’s head.
We could be poised on another such philosophical precipice, about to discover a second important world hiding amid our own: alien life on our own planet. Today, scientists seek extraterrestrial microbes in geysers of chilled water shooting from Enceladus and in the ocean sloshing beneath the ice crust of Europa. They search for clues that beings once skittered around the formerly wet rocks of Mars. Telescopes peer into the atmospheres of distant exoplanets, hunting for signs of life. But perhaps these efforts are too far afield. If multiple lines of life bubbled up on Earth and evolved separately from our ancient ancestors, we could discover alien biology without leaving this planet.
The modern-day descendants of these ‘aliens’ might still be here, squirming around with van Leeuwenhoek’s microbes. Scientists call these hypothetical hangers-on the ‘shadow biosphere’. If a shadow biosphere were ever found, it would provide evidence that life isn’t a once-in-a-universe statistical accident. If biology can happen twice on one planet, it must have happened countless times on countless other planets. But most of our scientific methods are ill-equipped to discover a shadow biosphere. And that’s a problem, says Carol Cleland, the originator of the term and its biggest proponent. [Continue reading…]
The case for shame
Julian Baggini writes: In California they call it #droughtshaming. People found using lots of water when the state is as dry as a cream cracker are facing trial by hashtag. The actor Tom Selleck is the latest target, accused of taking truckloads of water from a fire hydrant for his thirsty avocado crop.
Most people seem happy to harness the power of shame when the victims are the rich and powerful. But our attitudes to shame are actually much more ambivalent and contradictory. That’s why it was a stroke of genius to call Paul Abbott’s Channel Four series Shameless. We are at a point in our social history where the word is perfectly poised between condemnation and celebration.
Shame, like guilt, is something we often feel we are better off without. The shame culture is strongly associated with oppression. So-called honour killings are inflicted on people who bring shame to their families, often for nothing more than loving the “wrong” person or, most horrifically, for being the victims of rape. In the case of gay people, shame has given way to pride. To be shameless is to be who you are, without apology.
And yet in other contexts we are rather conflicted about the cry of shame. You can protest against honour killings one day, then name and shame tax-evading multinationals the next. When politicians are called shameless, there is no doubt that this is a very bad thing. Shame is like rain: whether it’s good or bad depends on where and how heavily it falls. [Continue reading…]
On the value of not knowing everything
James McWilliams writes: In January 2010, while driving from Chicago to Minneapolis, Sam McNerney played an audiobook and had an epiphany. The book was Jonah Lehrer’s How We Decide, and the epiphany was that consciousness could reside in the brain. The quest for an empirical understanding of consciousness has long preoccupied neurobiologists. But McNerney was no neurobiologist. He was a twenty-year-old philosophy major at Hamilton College. The standard course work — ancient, modern, and contemporary philosophy — enthralled him. But after this drive, after he listened to Lehrer, something changed. “I had to rethink everything I knew about everything,” McNerney said.
Lehrer’s publisher later withdrew How We Decide for inaccuracies. But McNerney was mentally galvanized for good reason. He had stumbled upon what philosophers call the “Hard Problem” — the quest to understand the enigma of the gap between mind and body. Intellectually speaking, what McNerney experienced was like diving for a penny in a pool and coming up with a gold nugget.
The philosopher Thomas Nagel drew popular attention to the Hard Problem four decades ago in an influential essay titled “What Is It Like to Be a Bat?” Frustrated with the “recent wave of reductionist euphoria,” Nagel challenged the reductive conception of mind — the idea that consciousness resides as a physical reality in the brain — by highlighting the radical subjectivity of experience. His main premise was that “an organism has conscious mental states if and only if there is something that it is like to be that organism.”
If that idea seems elusive, consider it this way: A bat has consciousness only if there is something that it is like for that bat to be a bat. Sam has consciousness only if there is something it is like for Sam to be Sam. You have consciousness only if there is something that it is like for you to be you (and you know that there is). And here’s the key to all this: Whatever that “like” happens to be, according to Nagel, it necessarily defies empirical verification. You can’t put your finger on it. It resists physical accountability.
McNerney returned to Hamilton intellectually turbocharged. This was an idea worth pondering. “It took hold of me,” he said. “It chose me — I know you hear that a lot, but that’s how it felt.” He arranged to do research in cognitive science as an independent study project with Russell Marcus, a trusted professor. Marcus let him loose to write what McNerney calls “a seventy-page hodgepodge of psychological research and philosophy and everything in between.” Marcus remembered the project more charitably, as “a huge, ambitious, wide-ranging, smart, and engaging paper.” Once McNerney settled into his research, Marcus added, “it was like he had gone into a phone booth and come out as a super-student.”
When he graduated in 2011, McNerney was proud. “I pulled it off,” he said about earning a degree in philosophy. Not that he had any hard answers to any big problems, much less the Hard Problem. Not that he had a job. All he knew was that he “wanted to become the best writer and thinker I could be.”
So, as one does, he moved to New York City.
McNerney is the kind of young scholar adored by the humanities. He’s inquisitive, open-minded, thrilled by the world of ideas, and touched with a tinge of old-school transcendentalism. What Emerson said of Thoreau — “he declined to give up his large ambition of knowledge and action for any narrow craft or profession” — is certainly true of McNerney. [Continue reading…]
The logic of effective altruism
Peter Singer writes: I met Matt Wage in 2009 when he took my Practical Ethics class at Princeton University. In the readings relating to global poverty and what we ought to be doing about it, he found an estimate of how much it costs to save the life of one of the millions of children who die each year from diseases that we can prevent or cure. This led him to calculate how many lives he could save, over his lifetime, assuming he earned an average income and donated 10 percent of it to a highly effective organization, such as one providing families with bed nets to prevent malaria, a major killer of children. He discovered that he could, with that level of donation, save about one hundred lives. He thought to himself, “Suppose you see a burning building, and you run through the flames and kick a door open, and let one hundred people out. That would be the greatest moment in your life. And I could do as much good as that!”
Two years later Wage graduated, receiving the Philosophy Department’s prize for the best senior thesis of the year. He was accepted by the University of Oxford for postgraduate study. Many students who major in philosophy dream of an opportunity like that — I know I did — but by then Wage had done a lot of thinking about what career would do the most good. Over many discussions with others, he came to a very different choice: he took a job on Wall Street, working for an arbitrage trading firm. On a higher income, he would be able to give much more, both as a percentage and in dollars, than 10 percent of a professor’s income. One year after graduating, Wage was donating a six-figure sum — roughly half his annual earnings — to highly effective charities. He was on the way to saving a hundred lives, not over his entire career but within the first year or two of his working life and every year thereafter.
Wage is part of an exciting new movement: effective altruism. At universities from Oxford to Harvard and the University of Washington, from Bayreuth in Germany to Brisbane in Australia, effective altruism organizations are forming. Effective altruists are engaging in lively discussions on social media and websites, and their ideas are being examined in the New York Times, the Washington Post, and even the Wall Street Journal. Philosophy, and more specifically practical ethics, has played an important role in effective altruism’s development, and effective altruism shows that philosophy is returning to its Socratic role of challenging our ideas about what it is to live an ethical life. In doing so, philosophy has demonstrated its ability to transform, sometimes quite dramatically, the lives of those who study it. Moreover, it is a transformation that, I believe, should be welcomed because it makes the world a better place. [Continue reading…]
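The arithmetic behind Wage’s estimate is easy to reproduce. The sketch below uses stand-in figures rather than numbers from Singer’s text: the function name is invented, and the assumed salary, donation rate, career length, and cost per life saved (a few thousand dollars, roughly the range charity evaluators have cited for bed-net programs) are illustrative assumptions.

```python
def lives_saved(annual_income, donation_rate, years_working, cost_per_life):
    """Expected lives saved over a career of steady giving to one effective charity."""
    total_donated = annual_income * donation_rate * years_working
    return total_donated / cost_per_life

# Stand-in assumptions, not figures from the book:
modest_giver = lives_saved(annual_income=80_000, donation_rate=0.10,
                           years_working=40, cost_per_life=3_500)
high_earner = lives_saved(annual_income=400_000, donation_rate=0.50,
                          years_working=40, cost_per_life=3_500)
print(f"10% of an average professional income over 40 years: ~{modest_giver:.0f} lives")
print(f"half of a large Wall Street income over 40 years:    ~{high_earner:.0f} lives")
```

Under these assumptions the modest giver saves on the order of a hundred lives over a career, while the high-earning donor reaches that total within the first year or two, which is the comparison Singer describes.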
The tiny engines of life
Tim Flannery writes: In 1609 Galileo Galilei turned his gaze, magnified twentyfold by lenses of Dutch design, toward the heavens, touching off a revolution in human thought. A decade later those same lenses delivered the possibility of a second revolution, when Galileo discovered that by inverting their order he could magnify the very small. For the first time in human history, it lay in our power to see the building blocks of bodies, the causes of diseases, and the mechanism of reproduction. Yet according to Paul Falkowski’s Life’s Engines:
Galileo did not seem to have much interest in what he saw with his inverted telescope. He appears to have made little attempt to understand, let alone interpret, the smallest objects he could observe.
Bewitched by the moons of Saturn and their challenge to the geocentric model of the universe, Galileo ignored the possibility that the magnified fleas he drew might have anything to do with the plague then ravaging Italy. And so for three centuries more, one of the cruellest of human afflictions would rage on, misunderstood and thus unpreventable, taking the lives of countless millions.
Perhaps it’s fundamentally human both to be awed by the things we look up to and to pass over those we look down on. If so, it’s a tendency that has repeatedly frustrated human progress. Half a century after Galileo looked into his “inverted telescope,” the pioneers of microscopy Antonie van Leeuwenhoek and Robert Hooke revealed that a Lilliputian universe existed all around and even inside us. But neither of them had students, and their researches ended in another false dawn for microscopy. It was not until the middle of the nineteenth century, when German manufacturers began producing superior instruments, that the discovery of the very small began to alter science in fundamental ways.
Today, driven by ongoing technological innovations, the exploration of the “nanoverse,” as the realm of the minuscule is often termed, continues to gather pace. One of the field’s greatest pioneers is Paul Falkowski, a biological oceanographer who has spent much of his scientific career working at the intersection of physics, chemistry, and biology. His book Life’s Engines: How Microbes Made Earth Habitable focuses on one of the most astonishing discoveries of the twentieth century — that our cells are composed of a series of highly sophisticated “little engines” or nanomachines that carry out life’s vital functions. It is a work full of surprises, arguing for example that all of life’s most important innovations were in existence by around 3.5 billion years ago, less than a billion years after Earth formed and at a time when our planet was largely hostile to living things. How such mind-bending complexity could have evolved at such an early stage, and in such a hostile environment, has forced a fundamental reconsideration of the origins of life itself. [Continue reading…]
On not being there: The data-driven body at work and at play
Rebecca Lemov writes: The protagonist of William Gibson’s 2014 science-fiction novel The Peripheral, Flynne Fisher, works remotely in a way that lends a new and fuller sense to that phrase. The novel features a double future: One set of characters inhabits the near future, ten to fifteen years from the present, while another lives seventy years on, after a breakdown of the climate and multiple other systems that has apocalyptically altered human and technological conditions around the world.
In that “further future,” only 20 percent of the Earth’s human population has survived. Each of these fortunate few is well off and able to live a life transformed by healing nanobots, somaticized e-mail (which delivers messages and calls to the roof of the user’s mouth), quantum computing, and clean energy. For their amusement and profit, certain “hobbyists” in this future have the Borgesian option of cultivating an alternative path in history — it’s called “opening up a stub” — and mining it for information as well as labor.
Flynne, the remote worker, lives on one of those paths. A young woman from the American Southeast, possibly Appalachia or the Ozarks, she favors cutoff jeans and resides in a trailer, eking out a living as a for-hire sub playing video games for wealthy aficionados. Recruited by a mysterious entity that is beta-testing drones that are doing “security” in a murky skyscraper in an unnamed city, she thinks at first that she has been taken on to play a kind of video game in simulated reality. As it turns out, she has been employed to work in the future as an “information flow” — low-wage work, though the pay translates to a very high level of remuneration in the place and time in which she lives.
What is of particular interest is the fate of Flynne’s body. Before she goes to work she must tend to its basic needs (nutrition and elimination), because during her shift it will effectively be “vacant.” Lying on a bed with a special data-transmitting helmet attached to her head, she will be elsewhere, inhabiting an ambulatory robot carapace — a “peripheral” — built out of bio-flesh that can receive her consciousness.
Bodies in this data-driven economic backwater of a future world economy are abandoned for long stretches of time — disposable, cheapened, eerily vacant in the temporary absence of “someone at the helm.” Meanwhile, fleets of built bodies, grown from human DNA, await habitation.
Alex Rivera explores similar territory in his Mexican sci-fi film The Sleep Dealer (2008), set in a future world after a wall erected on the US–Mexican border has successfully blocked migrants from entering the United States. Digital networks allow people to connect to strangers all over the world, fostering fantasies of physical and emotional connection. At the same time, low-income would-be migrant workers in Tijuana and elsewhere can opt to do remote work by controlling robots building a skyscraper in a faraway city, locking their bodies into devices that transmit their labor to the site. In tank-like warehouses, lined up in rows of stalls, they “jack in” by connecting data-transmitting cables to nodes implanted in their arms and backs. Their bodies are in Mexico, but their work is in New York or San Francisco, and while they are plugged in and wearing their remote-viewing spectacles, their limbs move like the appendages of ghostly underwater creatures. Their life force drained by the taxing labor, these “sleep dealers” end up as human discards.
What is surprising about these sci-fi conceits, from “transitioning” in The Peripheral to “jacking in” in The Sleep Dealer, is how familiar they seem, or at least how closely they reflect certain aspects of contemporary reality. Almost daily, we encounter people who are there but not there, flickering in and out of what we think of as presence. A growing body of research explores the question of how users interact with their gadgets and media outlets, and how in turn these interactions transform social relationships. The defining feature of this heavily mediated reality is our presence “elsewhere,” a removal of at least part of our conscious awareness from wherever our bodies happen to be. [Continue reading…]
Each of us is, genetically, more microbial than human
The New York Times reports: Since 2007, when scientists announced plans for a Human Microbiome Project to catalog the micro-organisms living in our body, the profound appreciation for the influence of such organisms has grown rapidly with each passing year. Bacteria in the gut produce vitamins and break down our food; their presence or absence has been linked to obesity, inflammatory bowel disease and the toxic side effects of prescription drugs. Biologists now believe that much of what makes us human depends on microbial activity. The two million unique bacterial genes found in each human microbiome can make the 23,000 genes in our cells seem paltry, almost negligible, by comparison. “It has enormous implications for the sense of self,” Tom Insel, the director of the National Institute of Mental Health, told me. “We are, at least from the standpoint of DNA, more microbial than human. That’s a phenomenal insight and one that we have to take seriously when we think about human development.”
Given the extent to which bacteria are now understood to influence human physiology, it is hardly surprising that scientists have turned their attention to how bacteria might affect the brain. Micro-organisms in our gut secrete a profound number of chemicals, and researchers like [Mark] Lyte have found that among those chemicals are the same substances used by our neurons to communicate and regulate mood, like dopamine, serotonin and gamma-aminobutyric acid (GABA). [Continue reading…]
Why the modern world is bad for your brain
Daniel J Levitin writes: Our brains are busier than ever before. We’re assaulted with facts, pseudo facts, jibber-jabber, and rumour, all posing as information. Trying to figure out what you need to know and what you can ignore is exhausting. At the same time, we are all doing more. Thirty years ago, travel agents made our airline and rail reservations, salespeople helped us find what we were looking for in shops, and professional typists or secretaries helped busy people with their correspondence. Now we do most of those things ourselves. We are doing the jobs of 10 different people while still trying to keep up with our lives, our children and parents, our friends, our careers, our hobbies, and our favourite TV shows.
Our smartphones have become Swiss army knife–like appliances that include a dictionary, calculator, web browser, email, Game Boy, appointment calendar, voice recorder, guitar tuner, weather forecaster, GPS, texter, tweeter, Facebook updater, and flashlight. They’re more powerful and do more things than the most advanced computer at IBM corporate headquarters 30 years ago. And we use them all the time, part of a 21st-century mania for cramming everything we do into every single spare moment of downtime. We text while we’re walking across the street, catch up on email while standing in a queue – and while having lunch with friends, we surreptitiously check to see what our other friends are doing. At the kitchen counter, cosy and secure in our domicile, we write our shopping lists on smartphones while we are listening to that wonderfully informative podcast on urban beekeeping.
But there’s a fly in the ointment. Although we think we’re doing several things at once, multitasking, this is a powerful and diabolical illusion. Earl Miller, a neuroscientist at MIT and one of the world experts on divided attention, says that our brains are “not wired to multitask well… When people think they’re multitasking, they’re actually just switching from one task to another very rapidly. And every time they do, there’s a cognitive cost in doing so.” So we’re not actually keeping a lot of balls in the air like an expert juggler; we’re more like a bad amateur plate spinner, frantically switching from one task to another, ignoring the one that is not right in front of us but worried it will come crashing down any minute. Even though we think we’re getting a lot done, ironically, multitasking makes us demonstrably less efficient. [Continue reading…]
How water, paradoxically, creates the land we walk on
Julia Rosen writes: It’s no secret that water shapes the world around us. Rivers etch great canyons into the Earth’s surface, while glaciers reorganize the topography of entire mountain ranges. But water’s influence on the landscape runs much deeper than this: Water explains why we have land in the first place.
You might think of land as the bits of crust that just happen to jut up above sea level, but that’s mostly not the case. Earth’s continents rise above the seas in part because they are actually made of different stuff than the seafloor. Oceanic crust consists of dense, black basalt, which rides low in the mantle — like a wet log in a river — and eventually sinks back into Earth’s interior. But continental crust floats like a cork, thanks to one special rock: granite. If we didn’t have granite to lift the continents up, a vast ocean would cover our entire planet, with barely any land to speak of.
Gritty, gray granite and its rocky relatives dominate the continents. Granite forms the sheer walls of Yosemite Valley and the chiseled faces of Mount Rushmore (and also gleams from many a kitchen counter and shower stall). If you don’t see granite at the surface, you can bet it’s hiding just a few kilometers below your feet, unless you’re cruising over the middle of the ocean in a boat or plane. But what’s special about granite is that it’s relatively buoyant, for a rock—and that to make it, you need water. [Continue reading…]
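The cork-and-log image is Archimedes applied to crust floating on mantle. The sketch below is a minimal Airy-isostasy illustration; the function name, thicknesses, and densities are textbook-style assumptions rather than values from Rosen’s article, and it ignores the buoyant load of ocean water, so it gives only an order-of-magnitude picture of why granitic continents stand kilometers above the basaltic seafloor.

```python
def freeboard_km(thickness_km, crust_density, mantle_density=3300.0):
    """Iceberg-style Airy isostasy: how high the top of a floating crustal
    column rides above the level of bare mantle (densities in kg/m^3)."""
    return thickness_km * (1.0 - crust_density / mantle_density)

# Illustrative round numbers, not values from the article:
continent = freeboard_km(thickness_km=35, crust_density=2700)  # granitic crust
seafloor = freeboard_km(thickness_km=7, crust_density=2900)    # basaltic crust
print(f"continental surface stands ~{continent - seafloor:.1f} km above the seafloor")
```

The real-world gap between average continental elevation and average seafloor depth is roughly 4 to 5 kilometers, so even this crude balance captures the point: thicker, lighter granitic crust floats high.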
The art of attention: John Berger at 88
Philip Maughan writes: In 1967, while working with the Swiss photographer Jean Mohr on A Fortunate Man, a book about a country GP serving a deprived community in the Forest of Dean, Gloucestershire, John Berger began to reconsider what the role of a writer should be. “He does more than treat [his patients] when they are ill,” Berger wrote of John Sassall, a man whose proximity to suffering and poverty deeply affected him (Sassall later committed suicide). The rural doctor assumes a democratic function, in Berger’s eyes, one he describes in consciously literary terms. “He is the objective witness of their lives,” he says. “The clerk of their records.”
The next five years marked a transition in Berger’s life. By 1972, when the groundbreaking art series Ways of Seeing aired on BBC television, Berger had been living on the Continent for over a decade. He won the Booker Prize for his novel G. the same year, announcing to an astonished audience at the black-tie ceremony in London that he would divide his prize money between the Black Panther Party (he denounced Booker McConnell’s historic links with plantations and indentured labour in the Caribbean) and the funding of his next project with Mohr, A Seventh Man, recording the experiences of migrant workers across Europe.
This is the point at which, for some in England, Berger became a more distant figure. He moved from Switzerland to a remote village in the French Alps two years later. “He thinks and feels what the community incoherently knows,” Berger wrote of Sassall, the “fortunate man”. After time spent working on A Seventh Man, those words were just as applicable to the writer himself. It was Berger who had become a “clerk”, collecting stories from the voiceless and dispossessed – peasants, migrants, even animals – a self-effacing role he would continue to occupy for the next 43 years.
The life and work of John Berger represents a challenge. How best to describe the output of a writer whose bibliography, according to Wikipedia, contains ten “novels”, four “plays”, three collections of “poetry” and 33 books labelled “other”?
“A kind of vicarious autobiography and a history of our time as refracted through the prism of art,” is how the writer Geoff Dyer introduced a selection of Berger’s non-fiction in 2001, though the category doesn’t quite fit. “To separate fact and imagination, event and feeling, protagonist and narrator, is to stay on dry land and never put to sea,” Berger wrote in 1991 in a manifesto (of sorts) inspired by James Joyce’s Ulysses, a book he first read, in French, at the age of 14. [Continue reading…]
Everything we thought we knew about the genome is turning out to be wrong
Claire Ainsworth writes: Ask me what a genome is, and I, like many science writers, might mutter about it being the genetic blueprint of a living creature. But then I’ll confess that “blueprint” is a lousy metaphor since it implies that the genome is two-dimensional, prescriptive and unresponsive.
Now two new books about the genome show the limitation of that metaphor for something so intricate, complex, multilayered and dynamic. Both underscore the risks of taking metaphors too literally, not just in undermining popular understanding of science, but also in trammelling scientific enquiry. They are for anyone interested in how new discoveries and controversies will transform our understanding of biology and of ourselves.
John Parrington is an associate professor in molecular and cellular pharmacology at the University of Oxford. In The Deeper Genome, he provides an elegant, accessible account of the profound and unexpected complexities of the human genome, and shows how many ideas developed in the 20th century are being overturned.
Take DNA. It’s no simple linear code, but an intricately wound, 3D structure that coils and uncoils as its genes are read and spliced in myriad ways. Forget genes as discrete, protein-coding “beads on a string”: only a tiny fraction of the genome codes for proteins, and anyway, no one knows exactly what a gene is any more. [Continue reading…]
DNA deciphers roots of modern Europeans
Carl Zimmer writes: For centuries, archaeologists have reconstructed the early history of Europe by digging up ancient settlements and examining the items that their inhabitants left behind. More recently, researchers have been scrutinizing something even more revealing than pots, chariots and swords: DNA.
On Wednesday in the journal Nature, two teams of scientists — one based at the University of Copenhagen and one based at Harvard University — presented the largest studies to date of ancient European DNA, extracted from 170 skeletons found in countries from Spain to Russia. Both studies indicate that today’s Europeans descend from three groups who moved into Europe at different stages of history.
The first were hunter-gatherers who arrived in Europe some 45,000 years ago. Then came farmers who arrived from the Near East about 8,000 years ago.
Finally, a group of nomadic sheepherders from western Russia called the Yamnaya arrived about 4,500 years ago. The authors of the new studies also suggest that the Yamnaya language may have given rise to many of the languages spoken in Europe today. [Continue reading…]
Even atheists intuitively believe in a creator
Tom Jacobs writes: Since the discoveries of Darwin, evidence has gradually mounted refuting the notion that the natural world is the product of a deity or other outside designer. Yet this idea remains firmly lodged in the human brain.
Just how firmly is the subject of newly published research, which finds even self-proclaimed atheists instinctively think of natural phenomena as being purposefully created.
The findings “suggest that there is a deeply rooted natural tendency to view nature as designed,” writes a research team led by Elisa Järnfelt of Newman University. They also provide evidence that, in the researchers’ words, “religious non-belief is cognitively effortful.” [Continue reading…]
Direct connection discovered between the brain and the immune system
Science Daily reports: In a stunning discovery that overturns decades of textbook teaching, researchers at the University of Virginia School of Medicine have determined that the brain is directly connected to the immune system by vessels previously thought not to exist. That such vessels could have escaped detection when the lymphatic system has been so thoroughly mapped throughout the body is surprising on its own, but the true significance of the discovery lies in the effects it could have on the study and treatment of neurological diseases ranging from autism to Alzheimer’s disease to multiple sclerosis.
“Instead of asking, ‘How do we study the immune response of the brain?’ ‘Why do multiple sclerosis patients have the immune attacks?’ now we can approach this mechanistically. Because the brain is like every other tissue connected to the peripheral immune system through meningeal lymphatic vessels,” said Jonathan Kipnis, PhD, professor in the UVA Department of Neuroscience and director of UVA’s Center for Brain Immunology and Glia (BIG). “It changes entirely the way we perceive the neuro-immune interaction. We always perceived it before as something esoteric that can’t be studied. But now we can ask mechanistic questions.”
“We believe that for every neurological disease that has an immune component to it, these vessels may play a major role,” Kipnis said. “Hard to imagine that these vessels would not be involved in a [neurological] disease with an immune component.”
Kevin Lee, PhD, chairman of the UVA Department of Neuroscience, described his reaction to the discovery by Kipnis’ lab: “The first time these guys showed me the basic result, I just said one sentence: ‘They’ll have to change the textbooks.’ There has never been a lymphatic system for the central nervous system, and it was very clear from that first singular observation — and they’ve done many studies since then to bolster the finding — that it will fundamentally change the way people look at the central nervous system’s relationship with the immune system.” [Continue reading…]