Matthew Battles writes: In ancient Greece, writing arose among traders and artisans doing business in the markets with foreigners and visitors from other cities. Their alphabet emerged not in scribal colleges or the king’s halls, nor was it brought by conquerors, but instead came ashore in the freewheeling, acquisitive, materialistic atmosphere of the agora, the Greek marketplace that also birthed democracy and the public sphere.
The Phoenician letters, transformed by Greeks into the alphabet, share an origin with the Hebrew characters. They crossed the Aegean Sea with trade that flourished between the Greek peninsula and the Canaanite mainland in the ninth century BC. The first alphabetic inscriptions in Greek appear on goods—keepsake vases, containers for oil and olives. The likely earliest such inscription extant, the “Dipylon inscription,” is on a wine jug; it reads something like this: “Whichever dancer dances most fleetly, he shall get me [this vessel]” — a trophy cup. The so-called Cup of Nestor, a clay vessel dating from the eighth century BC, bears an inscription that begins “Nestor’s cup am I, good to drink from.” For the next couple of centuries, Greek letters are used mostly to inscribe dedications — indexing acquisition and ownership in a society where property was the basis of participation in the lettered public sphere.
This was a society of freeborn traders and artisans, a culture that prized beauty, expressiveness, and originality — the perfect environment for the kind of flourishing public space writing seems everywhere to wish to build. And yet the magisterium of writing grows slowly in ancient Greece. Centuries pass before the first texts appear. [Continue reading…]
Tim Maudlin writes: Ever since the 1920s, when Edwin Hubble discovered that distant galaxies are receding from one another, cosmologists have embraced a general theory of the history of the visible universe. In this view, the visible universe originated from an unimaginably compact and hot state. Prior to 1980, the standard Big Bang models had the universe expanding in size and cooling at a steady pace from the beginning of time until now. These models were adjusted to fit observed data by selecting initial conditions, but some cosmologists began to worry about how precise and special those initial conditions had to be.
For example, Big Bang models attribute an energy density — the amount of energy per cubic centimetre — to the initial state of the cosmos, as well as an initial rate of expansion of space itself. The subsequent evolution of the universe depends sensitively on the relation between this energy density and the rate of expansion. Pack the energy too densely and the universe will eventually recontract into a big crunch; spread it out too thin and the universe will expand forever, with the matter diluting so rapidly that stars and galaxies cannot form. Between these two extremes lies a highly specialised history in which the universe never recontracts and the rate of expansion eventually slows to zero. In the argot of cosmology, this special situation is called Ω = 1. Cosmological observation reveals that the value of Ω for the visible universe at present is quite near to 1. This is, by itself, a surprising finding, but what’s more, the original Big Bang models tell us that Ω = 1 is an unstable equilibrium point, like a marble perfectly balanced on an overturned bowl. If the marble happens to be exactly at the top it will stay there, but if it is displaced even slightly from the very top it will rapidly roll faster and faster away from that special state.
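To see where that instability comes from, here is a standard textbook sketch; the Friedmann equation and its symbols do not appear in Maudlin’s excerpt and are supplied only for illustration:

```latex
% Friedmann equation for scale factor a(t), Hubble rate H = \dot{a}/a,
% energy density \rho, and spatial curvature k:
%   H^2 = \frac{8\pi G}{3}\rho - \frac{k}{a^2}
% With \rho_{\mathrm{crit}} \equiv 3H^2/8\pi G and \Omega \equiv \rho/\rho_{\mathrm{crit}},
% dividing through by H^2 gives
\[
  \Omega(t) - 1 \;=\; \frac{k}{a^{2} H^{2}} .
\]
% In a matter-dominated universe H^2 \propto a^{-3}, so a^2 H^2 \propto a^{-1}
% and |\Omega - 1| grows in proportion to a: any tiny early departure from
% \Omega = 1 is amplified as space expands, the marble rolling off the
% overturned bowl.
```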
This is an example of cosmological fine-tuning. In order for the standard Big Bang model to yield a universe even vaguely like ours now, this particular initial condition had to be just right at the beginning. Some cosmologists balked at this idea. It might have been just luck that the Solar system formed and life evolved on Earth, but it seemed unacceptable for it to be just luck that the whole observable universe should have started so near the critical energy density required for there to be cosmic structure at all. [Continue reading…]
Andrew Grant writes: In T.H. White’s fantasy novel The Once and Future King, Merlyn the magician suffers from a rare and incurable condition: He experiences time in reverse. He knows what will happen, he laments, but not what has happened. “I have to live backwards from in front, while surrounded by a lot of people living forwards from behind,” he explains to a justifiably confused companion.
While Merlyn is fictional, the backward flow of time should not be. As the society of ants in White’s novel proclaimed, “everything not forbidden is compulsory,” and the laws of physics do not forbid time to run backward. Equations that determine the acceleration of a rocket or the momentum of a billiard ball all work just as well with time flowing backward as forward. Yet unlike Merlyn, we remember the past but not the future. We get older but never younger. There is a distinct arrow of time pointing in one direction.
For nearly 140 years, scientists have tried to rule out the backward flow of time by way of nature’s preference for disorder. Left alone, nature transforms the neat into the messy, a one-way progression that many physicists have used to define time’s direction. But if nature prefers disorder now, it always has. The challenge is figuring out why the universe started out so orderly — thereby allowing disorder to grow and time to march forward — when the early universe should have been messy. Despite many proposals, physicists have not been able to agree on a satisfying explanation. [Continue reading…]
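Grant’s “neat into messy” progression is easy to make concrete with a toy model. The sketch below is an Ehrenfest-style urn simulation with made-up parameters, not anything from the article; it shows the mixing entropy of a box of particles climbing from an ordered start and, in practice, never coming back down:

```python
import random
from math import log2

# Toy "gas in a box": N particles, each in the left or right half.
# Start fully ordered (all on the left) and let randomly chosen
# particles hop to the other side, as in the Ehrenfest urn model.

N = 1000
STEPS = 5001
left = N  # the "orderly" initial condition

def mixing_entropy(n_left: int, n: int) -> float:
    """Shannon entropy (in bits) of the left/right occupation fractions."""
    p = n_left / n
    if p in (0.0, 1.0):
        return 0.0
    return -(p * log2(p) + (1 - p) * log2(1 - p))

for step in range(STEPS):
    # a uniformly random particle hops: with probability left/N it was
    # on the left, so the left count goes down; otherwise it goes up
    if random.random() < left / N:
        left -= 1
    else:
        left += 1
    if step % 1000 == 0:
        print(f"step {step:5d}: entropy = {mixing_entropy(left, N):.3f} bits")

# Entropy climbs from 0 toward its maximum of 1 bit and hovers there:
# disorder grows, and for all practical purposes never reverses.
```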
Peter Brannen writes: At 5:30am I awoke to the sound of the diesel chug-chugging of a lone lobster boat carving into the glassy Atlantic. An audience of shrieking gulls hushed in the engine’s wake as it rumbled through the narrow strait that separates the United States from Canada. After the boat pushed out into the open ocean, the gulls resumed their gossip, and I began preparing for a day on the water, still groggy from the night before, after joining a group of researchers over beer. I had come to Lubec in Maine with a bizarre question: what was 9/11 like for whales?
I sleepwalked to the pier and helped pack a former Coast Guard patrol boat with boxes of underwater audio-visual equipment, as well as a crossbow built for daring, drive-by whale biopsies. A pod of 40 North Atlantic right whales had been spotted south of Nova Scotia the day before and, with only a few hundred of the animals left in existence, any such gathering meant a potential field research coup. ‘They even got a poop sample!’ one scientist excitedly told me. The boat roared to life and we slipped past postcard-ready lighthouses and crumbling, cedar-shingled herring smokehouses. Lisa Conger, a biologist at the US National Oceanic and Atmospheric Administration (NOAA), manned the wheel of our boat, dodging Canadian islands and fishing weirs. As the Bay of Fundy opened before us, a container ship lumbered by to our stern: a boxy, smoking juggernaut, as unstoppable as the tide.
‘After 9/11, we were the only ones out here,’ Conger said over the wind and waves. While this tucked-away corner of the Atlantic might seem far from the rattle of world affairs, the terrorist attacks on New York City and Washington DC of 11 September 2001 changed the marine world of the Bay of Fundy, too.
Conger leads the field team in Lubec for Susan Parks, a biology professor at Syracuse University. As a graduate student, Parks found that right whales were trying to adapt to a gradual crescendo of man-made noise in the oceans. In one study, she compared calls recorded off Martha’s Vineyard in 1956 and off Argentina in 1977 with those in the North Atlantic in 2000. Christopher Clark, her advisor, had recorded the Argentine whales and, when Parks first played back their calls, she thought there must be some sort of mistake.
‘It was older equipment – reel-to-reel tapes which I’d never used before – so I went to Chris to ask if I had the speed of the tape wrong because the whales sounded so much lower in frequency than the whales I had been working with.’
In fact, Parks discovered, modern North Atlantic right whales have shifted their calls up an entire octave over the past half century or so, in an attempt to be heard over the unending, and steadily growing, low-frequency drone of commercial shipping. Where right-whale song once carried 20 to 100 miles, today those calls travel only five miles before dissolving into the din. [Continue reading…]
Mark Spitznagel and Nassim Nicholas Taleb, who both anticipated the failure of the financial system in 2007, see eerie parallels in the reasoning being used by those who believed in stability then and those who insist now that there are no significant risks involved in the promotion of genetically modified organisms (GMOs).
Spitznagel and Taleb write: First, there has been a tendency to label anyone who dislikes G.M.O.s as anti-science — and put them in the anti-antibiotics, antivaccine, even Luddite category. There is, of course, nothing scientific about the comparison. Nor is the scholastic invocation of a “consensus” a valid scientific argument.
Interestingly, there are similarities between pro-G.M.O. arguments and the case once made for snake oil, the latter having relied on a cosmetic definition of science. The charge of “therapeutic nihilism” was leveled at people who contested snake oil medicine at the turn of the 20th century. (At that time, anything with the appearance of sophistication was considered “progress.”)
Second, we are told that a modified tomato is not different from a naturally occurring tomato. That is wrong: The statistical mechanism by which a tomato was built by nature is bottom-up, by tinkering in small steps (as with the restaurant business, distinct from contagion-prone banks). In nature, errors stay confined and, critically, isolated.
Third, the technological salvation argument we faced in finance is also present with G.M.O.s, which are intended to “save children by providing them with vitamin-enriched rice.” The argument’s flaw is obvious: In a complex system, we do not know the causal chain, and it is better to solve a problem by the simplest method, one that is unlikely to cause a bigger problem.
Fourth, by leading to monoculture — which is the same in finance, where all risks became systemic — G.M.O.s threaten more than they can potentially help. Ireland’s population was decimated by the effect of monoculture during the potato famine. Just consider that the same can happen at a planetary scale.
Fifth, and most worrisome, the risks of G.M.O.s are more severe than those of finance. They can lead to complex chains of unpredictable changes in the ecosystem, while the methods of risk management with G.M.O.s — unlike finance, where some effort was made — are not even primitive.
The G.M.O. experiment, carried out in real time and with our entire food and ecological system as its laboratory, is perhaps the greatest case of human hubris ever. It creates yet another systemic, “too big to fail” enterprise — but one for which no bailouts will be possible when it fails. [Continue reading…]
Erika Hayasaki writes: No one, it seemed, knew what the patient clutching the stuffed blue bunny was feeling. At 33, he looked like a bewildered boy, staring at the doctors who crowded into his room in Massachusetts General Hospital. Lumpy oyster-sized growths shrouded his face, the result of a genetic condition that causes benign tumors to develop on the skin, in the brain, and on organs, hindering the patient’s ability to walk, talk, and feel normally. He looked like he was grimacing in pain, but his mother explained that her son, Josh, did not have a clear threshold for pain or other sensations. If Josh felt any discomfort at all, he was nearly incapable of expressing it.
“Any numbness?” asked Joel Salinas, a soft-spoken doctor in the Harvard Neurology Residency Program, a red-tipped reflex hammer in his doctor’s coat pocket. “Like it feels funny?” Josh did not answer. Salinas pulled up a blanket, revealing Josh’s atrophied legs. He thumped Josh’s left leg with the reflex hammer. Again, Josh barely reacted. But Salinas felt something: The thump against Josh’s left knee registered on Salinas’s own left knee as a tingly tap. Not just a thought of what the thump might feel like, but a distinct physical sensation.
That’s because Salinas himself has a rare medical condition, one that stands in marked contrast to his patients’: While Josh appeared unresponsive even to his own sensations, Salinas is peculiarly attuned to the sensations of others. If he sees someone slapped across the cheek, Salinas feels a hint of the slap against his own cheek. A pinch on a stranger’s right arm might become a tickle on his own. “If a person is touched, I feel it, and then I recognize that it’s touch,” Salinas says.
The condition is called mirror-touch synesthesia, and it has aroused significant interest among neuroscientists in recent years because it appears to be an extreme form of a basic human trait. In all of us, mirror neurons in the premotor cortex and other areas of the brain activate when we watch someone else’s behaviors and actions. Our brains map the regions of the body where we see someone else caressed, jabbed, or whacked, and they mimic just a shade of that feeling on the same spots on our own bodies. For mirror-touch synesthetes like Salinas, that mental simulacrum is so strong that it crosses a threshold into near-tactile sensation, sometimes indistinguishable from one’s own. Neuroscientists regard the condition as a state of “heightened empathic ability.” [Continue reading…]
Michael Graziano writes: The brain is a machine: a device that processes information. That’s according to the last 100 years of neuroscience. And yet, somehow, it also has a subjective experience of at least some of that information. Whether we’re talking about the thoughts and memories swirling around on the inside, or awareness of the stuff entering through the senses, somehow the brain experiences its own data. It has consciousness. How can that be?
That question has been called the ‘hard problem’ of consciousness, where ‘hard’ is a euphemism for ‘impossible’. For decades, it was a disreputable topic among scientists: if you can’t study it or understand it or engineer it, then it isn’t science. On that view, neuroscientists should stick to the mechanics of how information is processed in the brain, not the spooky feeling that comes along with the information. And yet, one can’t deny that the phenomenon exists. What exactly is this consciousness stuff?
Here’s a more pointed way to pose the question: can we build it? Artificial intelligence is growing more intelligent every year, but we’ve never given our machines consciousness. People once thought that if you made a computer complicated enough it would just sort of ‘wake up’ on its own. But that hasn’t panned out (so far as anyone knows). Apparently, the vital spark has to be deliberately designed into the machine. And so the race is on to figure out what exactly consciousness is and how to build it.
I’ve made my own entry into that race, a framework for understanding consciousness called the Attention Schema theory. The theory suggests that consciousness is no bizarre byproduct – it’s a tool for regulating information in the brain. And it’s not as mysterious as most people think. As ambitious as it sounds, I believe we’re close to understanding consciousness well enough to build it.
In this article I’ll conduct a thought experiment. Let’s see if we can construct an artificial brain, piece by hypothetical piece, and make it conscious. The task could be slow and each step might seem incremental, but with a systematic approach we could find a path that engineers can follow. [Continue reading…]
Sarah Scoles writes: In the late 1670s, the Dutch scientist Antonie van Leeuwenhoek looked through a microscope at a drop of water and found a whole world. It was tiny; it was squirmy; it was full of weird body types; and it lived, invisibly, all around us. Humans were supposed to be the centre and purpose of the world, and these microscale ‘animalcules’ seemed to have no effect – visible or otherwise – on our existence, so why were they here? Now, we know that those animalcules are microbes and they actually rule our world. They make us sick, keep us healthy, decompose our waste, feed the bottom of our food chain, and make our oxygen. Human ignorance of them had no bearing on their significance, just as gravity was important before an apple dropped on Isaac Newton’s head.
We could be poised on another such philosophical precipice, about to discover a second important world hiding amid our own: alien life on our own planet. Today, scientists seek extraterrestrial microbes in geysers of chilled water shooting from Enceladus and in the ocean sloshing beneath the ice crust of Europa. They search for clues that beings once skittered around the formerly wet rocks of Mars. Telescopes peer into the atmospheres of distant exoplanets, hunting for signs of life. But perhaps these efforts are too far afield. If multiple lines of life bubbled up on Earth and evolved separately from our ancient ancestors, we could discover alien biology without leaving this planet.
The modern-day descendants of these ‘aliens’ might still be here, squirming around with van Leeuwenhoek’s microbes. Scientists call these hypothetical hangers-on the ‘shadow biosphere’. If a shadow biosphere were ever found, it would provide evidence that life isn’t a once-in-a-universe statistical accident. If biology can happen twice on one planet, it must have happened countless times on countless other planets. But most of our scientific methods are ill-equipped to discover a shadow biosphere. And that’s a problem, says Carol Cleland, the originator of the term and its biggest proponent. [Continue reading…]
Julian Baggini writes: In California they call it #droughtshaming. People found using lots of water when the state is as dry as a cream cracker are facing trial by hashtag. The actor Tom Selleck is the latest target, accused of taking truckloads of water from a fire hydrant for his thirsty avocado crop.
Most people seem happy to harness the power of shame when the victims are the rich and powerful. But our attitudes to shame are actually much more ambivalent and contradictory. That’s why it was a stroke of genius to call Paul Abbott’s Channel Four series Shameless. We are at a point in our social history where the word is perfectly poised between condemnation and celebration.
Shame, like guilt, is something we often feel we are better off without. The shame culture is strongly associated with oppression. So-called honour killings are inflicted on people who bring shame to their families, often for nothing more than loving the “wrong” person or, most horrifically, for being the victims of rape. In the case of gay people, shame has given way to pride. To be shameless is to be who you are, without apology.
And yet in other contexts we are rather conflicted about the cry of shame. You can protest against honour killings one day, then name and shame tax-evading multinationals the next. When politicians are called shameless, there is no doubt that this is a very bad thing. Shame is like rain: whether it’s good or bad depends on where and how heavily it falls. [Continue reading…]
James McWilliams writes: In January 2010, while driving from Chicago to Minneapolis, Sam McNerney played an audiobook and had an epiphany. The book was Jonah Lehrer’s How We Decide, and the epiphany was that consciousness could reside in the brain. The quest for an empirical understanding of consciousness has long preoccupied neurobiologists. But McNerney was no neurobiologist. He was a twenty-year-old philosophy major at Hamilton College. The standard course work — ancient, modern, and contemporary philosophy — enthralled him. But after this drive, after he listened to Lehrer, something changed. “I had to rethink everything I knew about everything,” McNerney said.
Lehrer’s publisher later withdrew How We Decide for inaccuracies. But McNerney was mentally galvanized for good reason. He had stumbled upon what philosophers call the “Hard Problem” — the quest to understand the enigma of the gap between mind and body. Intellectually speaking, what McNerney experienced was like diving for a penny in a pool and coming up with a gold nugget.
The philosopher Thomas Nagel drew popular attention to the Hard Problem four decades ago in an influential essay titled “What Is It Like to Be a Bat?” Frustrated with the “recent wave of reductionist euphoria,” Nagel challenged the reductive conception of mind — the idea that consciousness resides as a physical reality in the brain — by highlighting the radical subjectivity of experience. His main premise was that “an organism has conscious mental states if and only if there is something that it is like to be that organism.”
If that idea seems elusive, consider it this way: A bat has consciousness only if there is something that it is like for that bat to be a bat. Sam has consciousness only if there is something it is like for Sam to be Sam. You have consciousness only if there is something that it is like for you to be you (and you know that there is). And here’s the key to all this: Whatever that “like” happens to be, according to Nagel, it necessarily defies empirical verification. You can’t put your finger on it. It resists physical accountability.
McNerney returned to Hamilton intellectually turbocharged. This was an idea worth pondering. “It took hold of me,” he said. “It chose me — I know you hear that a lot, but that’s how it felt.” He arranged to do research in cognitive science as an independent study project with Russell Marcus, a trusted professor. Marcus let him loose to write what McNerney calls “a seventy-page hodgepodge of psychological research and philosophy and everything in between.” Marcus remembered the project more charitably, as “a huge, ambitious, wide-ranging, smart, and engaging paper.” Once McNerney settled into his research, Marcus added, “it was like he had gone into a phone booth and come out as a super-student.”
When he graduated in 2011, McNerney was proud. “I pulled it off,” he said about earning a degree in philosophy. Not that he had any hard answers to any big problems, much less the Hard Problem. Not that he had a job. All he knew was that he “wanted to become the best writer and thinker I could be.”
So, as one does, he moved to New York City.
McNerney is the kind of young scholar adored by the humanities. He’s inquisitive, open-minded, thrilled by the world of ideas, and touched with a tinge of old-school transcendentalism. What Emerson said of Thoreau — “he declined to give up his large ambition of knowledge and action for any narrow craft or profession” — is certainly true of McNerney. [Continue reading…]
Peter Singer writes: I met Matt Wage in 2009 when he took my Practical Ethics class at Princeton University. In the readings relating to global poverty and what we ought to be doing about it, he found an estimate of how much it costs to save the life of one of the millions of children who die each year from diseases that we can prevent or cure. This led him to calculate how many lives he could save, over his lifetime, assuming he earned an average income and donated 10 percent of it to a highly effective organization, such as one providing families with bed nets to prevent malaria, a major killer of children. He discovered that he could, with that level of donation, save about one hundred lives. He thought to himself, “Suppose you see a burning building, and you run through the flames and kick a door open, and let one hundred people out. That would be the greatest moment in your life. And I could do as much good as that!”
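The arithmetic behind Wage’s estimate is simple to reproduce. The sketch below uses made-up round numbers for income, career length, and cost per life saved, chosen only so the output lands near the “about one hundred lives” in the text; none of these figures comes from Singer’s excerpt:

```python
# Back-of-the-envelope version of Wage's calculation.
# Every input is an illustrative assumption, not a figure from the text.

average_annual_income = 50_000  # dollars; assumed "average income"
donation_rate = 0.10            # the 10 percent mentioned in the excerpt
working_years = 40              # assumed career length
cost_per_life_saved = 2_000     # dollars; assumed cost for a highly
                                # effective intervention such as bed nets

lifetime_donations = average_annual_income * donation_rate * working_years
lives_saved = lifetime_donations / cost_per_life_saved

print(f"lifetime donations: ${lifetime_donations:,.0f}")  # $200,000
print(f"estimated lives saved: {lives_saved:.0f}")        # ~100
```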
Two years later Wage graduated, receiving the Philosophy Department’s prize for the best senior thesis of the year. He was accepted by the University of Oxford for postgraduate study. Many students who major in philosophy dream of an opportunity like that — I know I did — but by then Wage had done a lot of thinking about what career would do the most good. Over many discussions with others, he came to a very different choice: he took a job on Wall Street, working for an arbitrage trading firm. On a higher income, he would be able to give much more, both as a percentage and in dollars, than 10 percent of a professor’s income. One year after graduating, Wage was donating a six-figure sum — roughly half his annual earnings — to highly effective charities. He was on the way to saving a hundred lives, not over his entire career but within the first year or two of his working life and every year thereafter.
Wage is part of an exciting new movement: effective altruism. At universities from Oxford to Harvard and the University of Washington, from Bayreuth in Germany to Brisbane in Australia, effective altruism organizations are forming. Effective altruists are engaging in lively discussions on social media and websites, and their ideas are being examined in the New York Times, the Washington Post, and even the Wall Street Journal. Philosophy, and more specifically practical ethics, has played an important role in effective altruism’s development, and effective altruism shows that philosophy is returning to its Socratic role of challenging our ideas about what it is to live an ethical life. In doing so, philosophy has demonstrated its ability to transform, sometimes quite dramatically, the lives of those who study it. Moreover, it is a transformation that, I believe, should be welcomed because it makes the world a better place. [Continue reading…]
Tim Flannery writes: In 1609 Galileo Galilei turned his gaze, magnified twentyfold by lenses of Dutch design, toward the heavens, touching off a revolution in human thought. A decade later those same lenses delivered the possibility of a second revolution, when Galileo discovered that by inverting their order he could magnify the very small. For the first time in human history, it lay in our power to see the building blocks of bodies, the causes of diseases, and the mechanism of reproduction. Yet according to Paul Falkowski’s Life’s Engines:
Galileo did not seem to have much interest in what he saw with his inverted telescope. He appears to have made little attempt to understand, let alone interpret, the smallest objects he could observe.
Bewitched by the moons of Jupiter and the challenge they posed to the geocentric model of the universe, Galileo ignored the possibility that the magnified fleas he drew might have anything to do with the plague then ravaging Italy. And so for three centuries more, one of the cruellest of human afflictions would rage on, misunderstood and thus unpreventable, taking the lives of countless millions.
Perhaps it’s fundamentally human both to be awed by the things we look up to and to pass over those we look down on. If so, it’s a tendency that has repeatedly frustrated human progress. Half a century after Galileo looked into his “inverted telescope,” the pioneers of microscopy Antonie van Leeuwenhoek and Robert Hooke revealed that a Lilliputian universe existed all around and even inside us. But neither of them had students, and their researches ended in another false dawn for microscopy. It was not until the middle of the nineteenth century, when German manufacturers began producing superior instruments, that the discovery of the very small began to alter science in fundamental ways.
Today, driven by ongoing technological innovations, the exploration of the “nanoverse,” as the realm of the minuscule is often termed, continues to gather pace. One of the field’s greatest pioneers is Paul Falkowski, a biological oceanographer who has spent much of his scientific career working at the intersection of physics, chemistry, and biology. His book Life’s Engines: How Microbes Made Earth Habitable focuses on one of the most astonishing discoveries of the twentieth century — that our cells contain a series of highly sophisticated “little engines,” or nanomachines, that carry out life’s vital functions. It is a work full of surprises, arguing for example that all of life’s most important innovations were in existence by around 3.5 billion years ago — less than a billion years after Earth formed, and a time when our planet was largely hostile to living things. How such mind-bending complexity could have evolved at such an early stage, and in such a hostile environment, has forced a fundamental reconsideration of the origins of life itself. [Continue reading…]
Rebecca Lemov writes: The protagonist of William Gibson’s 2014 science-fiction novel The Peripheral, Flynne Fisher, works remotely in a way that lends a new and fuller sense to that phrase. The novel features a double future: One set of characters inhabits the near future, ten to fifteen years from the present, while another lives seventy years on, after a breakdown of the climate and multiple other systems that has apocalyptically altered human and technological conditions around the world.
In that “further future,” only 20 percent of the Earth’s human population has survived. Each of these fortunate few is well off and able to live a life transformed by healing nanobots, somaticized e-mail (which delivers messages and calls to the roof of the user’s mouth), quantum computing, and clean energy. For their amusement and profit, certain “hobbyists” in this future have the Borgesian option of cultivating an alternative path in history — it’s called “opening up a stub” — and mining it for information as well as labor.
Flynne, the remote worker, lives on one of those paths. A young woman from the American Southeast, possibly Appalachia or the Ozarks, she favors cutoff jeans and resides in a trailer, eking out a living as a for-hire sub playing video games for wealthy aficionados. Recruited by a mysterious entity that is beta-testing drones that are doing “security” in a murky skyscraper in an unnamed city, she thinks at first that she has been taken on to play a kind of video game in simulated reality. As it turns out, she has been employed to work in the future as an “information flow” — low-wage work, though the pay translates to a very high level of remuneration in the place and time in which she lives.
What is of particular interest is the fate of Flynne’s body. Before she goes to work she must tend to its basic needs (nutrition and elimination), because during her shift it will effectively be “vacant.” Lying on a bed with a special data-transmitting helmet attached to her head, she will be elsewhere, inhabiting an ambulatory robot carapace — a “peripheral” — built out of bio-flesh that can receive her consciousness.
Bodies in this data-driven economic backwater of a future world economy are abandoned for long stretches of time — disposable, cheapened, eerily vacant in the temporary absence of “someone at the helm.” Meanwhile, fleets of built bodies, grown from human DNA, await habitation.
Alex Rivera explores similar territory in his Mexican sci-fi film Sleep Dealer (2008), set in a future world after a wall erected on the US–Mexican border has successfully blocked migrants from entering the United States. Digital networks allow people to connect to strangers all over the world, fostering fantasies of physical and emotional connection. At the same time, low-income would-be migrant workers in Tijuana and elsewhere can opt to do remote work by controlling robots building a skyscraper in a faraway city, locking their bodies into devices that transmit their labor to the site. In tank-like warehouses, lined up in rows of stalls, they “jack in” by connecting data-transmitting cables to nodes implanted in their arms and backs. Their bodies are in Mexico, but their work is in New York or San Francisco, and while they are plugged in and wearing their remote-viewing spectacles, their limbs move like the appendages of ghostly underwater creatures. Their life force drained by the taxing labor, these “sleep dealers” end up as human discards.
What is surprising about these sci-fi conceits, from “transitioning” in The Peripheral to “jacking in” in Sleep Dealer, is how familiar they seem, or at least how closely they reflect certain aspects of contemporary reality. Almost daily, we encounter people who are there but not there, flickering in and out of what we think of as presence. A growing body of research explores the question of how users interact with their gadgets and media outlets, and how in turn these interactions transform social relationships. The defining feature of this heavily mediated reality is our presence “elsewhere,” a removal of at least part of our conscious awareness from wherever our bodies happen to be. [Continue reading…]
The New York Times reports: Since 2007, when scientists announced plans for a Human Microbiome Project to catalog the micro-organisms living in our body, appreciation for the influence of such organisms has grown rapidly with each passing year. Bacteria in the gut produce vitamins and break down our food; their presence or absence has been linked to obesity, inflammatory bowel disease and the toxic side effects of prescription drugs. Biologists now believe that much of what makes us human depends on microbial activity. The two million unique bacterial genes found in each human microbiome can make the 23,000 genes in our cells seem paltry, almost negligible, by comparison. “It has enormous implications for the sense of self,” Tom Insel, the director of the National Institute of Mental Health, told me. “We are, at least from the standpoint of DNA, more microbial than human. That’s a phenomenal insight and one that we have to take seriously when we think about human development.”
Given the extent to which bacteria are now understood to influence human physiology, it is hardly surprising that scientists have turned their attention to how bacteria might affect the brain. Micro-organisms in our gut secrete a vast number of chemicals, and researchers like [Mark] Lyte have found that among those chemicals are the same substances used by our neurons to communicate and regulate mood, like dopamine, serotonin and gamma-aminobutyric acid (GABA). [Continue reading…]
Julia Rosen writes: It’s no secret that water shapes the world around us. Rivers etch great canyons into the Earth’s surface, while glaciers reorganize the topography of entire mountain ranges. But water’s influence on the landscape runs much deeper than this: Water explains why we have land in the first place.
You might think of land as the bits of crust that just happen to jut up above sea level, but that’s mostly not the case. Earth’s continents rise above the seas in part because they are actually made of different stuff than the seafloor. Oceanic crust consists of dense, black basalt, which rides low in the mantle — like a wet log in a river — and eventually sinks back into Earth’s interior. But continental crust floats like a cork, thanks to one special rock: granite. If we didn’t have granite to lift the continents up, a vast ocean would cover our entire planet, with barely any land to speak of.
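Rosen’s cork-and-log analogy is ordinary buoyancy, and a rough isostasy estimate shows how large the effect is. The densities and thicknesses below are typical textbook values, assumed for illustration rather than taken from the article:

```python
# Rough Airy-isostasy sketch: how high does a floating crustal column ride?
# A column of thickness t and density rho_c floating on mantle of density
# rho_m has "freeboard" h = t * (1 - rho_c / rho_m) above the level where
# bare mantle would sit. All values are assumed textbook numbers.

RHO_MANTLE = 3300.0  # kg/m^3

def freeboard_km(thickness_km: float, rho_crust: float) -> float:
    """Height of the column's top above the mantle 'waterline', in km."""
    return thickness_km * (1.0 - rho_crust / RHO_MANTLE)

continent = freeboard_km(35.0, 2700.0)  # thick, buoyant granitic crust
seafloor = freeboard_km(7.0, 3000.0)    # thin, dense basaltic crust

print(f"granitic continent rides ~{continent:.1f} km high")  # ~6.4 km
print(f"basaltic seafloor rides ~{seafloor:.1f} km high")    # ~0.6 km
# The ~6 km difference is roughly the depth of the ocean basins:
# granite is why there is dry land.
```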
Gritty, gray granite and its rocky relatives dominate the continents. Granite forms the sheer walls of Yosemite Valley and the chiseled faces of Mount Rushmore (and also gleams from many a kitchen counter and shower stall). If you don’t see granite at the surface, you can bet it’s hiding just a few kilometers below your feet, unless you’re cruising over the middle of the ocean in a boat or plane. But what’s special about granite is that it’s relatively buoyant, for a rock — and that to make it, you need water. [Continue reading…]
Philip Maughan writes: In 1967, while working with the Swiss photographer Jean Mohr on A Fortunate Man, a book about a country GP serving a deprived community in the Forest of Dean, Gloucestershire, John Berger began to reconsider what the role of a writer should be. “He does more than treat [his patients] when they are ill,” Berger wrote of John Sassall, a man whose proximity to suffering and poverty deeply affected him (Sassall later took his own life). The rural doctor assumes a democratic function, in Berger’s eyes, one he describes in consciously literary terms. “He is the objective witness of their lives,” he says. “The clerk of their records.”
The next five years marked a transition in Berger’s life. By 1972, when the groundbreaking art series Ways of Seeing aired on BBC television, Berger had been living on the Continent for over a decade. He won the Booker Prize for his novel G. the same year, announcing to an astonished audience at the black-tie ceremony in London that he would divide his prize money between the Black Panther Party (he denounced Booker McConnell’s historic links with plantations and indentured labour in the Caribbean) and the funding of his next project with Mohr, A Seventh Man, recording the experiences of migrant workers across Europe.
This is the point at which, for some in England, Berger became a more distant figure. He moved from Switzerland to a remote village in the French Alps two years later. “He thinks and feels what the community incoherently knows,” Berger wrote of Sassall, the “fortunate man”. After time spent working on A Seventh Man, those words were just as applicable to the writer himself. It was Berger who had become a “clerk”, collecting stories from the voiceless and dispossessed – peasants, migrants, even animals – a self-effacing role he would continue to occupy for the next 43 years.
The life and work of John Berger represents a challenge. How best to describe the output of a writer whose bibliography, according to Wikipedia, contains ten “novels”, four “plays”, three collections of “poetry” and 33 books labelled “other”?
“A kind of vicarious autobiography and a history of our time as refracted through the prism of art,” is how the writer Geoff Dyer introduced a selection of Berger’s non-fiction in 2001, though the category doesn’t quite fit. “To separate fact and imagination, event and feeling, protagonist and narrator, is to stay on dry land and never put to sea,” Berger wrote in 1991 in a manifesto (of sorts) inspired by James Joyce’s Ulysses, a book he first read, in French, at the age of 14. [Continue reading…]
Claire Ainsworth writes: Ask me what a genome is, and I, like many science writers, might mutter about it being the genetic blueprint of a living creature. But then I’ll confess that “blueprint” is a lousy metaphor since it implies that the genome is two-dimensional, prescriptive and unresponsive.
Now two new books about the genome show the limitation of that metaphor for something so intricate, complex, multilayered and dynamic. Both underscore the risks of taking metaphors too literally, not just in undermining popular understanding of science, but also in trammelling scientific enquiry. They are for anyone interested in how new discoveries and controversies will transform our understanding of biology and of ourselves.
John Parrington is an associate professor in molecular and cellular pharmacology at the University of Oxford. In The Deeper Genome, he provides an elegant, accessible account of the profound and unexpected complexities of the human genome, and shows how many ideas developed in the 20th century are being overturned.
Take DNA. It’s no simple linear code, but an intricately wound, 3D structure that coils and uncoils as its genes are read and spliced in myriad ways. Forget genes as discrete, protein-coding “beads on a string”: only a tiny fraction of the genome codes for proteins, and anyway, no one knows exactly what a gene is any more. [Continue reading…]