Paul Voosen writes: Last year, as the summer heat broke, a congregation of climate scientists and communicators gathered at the headquarters of the American Association for the Advancement of Science, a granite edifice erected in the heart of Washington, to wail over their collective futility.
Year by year, the evidence for human-caused global warming has grown more robust. Greenhouse gases load the air and sea. Temperatures rise. Downpours strengthen. Ice melts. Yet the American public seems, from cursory glances at headlines and polls, more divided than ever on the basic existence of climate change, in spite of scientists’ many, many warnings. Their message, the attendees fretted, simply wasn’t getting through.
This worry wasn’t just about climate change, but also stem cells. Genetically modified food. Vaccines. Nuclear power. And, of course, evolution: Challenging scientific reality seems to be an increasingly common feature of American life. Some researchers have gone so far as to accuse one political party, the Republicans, of making “science denial” a bedrock principle. The authority attributed to scientists for a century is crumbling.
It is a disturbing story. It is also, in many ways, a fairy tale. So says Dan M. Kahan, a law professor at Yale University who, over the past decade, has run an insurgent research campaign into how the public understands science. Through a magpie synthesis of psychology, risk perception, anthropology, political science, and communication research, leavened with heavy doses of empiricism and idol bashing, he has exposed the tribal biases that mediate our encounters with scientific knowledge. It’s a dynamic he calls cultural cognition. [Continue reading...]
Mark C. Taylor writes: “Sleeker. Faster. More Intuitive” (The New York Times); “Welcome to a world where speed is everything” (Verizon FiOS); “Speed is God, and time is the devil” (chief of Hitachi’s portable-computer division). In “real” time, life speeds up until time itself seems to disappear—fast is never fast enough, everything has to be done now, instantly. To pause, delay, stop, slow down is to miss an opportunity and to give an edge to a competitor. Speed has become the measure of success—faster chips, faster computers, faster networks, faster connectivity, faster news, faster communications, faster transactions, faster deals, faster delivery, faster product cycles, faster brains, faster kids. Why are we so obsessed with speed, and why can’t we break its spell?
The cult of speed is a modern phenomenon. In “The Futurist Manifesto” in 1909, Filippo Tommaso Marinetti declared, “We say that the splendor of the world has been enriched by a new beauty: the beauty of speed.” The worship of speed reflected and promoted a profound shift in cultural values that occurred with the advent of modernity and modernization. With the emergence of industrial capitalism, the primary values governing life became work, efficiency, utility, productivity, and competition. When Frederick Winslow Taylor took his stopwatch to the factory floor in the early 20th century to increase workers’ efficiency, he began a high-speed culture of surveillance so memorably depicted in Charlie Chaplin’s Modern Times. Then, as now, efficiency was measured by the maximization of rapid production through the programming of human behavior.
With the transition from mechanical to electronic technologies, speed increased significantly. The invention of the telegraph, telephone, and stock ticker liberated communication from the strictures imposed by the physical means of conveyance. Previously, messages could be sent no faster than people, horses, trains, or ships could move. By contrast, immaterial words, sounds, information, and images could be transmitted across great distances at very high speed. During the latter half of the 19th century, railway and shipping companies established transportation networks that became the backbone of national and international information networks. When the trans-Atlantic cable (1858) and transcontinental railroad (1869) were completed, the foundation for the physical infrastructure of today’s digital networks was in place.
Fast-forward 100 years. During the latter half of the 20th century, information, communications, and networking technologies expanded rapidly, and transmission speed increased exponentially. But more than data and information were moving faster. Moore’s Law, according to which the speed of computer chips doubles every two years, now seems to apply to life itself. Plugged in 24/7/365, we are constantly struggling to keep up but are always falling further behind. The faster we go, the less time we seem to have. As our lives speed up, stress increases, and anxiety trickles down from managers to workers, and parents to children. [Continue reading...]
Natalie Wolchover writes: Imagine an archipelago where each island hosts a single tortoise species and all the islands are connected — say by rafts of flotsam. As the tortoises interact by dipping into one another’s food supplies, their populations fluctuate.
In 1972, the biologist Robert May devised a simple mathematical model that worked much like the archipelago. He wanted to figure out whether a complex ecosystem can ever be stable or whether interactions between species inevitably lead some to wipe out others. By indexing chance interactions between species as random numbers in a matrix, he calculated the critical “interaction strength” — a measure of the number of flotsam rafts, for example — needed to destabilize the ecosystem. Below this critical point, all species maintained steady populations. Above it, the populations shot toward zero or infinity.
Little did May know, the tipping point he discovered was one of the first glimpses of a curiously pervasive statistical law.
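May’s calculation can be sketched numerically: build a random interaction matrix with self-damping on the diagonal, and check whether any eigenvalue crosses into the positive half-plane, which is where populations stop settling back to equilibrium. This is a minimal illustration, not May’s actual code; the function names are my own, NumPy is assumed, and the matrix here is fully connected rather than sparse as in May’s original model.

```python
import numpy as np

def community_matrix(n_species, strength, rng):
    """A random Jacobian in the spirit of May (1972): each species damps its
    own fluctuations (diagonal -1), while interspecies interactions are
    random numbers scaled by an overall interaction strength."""
    m = strength * rng.standard_normal((n_species, n_species))
    np.fill_diagonal(m, -1.0)
    return m

def is_stable(m):
    """The equilibrium is stable iff every eigenvalue has negative real part."""
    return np.linalg.eigvals(m).real.max() < 0

rng = np.random.default_rng(0)
n = 250
# May's criterion: stability is lost roughly when strength * sqrt(n)
# exceeds the self-damping, i.e. when strength > 1 / sqrt(n).
weak = 0.5 / np.sqrt(n)    # below the critical point: stable
strong = 2.0 / np.sqrt(n)  # above the critical point: unstable
print(is_stable(community_matrix(n, weak, rng)))
print(is_stable(community_matrix(n, strong, rng)))
```

Running this for a sweep of interaction strengths shows the same sharp transition May found: below the threshold the system relaxes; above it, perturbations grow without bound.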
The law appeared in full form two decades later, when the mathematicians Craig Tracy and Harold Widom proved that the critical point in the kind of model May used was the peak of a statistical distribution. Then, in 1999, Jinho Baik, Percy Deift and Kurt Johansson discovered that the same statistical distribution also describes variations in sequences of shuffled integers — a completely unrelated mathematical abstraction. Soon the distribution appeared in models of the wriggling perimeter of a bacterial colony and other kinds of random growth. Before long, it was showing up all over physics and mathematics.
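The “sequences of shuffled integers” result concerns the length of the longest increasing subsequence of a random permutation: its average grows like twice the square root of the sequence length, and the fluctuations around that average follow the Tracy-Widom distribution. A minimal sketch of the underlying computation, using the standard patience-sorting algorithm (the function name is my own):

```python
import bisect
import random

def lis_length(seq):
    """Length of the longest strictly increasing subsequence, computed by
    patience sorting in O(n log n): keep the smallest possible tail of an
    increasing subsequence of each length."""
    piles = []
    for x in seq:
        i = bisect.bisect_left(piles, x)
        if i == len(piles):
            piles.append(x)   # x extends the longest subsequence so far
        else:
            piles[i] = x      # x gives a smaller tail for length i + 1
    return len(piles)

random.seed(1)
n = 10_000
perm = random.sample(range(n), n)  # a uniformly shuffled sequence of integers
# The mean length is about 2 * sqrt(n) (~200 here); repeating this for many
# shuffles and histogramming the deviations traces out the Tracy-Widom curve.
print(lis_length(perm))
```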
Diane Ackerman writes: Last summer, I watched as a small screen in a department store window ran a video of surfing in California. That simple display mesmerized high-heeled, pin-striped, well-coiffed passersby who couldn’t take their eyes off the undulating ocean and curling waves that dwarfed the human riders. Just as our ancient ancestors drew animals on cave walls and carved animals from wood and bone, we decorate our homes with animal prints and motifs, give our children stuffed animals to clutch, cartoon animals to watch, animal stories to read. Our lives trumpet, stomp, and purr with animal tales, such as The Bat Poet, The Velveteen Rabbit, Aesop’s Fables, The Wind in the Willows, The Runaway Bunny, and Charlotte’s Web. I first read these wondrous books as a grown-up, when both the adult and the kid in me were completely spellbound. We call each other by “pet” names, wear animal-print clothes. We ogle plants and animals up close on screens of one sort or another. We may not worship or hunt the animals we see, but we still regard them as necessary physical and spiritual companions. It seems the more we exile ourselves from nature, the more we crave its miracle waters. Yet technological nature can’t completely satisfy that ancient yearning.
What if, through novelty and convenience, digital nature replaces biological nature? Gradually, we may grow used to shallower and shallower experiences of nature. Studies show that we’ll suffer. Richard Louv writes of widespread “nature deficit disorder” among children who mainly play indoors — an oddity quite new in the history of humankind. He documents an upswell in attention disorders, obesity, depression, and lack of creativity. A San Diego fourth-grader once told him: “I like to play indoors because that’s where all the electrical outlets are.” Adults suffer equally. It’s telling that hospital patients with a view of trees heal faster than those gazing at city buildings and parking lots. In studies conducted by Peter H. Kahn and his colleagues at the University of Washington, office workers in windowless cubicles were given flat-screen views of nature. They enjoyed greater health, happiness, and efficiency than those without virtual windows. But they weren’t as happy, healthy, or creative as people given real windows with real views of nature.
As a species, we’ve somehow survived large and small ice ages, genetic bottlenecks, plagues, world wars, and all manner of natural disasters, but I sometimes wonder if we’ll survive our own ingenuity. At first glance, it seems like we may be living in sensory overload. The new technology, for all its boons, also bedevils us with speed demons, alluring distractors, menacing highjinks, cyber-bullies, thought-nabbers, calm-frayers, and a spiky wad of miscellaneous news. Some days it feels like we’re drowning in a twittering bog of information. But, at exactly the same time, we’re living in sensory poverty, learning about the world without experiencing it up close, right here, right now, in all its messy, majestic, riotous detail. Like seeing icebergs without the cold, without squinting in the Antarctic glare, without the bracing breaths of dry air, without hearing the chorus of lapping waves and shrieking gulls. We lose the salty smell of the cold sea, the burning touch of ice. If, reading this, you can taste those sensory details in your mind, is that because you’ve experienced them in some form before, as actual experience? If younger people never experience them, can they respond to words on the page in the same way?
The farther we distance ourselves from the spell of the present, explored by all our senses, the harder it will be to understand and protect nature’s precarious balance, let alone the balance of our own human nature. [Continue reading...]
Philip Ball writes: It was announced in headlines worldwide as one of the biggest scientific discoveries for decades, sure to garner Nobel prizes. But now it looks likely that the alleged evidence of both gravitational waves and ultra-fast expansion of the universe in the big bang (called inflation) has literally turned to dust.
Last March, a team using a telescope called Bicep2 at the South Pole claimed to have read the signatures of these two elusive phenomena in the twisting patterns of the cosmic microwave background radiation: the afterglow of the big bang. But this week, results from an international consortium using a space telescope called Planck show that Bicep2’s data is likely to have come not from the microwave background but from dust scattered through our own galaxy.
Some will regard this as a huge embarrassment, not only for the Bicep2 team but for science itself. Already some researchers have criticised the team for making a premature announcement to the press before their work had been properly peer reviewed.
But there’s no shame here. On the contrary, this episode is good for science. [Continue reading...]
Simultaneously using mobile phones, laptops and other media devices could be changing the structure of our brains, according to new University of Sussex research.
A study published today (24 September) in PLOS ONE reveals that people who frequently use several media devices at the same time have lower grey-matter density in one particular region of the brain compared to those who use just one device occasionally.
The research supports earlier studies showing connections between high media-multitasking activity and poor attention in the face of distractions, along with emotional problems such as depression and anxiety.
But neuroscientists Kep Kee Loh and Dr Ryota Kanai point out that their study reveals a link rather than causality and that a long-term study needs to be carried out to understand whether high concurrent media usage leads to changes in the brain structure, or whether those with less-dense grey matter are more attracted to media multitasking. [Continue reading...]
Scientific American reports: Earth’s magnetic field, which protects the planet from huge blasts of deadly solar radiation, has been weakening over the past six months, according to data collected by a European Space Agency (ESA) satellite array called Swarm.
The biggest weak spots in the magnetic field — which extends 370,000 miles (600,000 kilometers) above the planet’s surface — have sprung up over the Western Hemisphere, while the field has strengthened over areas like the southern Indian Ocean, according to the magnetometers onboard the Swarm satellites — three separate satellites floating in tandem.
The scientists who conducted the study are still unsure why the magnetic field is weakening, but one likely reason is that Earth’s magnetic poles are getting ready to flip, said Rune Floberghagen, the ESA’s Swarm mission manager. In fact, the data suggest magnetic north is moving toward Siberia.
“Such a flip is not instantaneous, but would take many hundred if not a few thousand years,” Floberghagen told Live Science. “They have happened many times in the past.”
Scientists already know that magnetic north shifts. Once every few hundred thousand years the magnetic poles flip so that a compass would point south instead of north. While changes in magnetic field strength are part of this normal flipping cycle, data from Swarm have shown the field is starting to weaken faster than in the past. Previously, researchers estimated the field was weakening about 5 percent per century, but the new data revealed the field is actually weakening at 5 percent per decade, or 10 times faster than thought. As such, rather than the full flip occurring in about 2,000 years, as was predicted, the new data suggest it could happen sooner. [Continue reading...]
Natalie Wolchover writes: One January afternoon five years ago, Princeton geologist Lincoln Hollister opened an email from a colleague he’d never met bearing the subject line, “Help! Help! Help!” Paul Steinhardt, a theoretical physicist and the director of Princeton’s Center for Theoretical Science, wrote that he had an extraordinary rock on his hands, one that he thought was natural but whose origin and formation he could not identify. Hollister had examined tons of obscure rocks over his five-decade career and agreed to take a look.
Originally a dense grain two or three millimeters across that had been ground down into microscopic fragments, the rock was a mishmash of lustrous metal and matte mineral of a yellowish hue. It reminded Hollister of something from Oregon called josephinite. He told Steinhardt that such rocks typically form deep underground at the boundary between Earth’s core and mantle or near the surface due to a particular weathering phenomenon. “Of course, all of that ended up being a false path,” said Hollister, 75. The more the scientists studied the rock, the stranger it seemed.
After five years, approximately 5,000 Steinhardt-Hollister emails and a treacherous journey to the barren arctic tundra of northeastern Russia, the mystery has only deepened. Today, Steinhardt, Hollister and 15 collaborators reported the curious results of a long and improbable detective story. Their findings, detailed in the journal Nature Communications, reveal new aspects of the solar system as it was 4.5 billion years ago: chunks of incongruous metal inexplicably orbiting the newborn sun, a collision of extraordinary magnitude, and the creation of new minerals, including an entire class of matter never before seen in nature. It’s a drama etched in the geochemistry of a truly singular rock. [Continue reading...]
Philip Ball writes: When the German physicist Arnold Sommerfeld assigned his most brilliant student a subject for his doctoral thesis in 1923, he admitted that “I would not have proposed a topic of this difficulty to any of my other pupils.” Those others included such geniuses as Wolfgang Pauli and Hans Bethe, yet for Sommerfeld the only one who was up to the challenge of this subject was Werner Heisenberg.
Heisenberg went on to be a key founder of quantum theory and was awarded the 1932 Nobel Prize in physics. He developed one of the first mathematical descriptions of this new and revolutionary discipline, discovered the uncertainty principle, and together with Niels Bohr engineered the “Copenhagen interpretation” of quantum theory, to which many physicists still adhere today.
The subject of Heisenberg’s doctoral dissertation, however, wasn’t quantum physics. It was harder than that. The 59-page calculation that he submitted to the faculty of the University of Munich in 1923 was titled “On the stability and turbulence of fluid flow.”
Sommerfeld had been contacted by the Isar Company of Munich, which was contracted to prevent the Isar River from flooding by building up its banks. The company wanted to know at what point the river flow changed from being smooth (the technical term is “laminar”) to being turbulent, beset with eddies. That question requires some understanding of what turbulence is. Heisenberg’s work on the problem was impressive—he solved the mathematical equations of flow at the point of the laminar-to-turbulent change—and it stimulated ideas for decades afterward. But he didn’t really crack it—he couldn’t construct a comprehensive theory of turbulence.
Heisenberg was not given to modesty, but it seems he had no illusions about his achievements here. One popular story goes that he once said, “When I meet God, I am going to ask him two questions. Why relativity? And why turbulence? I really believe he will have an answer for the first.”
It is probably an apocryphal tale. The same remark has been attributed to at least one other person: The British mathematician and expert on fluid flow, Horace Lamb, is said to have hoped that God might enlighten him on quantum electrodynamics and turbulence, saying that “about the former I am rather optimistic.”
You get the point: turbulence, a ubiquitous and eminently practical problem in the real world, is frighteningly hard to understand. [Continue reading...]
No, a ‘supercomputer’ did NOT pass the Turing Test for the first time and everyone should know better
Following numerous “reports” (i.e. numerous regurgitations of a press release from Reading University) on an “historic milestone in artificial intelligence” having been passed “for the very first time by supercomputer Eugene Goostman” at an event organized by Professor Kevin Warwick, Mike Masnick writes:
If you’ve spent any time at all in the tech world, you should automatically have red flags raised around that name. Warwick is somewhat infamous for his ridiculous claims to the press, which gullible reporters repeat without question. He’s been doing it for decades. All the way back in 2000, we were writing about all the ridiculous press he got for claiming to be the world’s first “cyborg” after implanting a chip in his arm. There was even a — since taken down — Kevin Warwick Watch website that mocked and categorized all of his media appearances in which gullible reporters simply repeated all of his nutty claims. Warwick had gone quiet for a while, but back in 2010, we wrote about how his lab was getting bogus press for claiming to have “the first human infected with a computer virus.” The Register has rightly referred to Warwick as both “Captain Cyborg” and a “media strumpet” and has been chronicling his escapades in exaggerating bogus stories about the intersection of humans and computers for many, many years.
Basically, any reporter should view extraordinary claims associated with Warwick with extreme caution. But that’s not what happened at all. Instead, as is all too typical with Warwick claims, the press went nutty over it, including publications that should know better.
Anyone can try having a “conversation” with Eugene Goostman.
If the strings of words it spits out give you the impression you’re talking to a human being, that’s probably an indication that you don’t spend enough time talking to human beings.
The New York Times reports: Scientists in the Netherlands have moved a step closer to overriding one of Albert Einstein’s most famous objections to the implications of quantum mechanics, which he described as “spooky action at a distance.”
In a paper published on Thursday in the journal Science, physicists at the Kavli Institute of Nanoscience at the Delft University of Technology reported that they were able to reliably teleport information between two quantum bits separated by three meters, or about 10 feet.
Quantum teleportation is not the “Star Trek”-style movement of people or things; rather, it involves transferring so-called quantum information — in this case what is known as the spin state of an electron — from one place to another without moving the physical matter to which the information is attached.
Classical bits, the basic units of information in computing, can have only one of two values — either 0 or 1. But quantum bits, or qubits, can simultaneously describe many values. They hold out both the possibility of a new generation of faster computing systems and the ability to create completely secure communication networks.
Moreover, the scientists are now closer to definitively proving Einstein wrong in his early disbelief in the notion of entanglement, in which particles separated by light-years can still appear to remain connected, with the state of one particle instantaneously affecting the state of another. [Continue reading...]
Bruce Falconer writes: In July 2012, a commercial fishing charter called Ocean Pearl motored through the frigid waters of the North Pacific. It carried 100 tons of iron dust and a crew of 11, led by a tall and heavyset 62-year-old American named Russ George. Passing beyond Canada’s territorial limit, the vessel arrived at an area of swirling currents known as the Haida eddies. There, in an eddy that had been chosen for the experiment, George and his crew mixed their cargo of iron with seawater and pumped it into the ocean through a hose, turning the waters a cloudy red. In early August, the ship returned to port, where the crew loaded an additional 20 tons of iron. They dumped it near the same Haida eddy a few weeks later, bringing to an end the most audacious and, before long, notorious attempt yet undertaken by man to modify Earth’s climate.
The expedition was grand in its aims and obscure in its patronage. Funding George’s voyage was a village of Haida Indians on Haida Gwaii, a remote Canadian archipelago about 500 miles northwest of Vancouver. George and his business partners had gained the town’s support for a project of dumping iron dust into the ocean to stimulate the growth of a plankton bloom. The plankton would help feed starving salmon, upon which the Haida had traditionally depended for their livelihood, and also remove a million tons of carbon dioxide from the atmosphere. (In algae form, plankton, like all plants, absorbs CO2 through photosynthesis.) The intended result: a replenished fish population—and millions of dollars’ worth of “carbon credits” that could be sold on the international market.
Back on land, in Vancouver, George and his associates drafted a report on the expedition. It claimed that Ocean Pearl had seeded more than 3,800 square miles of barren waters, leaving in its wake “a verdant emerald sea lush with the growth of a hundred million tonnes of plankton.” According to the account, fin, sperm, and sei whales, rarely seen in the region, appeared in large numbers, along with killer whales, dolphins, schools of albacore tuna, and armies of night-feeding squid. Albatross, storm petrels, sooty shearwaters, and other seabirds had circled above the ship, while flocks of Brant geese came to rest on the water and drifted with the bloom.
But George did little to publicize these findings. Instead, he set about compiling the data in private, telling people that he intended to produce a precise estimate of the CO2 he had removed from the atmosphere and then invite an independent auditor to certify his claims.
If that was the plan, it quickly fell apart. In October 2012, the Guardian of London broke the news of George’s expedition, saying it “contravenes two UN conventions” against large-scale ocean fertilization experiments. Numerous media outlets followed up with alarmed, often savage, reports, some of which went so far as to label George a “rogue geoengineer” or “eco-terrorist.” Amid the uproar, Canadian environment minister Peter Kent accused George of “rogue science” and promised that any violation of the country’s environmental law would be “prosecuted to the full extent.”
George, for his part, spoke of media misrepresentation, and he stressed that he was engaged in cautious research. Amid the controversy, in an interview with Scientific American, he was asked whether his iron fertilization had worked. “We don’t know,” he answered. “The correct attitude is: ‘Data, speak to me.’ Do the work, get the data, let it speak to you and tell you what the facts might be.” While most commenters seemed to think George had gone too far, some expressed sympathy—or at least puzzled ambivalence. A Salon headline the following summer asked, “Does Russ George Deserve a Nobel Prize or a Prison Sentence?”
George’s efforts place him in the company of a small but growing group of people convinced that global warming can be halted only with the aid of dramatic intervention in our planet’s natural processes, an approach known as geoengineering. The fixes envisioned by geoengineers range from the seemingly trivial, like painting roads and roofs white to reflect solar radiation, to the extraterrestrial, like a proposal by one Indian physicist to use the explosive power of nuclear fusion to elongate Earth’s orbit by one or two percent, thus reducing solar intensity. (It would also add 5.5 days to the year.)
Because its methods tend to be both ambitious and untested, geoengineering is closely tied to the dynamics of alarm—feeding on it and causing it in equal measure. [Continue reading...]
Evgeny Morozov writes: In “On What We Can Not Do,” a short and pungent essay published a few years ago, the Italian philosopher Giorgio Agamben outlined two ways in which power operates today. There’s the conventional type that seeks to limit our potential for self-development by restricting material resources and banning certain behaviors. But there’s also a subtler, more insidious type, which limits not what we can do but what we can not do. What’s at stake here is not so much our ability to do things but our capacity not to make use of that very ability.
While each of us can still choose not to be on Facebook, have a credit history or build a presence online, can we really afford not to do any of those things today? It was acceptable not to have a cellphone when most people didn’t have them; today, when almost everybody does and when our phone habits can even be used to assess whether we qualify for a loan, such acts of refusal border on the impossible.
For Agamben, it’s this double power “to be and to not be, to do and to not do” that makes us human. This active necessity to choose (and err) contributes to the development of individual faculties that shape our subjectivity. The tragedy of modern man, then, is that “he has become blind not to his capacities but to his incapacities, not to what he can do but to what he cannot, or can not, do.”
This blindness to the question of incapacities mars most popular books on recent advances in our ability to store, analyze and profit from vast amounts of data generated by our gadgets. (Our wherewithal not to call this phenomenon by the ugly, jargony name of Big Data seems itself to be under threat.) The two books under review, alas, are no exception.
In “The Naked Future,” Patrick Tucker, an editor at large for The Futurist magazine, surveys how this influx of readily available data will transform every domain of our existence, from improving our ability to predict earthquakes (thanks to the proliferation of sensors) to producing highly customized education courses that would tailor their content and teaching style, in real time, to the needs of individual students. His verdict: It’s all for the better.
Since most of us lead rather structured, regular lives — work, home, weekend — even a handful of data points (our location, how often we call our friends) proves useful in predicting what we may be doing a day or a year from now. “A flat tire on a Monday at 10 a.m. isn’t actually random. . . . We just don’t yet know how to model it,” Tucker writes.
Seeking to integrate data streams from multiple sources — our inboxes, our phones, our cars and, with its recent acquisition of a company that makes thermostats and smoke detectors, our bedrooms — a company like Google is well positioned not just to predict our future but also to detect just how much risk we take on every day, be it fire, a flat tire or a default on a loan. (Banks and insurance companies beware: You will be disrupted next!)
With so much predictive power, we may soon know the exact price of “preferring not to,” as a modern-day Bartleby might put it. [Continue reading...]
Luke Barnes and Geraint Lewis write: The recent BICEP2 observations – of swirls in the polarisation of the cosmic microwave background – have been proclaimed as many things, from evidence of the Big Bang and gravitational waves to something strange called the multiverse.
The multiverse theory is that our universe is but one of a vast, variegated ensemble of other universes. We don’t know how many pieces there are to the multiverse but estimates suggest there may be squillions of them.
But (if they exist) there has not been enough time since our cosmic beginning for light from these other universes to reach us. They are beyond our cosmic horizon and thus in principle unobservable.
How, then, can cosmologists say they have seen evidence of them?
Unobservable entities aren’t necessarily out-of-bounds for science. For example, protons and neutrons are made of subatomic particles called quarks. While they cannot be observed directly, their existence and properties are inferred from the way particles behave when smashed together.
But there is no such luxury with the multiverse. No signals from other universes have ever reached our telescopes, nor will they ever.
While there is some debate about what actually makes a scientific theory, we should at least ask whether the multiverse theory is testable. Does it make predictions that we can test in a laboratory or with our telescopes? [Continue reading...]
Addy Pross writes: Biology is wondrously strange – so familiar, yet so strikingly different to physics and chemistry. We know where we are with inanimate matter. Ever since Isaac Newton, it has answered to a basically mechanical view of nature, blindly following its laws without regard for purposes. But could there be, as Immanuel Kant put it, a Newton of the blade of grass? Living things might be made of the same fundamental stuff as the rest of the material world – ‘dead’ atoms and molecules – but they do not behave in the same way at all. In fact, they seem so purposeful as to defy the materialist philosophy on which the rest of modern science was built.
Even after Charles Darwin, we continue to struggle with that difference. As any biologist will acknowledge, function and purpose remain central themes in the life sciences, though they have long been banished from the physical sciences. How, then, can living things be reconciled with our mechanical-mechanistic universe? This is a conceptual question, of course, but it has a historical dimension: how did life on Earth actually come about? How could it have? Both at the abstract level and in the particular story of our world, there seems to be a chasm between the animate and inanimate realms.
I believe that it is now possible to bridge that gap. [Continue reading...]
In December, and again in February, at the Google Bus blockades in San Francisco, one thing struck me forcefully: the technology corporation employees waiting for their buses were all staring so intently at their phones that they apparently didn’t notice the unusual things going on around them until their buses were surrounded. Sometimes I feel like I’m living in a science-fiction novel, because my region is so afflicted with people who stare at the tiny screens in their hands on trains, in restaurants, while crossing the street, and too often while driving. San Francisco is, after all, where director Phil Kaufman set the 1978 remake of Invasion of the Body Snatchers, the movie wherein a ferny spore-spouting form of alien life colonizes human beings so that they become zombie-like figures.
In the movies, such colonization took place secretly, or by force, or both: it was a war, and (once upon a time) an allegory for the Cold War and a possible communist takeover. Today, however — Hypercapitalism Invades! — we not only choose to carry around those mobile devices, but pay corporations hefty monthly fees to do so. In return, we get to voluntarily join the great hive of people being in touch all the time, so much so that human nature seems in the process of being remade, with the young immersed in a kind of contact that makes solitude seem like polar ice, something that’s melting away.
We got phones, and then Smart Phones, and then Angry Birds and a million apps — and a golden opportunity to be tracked all the time thanks to the GPS devices in those phones. Your cell phone is the shackle that chains you to corporate America (and potentially to government surveillance as well) and like the ankle bracelets that prisoners wear, it’s your monitor. It connects you to the Internet and so to the brave new world that has such men as Larry Ellison and Mark Zuckerberg in it. That world — maybe not so brave after all — is the subject of Astra Taylor’s necessary, alarming, and exciting new book, The People’s Platform: Taking Back Power and Culture in the Digital Age.
The Internet arose with little regulation, little public decision-making, and a whole lot of fantasy about how it was going to make everyone powerful and how everything would be free. Free, as in unregulated and open, got confused with free, as in not getting paid, and somehow everyone from Facebook to Arianna Huffington created massively lucrative sites (based on advertising dollars) in which the people who made the content went unpaid. Just as Russia woke up with oil oligarchs spreading like mushrooms after a night’s heavy rain, so we woke up with new titans of the tech industry throwing their billionaire weight around. The Internet turns out to be a superb mechanism for consolidating money and power and control, even as it gives toys and minor voices to the rest of us.
As Taylor writes in her book, “The online sphere inspires incessant talk of gift economies and public-spiritedness and democracy, but commercialization and privatization and inequality lurk beneath the surface. This contradiction is captured in a single word: ‘open,’ a concept capacious enough to contain both the communal and capitalistic impulses central to Web 2.0.” And she goes on to discuss “the tendency of open systems to amplify inequality — and new media thinkers’ glib disregard for this fundamental characteristic.” Part of what makes her book exceptional, in fact, is its breadth. It reviews much of the existing critique of the Internet and connects the critiques of specific aspects of it into an overview of how a phenomenon supposed to be wildly democratic has become wildly not that way at all.
And at a certain juncture, she turns to gender. Though far from the only weak point of the Internet as an egalitarian space — after all, there’s privacy (lack of), the environment (massive server farms), and economics (tax cheats, “content providers” like musicians fleeced) — gender politics, as she shows in today’s post adapted from her book, is one of the most spectacular problems online. Let’s imagine this as science fiction: a group of humans apparently dissatisfied with how things were going on Earth — where women were increasing their rights, representation, and participation — left our orbit and started their own society on their own planet. The new planet wasn’t far away or hard to get to (if you could afford the technology): it was called the Internet. We all know it by name; we all visit it; but we don’t name the society that dominates it much.
Taylor does: the dominant society, celebrating itself and pretty much silencing everyone else, makes the Internet bear a striking resemblance to Congress in 1850 or a gentlemen’s club (minus any gentleness). It’s a gated community, and as Taylor describes today, the security detail is ferocious, patrolling its borders by trolling and threatening dissident voices, and just having a female name or being identified as female is enough to become a target of hate and threats.
Early this year, a few essays were published on Internet misogyny that were so compelling I thought 2014 might be the year we revisit these online persecutions, the way that we revisited rape in 2013, thanks to the Steubenville and New Delhi assault cases of late 2012. But the subject hasn’t (yet) quite caught fire, and so not much gets said and less gets done about this dynamic new machinery for privileging male and silencing female voices. Which is why we need to keep examining and discussing this, as well as the other problems of the Internet. And why you need to read Astra Taylor’s book. This excerpt is part of her diagnosis of the problems; the book ends with ideas about a cure. Rebecca Solnit
Open systems and glass ceilings
The disappearing woman and life on the internet
By Astra Taylor
The Web is regularly hailed for its “openness” and that’s where the confusion begins, since “open” in no way means “equal.” While the Internet may create space for many voices, it also reflects and often amplifies real-world inequities in striking ways.
An elaborate system organized around hubs and links, the Web has a surprising degree of inequality built into its very architecture. Its traffic, for instance, tends to be distributed according to “power laws,” which follow what’s known as the 80/20 rule — 80% of a desirable resource goes to 20% of the population.
In fact, as anyone knows who has followed the histories of Google, Apple, Amazon, and Facebook, now among the biggest companies in the world, the Web is increasingly a winner-take-all, rich-get-richer sort of place, which means the disparate percentages in those power laws are only likely to look uglier over time.
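The 80/20 claim above can be made concrete with a toy simulation. This is a minimal sketch, assuming site traffic follows a Pareto distribution (the textbook power law); the shape parameter 1.16 is a conventional choice that yields roughly an 80/20 split in expectation, not a figure from Taylor's book.

```python
import random

random.seed(42)

# Illustrative only: draw per-site "traffic" values from a Pareto
# distribution, the canonical power law. A shape parameter of about
# 1.16 produces the classic 80/20 concentration in expectation.
alpha = 1.16
n = 100_000
traffic = sorted((random.paretovariate(alpha) for _ in range(n)),
                 reverse=True)

# Fraction of all traffic captured by the top 20% of sites.
top_share = sum(traffic[: n // 5]) / sum(traffic)
print(f"Top 20% of sites capture {top_share:.0%} of traffic")
```

Because power-law samples are heavy-tailed, any single run will deviate from an exact 80% share, but the top fifth reliably captures well over half the total — which is the structural point about hubs and links.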
Twice in my life — in the 1960s and the post-9/11 years — I was suddenly aware of clicks and other strange noises on my phone. In both periods, I’ve wondered what the story was, and then made self-conscious jokes with whoever was on the other end of the line about those who might (or might not) be listening in. Twice in my life I’ve felt, up close and personal, that ominous, uncomfortable, twitchy sense of being overheard, without ever knowing if it was a manifestation of the paranoia of the times or of realism — or perhaps of both.
I’m conceptually outraged by mass surveillance, but generally my personal attitude has always been: Go ahead. Read my email, listen to my phone calls, follow my web searches, check out my location via my cell phone. My tweets don’t exist — but if they did, I’d say have at ‘em. I don’t give a damn.
And in some sense, I don’t, even though everyone, including me, is embarrassed by something. Everyone has said something about someone that they would rather not have made public (or perhaps never have said at all). Everyone has some thing — or sometimes many things — they would rather keep to themselves.
Increasingly, however, as the U.S. surveillance state grows ever more pervasive, domestically and globally, as the corporate version of the same expands exponentially, as prying “eyes” and “ears” of every technological variety proliferate, the question of who exactly we are arises. What are we without privacy, without a certain kind of unknowability? What are we when “our” information is potentially anyone’s information? We may soon find out. A recent experiment by two Stanford University graduate students who gathered just a few months’ worth of phone metadata on 546 volunteers has, for instance, made mincemeat of President Obama’s claim that the NSA’s massive version of metadata collection “is not looking at people’s names and they’re not looking at content.” Using only the phone metadata they got, the Stanford researchers “inferred sensitive information about people’s lives, including: neurological and heart conditions, gun ownership, marijuana cultivation, abortion, and participation in Alcoholics Anonymous.”
And that’s just a crude version of what the future holds for all of us. There are various kinds of extinctions. That superb environmental reporter Elizabeth Kolbert has just written a powerful book, The Sixth Extinction, about the more usual (if horrifying) kind. Our developing surveillance world may offer us an example of another kind of extinction: of what we once knew as the private self. If you want to be chilled to the bone when it comes to this, check out today’s stunning report by the ACLU’s Catherine Crump and Matthew Harwood on where the corporate world is taking your identity. Tom Engelhardt
Estimates vary, but by 2020 there could be over 30 billion devices connected to the Internet. Once dumb, they will have smartened up thanks to sensors and other technologies embedded in them and, thanks to your machines, your life will quite literally have gone online.
The implications are revolutionary. Your smart refrigerator will keep an inventory of food items, noting when they go bad. Your smart thermostat will learn your habits and adjust the temperature to your liking. Smart lights will illuminate dangerous parking garages, even as they keep an “eye” out for suspicious activity.
Techno-evangelists have a nice catchphrase for this future utopia of machines and the never-ending stream of information it produces, known as Big Data: the Internet of Things. So abstract. So inoffensive. Ultimately, so meaningless.
A future Internet of Things does have the potential to offer real benefits, but the dark side of that seemingly shiny coin is this: companies will increasingly know all there is to know about you. Most people are already aware that virtually everything a typical person does on the Internet is tracked. In the not-too-distant future, however, real space will be increasingly like cyberspace, thanks to our headlong rush toward that Internet of Things. With the rise of the networked device, what people do in their homes, in their cars, in stores, and within their communities will be monitored and analyzed in ever more intrusive ways by corporations and, by extension, the government.
And one more thing: in cyberspace it is at least theoretically possible to log off. In your own well-wired home, there will be no “opt out.”