On not being there: The data-driven body at work and at play

Rebecca Lemov writes: The protagonist of William Gibson’s 2014 science-fiction novel The Peripheral, Flynne Fisher, works remotely in a way that lends a new and fuller sense to that phrase. The novel features a double future: One set of characters inhabits the near future, ten to fifteen years from the present, while another lives seventy years on, after a breakdown of the climate and multiple other systems that has apocalyptically altered human and technological conditions around the world.

In that “further future,” only 20 percent of the Earth’s human population has survived. Each of these fortunate few is well off and able to live a life transformed by healing nanobots, somaticized e-mail (which delivers messages and calls to the roof of the user’s mouth), quantum computing, and clean energy. For their amusement and profit, certain “hobbyists” in this future have the Borgesian option of cultivating an alternative path in history — it’s called “opening up a stub” — and mining it for information as well as labor.

Flynne, the remote worker, lives on one of those paths. A young woman from the American Southeast, possibly Appalachia or the Ozarks, she favors cutoff jeans and resides in a trailer, eking out a living as a for-hire sub playing video games for wealthy aficionados. Recruited by a mysterious entity that is beta-testing drones providing “security” at a murky skyscraper in an unnamed city, she thinks at first that she has been taken on to play a kind of video game in simulated reality. As it turns out, she has been employed to work in the future as an “information flow” — low-wage work, though the pay translates to a very high level of remuneration in the place and time in which she lives.

What is of particular interest is the fate of Flynne’s body. Before she goes to work she must tend to its basic needs (nutrition and elimination), because during her shift it will effectively be “vacant.” Lying on a bed with a special data-transmitting helmet attached to her head, she will be elsewhere, inhabiting an ambulatory robot carapace — a “peripheral” — built out of bio-flesh that can receive her consciousness.

Bodies in this data-driven economic backwater of a future world economy are abandoned for long stretches of time — disposable, cheapened, eerily vacant in the temporary absence of “someone at the helm.” Meanwhile, fleets of built bodies, grown from human DNA, await habitation.

Alex Rivera explores similar territory in his Mexican sci-fi film Sleep Dealer (2008), set in a future world after a wall erected on the US–Mexican border has successfully blocked migrants from entering the United States. Digital networks allow people to connect to strangers all over the world, fostering fantasies of physical and emotional connection. At the same time, low-income would-be migrant workers in Tijuana and elsewhere can opt to do remote work by controlling robots building a skyscraper in a faraway city, locking their bodies into devices that transmit their labor to the site. In tank-like warehouses, lined up in rows of stalls, they “jack in” by connecting data-transmitting cables to nodes implanted in their arms and backs. Their bodies are in Mexico, but their work is in New York or San Francisco, and while they are plugged in and wearing their remote-viewing spectacles, their limbs move like the appendages of ghostly underwater creatures. Their life force drained by the taxing labor, these “sleep dealers” end up as human discards.

What is surprising about these sci-fi conceits, from “transitioning” in The Peripheral to “jacking in” in Sleep Dealer, is how familiar they seem, or at least how closely they reflect certain aspects of contemporary reality. Almost daily, we encounter people who are there but not there, flickering in and out of what we think of as presence. A growing body of research explores the question of how users interact with their gadgets and media outlets, and how in turn these interactions transform social relationships. The defining feature of this heavily mediated reality is our presence “elsewhere,” a removal of at least part of our conscious awareness from wherever our bodies happen to be. [Continue reading…]


Why the modern world is bad for your brain

Daniel J Levitin writes: Our brains are busier than ever before. We’re assaulted with facts, pseudo facts, jibber-jabber, and rumour, all posing as information. Trying to figure out what you need to know and what you can ignore is exhausting. At the same time, we are all doing more. Thirty years ago, travel agents made our airline and rail reservations, salespeople helped us find what we were looking for in shops, and professional typists or secretaries helped busy people with their correspondence. Now we do most of those things ourselves. We are doing the jobs of 10 different people while still trying to keep up with our lives, our children and parents, our friends, our careers, our hobbies, and our favourite TV shows.

Our smartphones have become Swiss army knife–like appliances that include a dictionary, calculator, web browser, email, Game Boy, appointment calendar, voice recorder, guitar tuner, weather forecaster, GPS, texter, tweeter, Facebook updater, and flashlight. They’re more powerful and do more things than the most advanced computer at IBM corporate headquarters 30 years ago. And we use them all the time, part of a 21st-century mania for cramming everything we do into every single spare moment of downtime. We text while we’re walking across the street, catch up on email while standing in a queue – and while having lunch with friends, we surreptitiously check to see what our other friends are doing. At the kitchen counter, cosy and secure in our domicile, we write our shopping lists on smartphones while we are listening to that wonderfully informative podcast on urban beekeeping.

But there’s a fly in the ointment. Although we think we’re doing several things at once, multitasking, this is a powerful and diabolical illusion. Earl Miller, a neuroscientist at MIT and one of the world experts on divided attention, says that our brains are “not wired to multitask well… When people think they’re multitasking, they’re actually just switching from one task to another very rapidly. And every time they do, there’s a cognitive cost in doing so.” So we’re not actually keeping a lot of balls in the air like an expert juggler; we’re more like a bad amateur plate spinner, frantically switching from one task to another, ignoring the one that is not right in front of us but worried it will come crashing down any minute. Even though we think we’re getting a lot done, ironically, multitasking makes us demonstrably less efficient. [Continue reading…]


Even the tech savvy should turn off their phones and listen

Bruno Giussani writes: Nothing exists nowadays unless it is Facebooked, Tweeted or Instagrammed with emphasis on “insta”. So perhaps the event that I hosted on Tuesday at the Royal Institution, TEDGlobal London, didn’t exist. Because we ran a little experiment, banning the use of smartphones, tablets, laptops, cameras – any electronic device – during the conference.

At the end of the event (which over two sessions of 100 minutes each featured scientists, technologists, historians, a photographer, a slam poet, a singer, a racing car driver and a writer) I asked the 350 attendees whether we should apply the same rule next time. It’s a safe guess that at least two-thirds of them use Twitter or FB with some regularity, but pretty much every hand in the theatre shot up, with maybe two exceptions. I have heard nothing but positive feedback since. [Continue reading…]


Artificial neural networks on acid


Quartz reports: American sci-fi novelist Philip K. Dick once famously asked, Do Androids Dream of Electric Sheep? While he was on the right track, the answer appears to be, no, they don’t. They dream of dog-headed knights atop horses, of camel-birds and pig-snails, and of Dali-esque mutated landscapes.

Google’s image recognition software, which can detect, analyze, and even auto-caption images, uses artificial neural networks to simulate the human brain. In a process they’re calling “inceptionism,” Google engineers set out to see what these artificial networks “dream” of — what, if anything, do they see in a nondescript image of clouds, for instance? What does a fake brain that’s trained to detect images of dogs see when it’s shown a picture of a knight?

Google trains the software by feeding it millions of images, eventually teaching it to recognize specific objects within a picture. When it’s fed an image, it is asked to emphasize the object in the image that it recognizes. The network is made up of layers — the higher the layer, the more precise the interpretation. Eventually, in the final output layer, the network makes a “decision” as to what’s in the image.

But the networks aren’t restricted to only identifying images. Their training allows them to generate images as well. [Continue reading…]
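
What the article describes amounts to gradient ascent on the input image rather than on the network’s weights: pick a layer, then nudge the pixels so that whatever that layer already responds to gets amplified. The sketch below is a minimal approximation of that idea, assuming PyTorch and a pretrained GoogLeNet; the layer choice, step size, iteration count and file names are illustrative, not Google’s actual pipeline.

```python
# A minimal DeepDream-style sketch: amplify whatever a chosen layer responds to
# in an input image. PyTorch/torchvision assumed; layer, step size, iteration
# count and file names are illustrative, not Google's actual settings.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.googlenet(weights="DEFAULT").eval()
for p in model.parameters():
    p.requires_grad_(False)

activations = {}
model.inception4c.register_forward_hook(
    lambda module, inp, out: activations.update(target=out)
)

img = T.Compose([T.Resize(224), T.ToTensor()])(Image.open("clouds.jpg"))
img = img.unsqueeze(0).requires_grad_(True)

for _ in range(20):
    model(img)
    # Gradient ascent on the pixels: strengthen the layer's response, which
    # exaggerates whatever patterns the network already "sees" in the clouds.
    loss = activations["target"].norm()
    loss.backward()
    with torch.no_grad():
        img += 0.01 * img.grad / (img.grad.abs().mean() + 1e-8)
        img.grad.zero_()

T.ToPILImage()(img.detach().squeeze(0).clamp(0, 1)).save("dream.jpg")
```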


How technology is damaging our brains

The New York Times reports: When one of the most important e-mail messages of his life landed in his in-box a few years ago, Kord Campbell overlooked it.

Not just for a day or two, but 12 days. He finally saw it while sifting through old messages: a big company wanted to buy his Internet start-up.

“I stood up from my desk and said, ‘Oh my God, oh my God, oh my God,’ ” Mr. Campbell said. “It’s kind of hard to miss an e-mail like that, but I did.”

The message had slipped by him amid an electronic flood: two computer screens alive with e-mail, instant messages, online chats, a Web browser and the computer code he was writing.

While he managed to salvage the $1.3 million deal after apologizing to his suitor, Mr. Campbell continues to struggle with the effects of the deluge of data. Even after he unplugs, he craves the stimulation he gets from his electronic gadgets. He forgets things like dinner plans, and he has trouble focusing on his family.

His wife, Brenda, complains, “It seems like he can no longer be fully in the moment.”

This is your brain on computers.

Scientists say juggling e-mail, phone calls and other incoming information can change how people think and behave. They say our ability to focus is being undermined by bursts of information.

These play to a primitive impulse to respond to immediate opportunities and threats. The stimulation provokes excitement — a dopamine squirt — that researchers say can be addictive. In its absence, people feel bored.

The resulting distractions can have deadly consequences, as when cellphone-wielding drivers and train engineers cause wrecks. And for millions of people like Mr. Campbell, these urges can inflict nicks and cuts on creativity and deep thought, interrupting work and family life.

While many people say multitasking makes them more productive, research shows otherwise. Heavy multitaskers actually have more trouble focusing and shutting out irrelevant information, scientists say, and they experience more stress.

And scientists are discovering that even after the multitasking ends, fractured thinking and lack of focus persist. In other words, this is also your brain off computers. [Continue reading…]


The secret history of the vocoder


Why the singularity is greatly exaggerated

Ken Goldberg, Professor of Industrial Engineering and Operations Research at the University of California, Berkeley, interviewed by Jeanne Carstensen.

In 1968, Marvin Minsky said, “Within a generation we will have intelligent computers like HAL in the film, 2001.” What made him and other early AI proponents think machines would think like humans?

Even before Moore’s law there was the idea that computers are going to get faster and their clumsy behavior is going to get a thousand times better. It’s what Ray Kurzweil now claims. He says, “OK, we’re moving up this curve in terms of the number of neurons, number of processing units, so by this projection we’re going to be at super-human levels of intelligence.” But that’s deceptive. It’s a fallacy. Just adding more speed or neurons or processing units doesn’t mean you end up with a smarter or more capable system. What you need are new algorithms, new ways of understanding a problem. In the area of creativity, it’s not at all clear that a faster computer is going to get you there. You’re just going to come up with more bad, bland, boring things. That ability to distinguish, to filter out what’s interesting, that’s still elusive.

Today’s computers, though, can generate an awful lot of connections in split seconds.

But generating is fairly easy and testing pretty hard. In Robert Altman’s movie, The Player, they try to combine two movies to make a better one. You can imagine a computer that just takes all movie titles and tries every combination of pairs, like Reservoir Dogs meets Casablanca. I could write that program right now on my laptop and just let it run. It would instantly generate all possible combinations of movies and there will be some good ones. But recognizing them, that’s the hard part.
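
The generating half of that thought experiment really is a few lines of code; here is a sketch of it, with a tiny illustrative title list standing in for a full catalogue.

```python
# The "generate every pairing" half of the movie machine. Enumerating pitches
# is trivial; judging them is the part nothing here attempts. The title list
# is a tiny illustrative stand-in for a real catalogue.
from itertools import combinations

titles = ["Reservoir Dogs", "Casablanca", "The Player", "Alien", "Groundhog Day"]
pitches = [f"{a} meets {b}" for a, b in combinations(titles, 2)]

print(len(pitches), "pitches, first one:", pitches[0])
# With 10,000 titles the same loop yields roughly 50 million unordered pairings
# (100 million counting order), and nothing in it can say which would be good.
```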

That’s the part you need humans for.

Right, the Tim Robbins movie exec character says, “I listen to stories and decide if they’ll make good movies or not.” The great majority of combinations won’t work, but every once in a while there’s one that is both new and interesting. In early AI it seemed like the testing was going to be easy. But we haven’t been able to figure out the filtering.

Can’t you write a creativity algorithm?

If you want to do variations on a theme, like Thomas Kinkade, sure. Take our movie machine. Let’s say there have been 10,000 movies — that’s 10,000 squared, or 100 million combinations of pairs of movies. We can build a classifier that would look at lots of pairs of successful movies and do some kind of inference on it so that it could learn what would be successful again. But it would be looking for patterns that are already existent. It wouldn’t be able to find that new thing that was totally out of left field. That’s what I think of as creativity — somebody comes up with something really new and clever. [Continue reading…]
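
A rough sketch of the kind of classifier Goldberg gestures at, assuming scikit-learn; the feature matrix and success labels below are random placeholders for whatever description of past movie pairings one might actually assemble.

```python
# A hedged sketch of a pair-success classifier in the spirit of the interview.
# X stands in for features describing pairs of past movies (genres, budgets,
# eras, ...); y marks whether the pairing "worked". Both are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12))          # placeholder pair features
y = (X[:, 0] + X[:, 3] > 0).astype(int)  # placeholder "success" labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
# The model can only interpolate among combinations that resemble past
# successes; a pairing "out of left field" has no precedent in X, so this
# kind of filter cannot single it out.
```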


We are ignoring the new machine age at our peril

John Naughton writes: As a species, we don’t seem to be very good at dealing with nonlinearity. We cope moderately well with situations and environments that are changing gradually. But sudden, major discontinuities – what some people call “tipping points” – leave us spooked. That’s why we are so perversely relaxed about climate change, for example: things are changing slowly, imperceptibly almost, but so far there hasn’t been the kind of sharp, catastrophic change that would lead us seriously to recalibrate our behaviour and attitudes.

So it is with information technology. We know – indeed, it has become a cliche – that computing power has been doubling at least every two years since records of these things began. We know that the amount of data now generated by our digital existence is expanding annually at an astonishing rate. We know that our capacity to store digital information has been increasing exponentially. And so on. What we apparently have not sussed, however, is that these various strands of technological progress are not unconnected. Quite the contrary, and therein lies our problem.

The thinker who has done most to explain the consequences of connectedness is a Belfast man named W Brian Arthur, an economist who was the youngest person ever to occupy an endowed chair at Stanford University and who in later years has been associated with the Santa Fe Institute, one of the world’s leading interdisciplinary research institutes. In 2009, he published a remarkable book, The Nature of Technology, in which he formulated a coherent theory of what technology is, how it evolves and how it spurs innovation and industry. Technology, he argued, “builds itself organically from itself” in ways that resemble chemistry or even organic life. And implicit in Arthur’s conception of technology is the idea that innovation is not linear, but what mathematicians call “combinatorial”, ie one driven by a whole bunch of things. And the significant point about combinatorial innovation is that it brings about radical discontinuities that nobody could have anticipated. [Continue reading…]
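
Arthur’s “combinatorial” claim can be made concrete with a toy count: if new technologies arise by combining existing ones, the space of possible combinations grows quadratically or faster in the number of building blocks, even while each individual strand of progress looks smooth. The numbers below are purely illustrative.

```python
# Toy illustration of combinatorial growth in the space of possible pairings
# of existing technologies. Figures are illustrative only.
from math import comb

for n in [10, 100, 1_000, 10_000]:
    print(f"{n:>6} building blocks -> {comb(n, 2):>12,} possible pairings")
# Each tenfold increase in components multiplies the pairings roughly a
# hundredfold, so recombination-driven progress looks discontinuous even when
# every individual strand advances smoothly.
```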


The complexity of science

Leonard Mlodinow writes: The other week I was working in my garage office when my 14-year-old daughter, Olivia, came in to tell me about Charles Darwin. Did I know that he discovered the theory of evolution after studying finches on the Galápagos Islands? I was steeped in what felt like the 37th draft of my new book, which is on the development of scientific ideas, and she was proud to contribute this tidbit of history that she had just learned in class.

Sadly, like many stories of scientific discovery, that commonly recounted tale, repeated in her biology textbook, is not true.

The popular history of science is full of such falsehoods. In the case of evolution, Darwin was a much better geologist than ornithologist, at least in his early years. And while he did notice differences among the birds (and tortoises) on the different islands, he didn’t think them important enough to make a careful analysis. His ideas on evolution did not come from the mythical Galápagos epiphany, but evolved through many years of hard work, long after he had returned from the voyage. (To get an idea of the effort involved in developing his theory, consider this: One byproduct of his research was a 684-page monograph on barnacles.)

The myth of the finches obscures the qualities that were really responsible for Darwin’s success: the grit to formulate his theory and gather evidence for it; the creativity to seek signs of evolution in existing animals, rather than, as others did, in the fossil record; and the open-mindedness to drop his belief in creationism when the evidence against it piled up.

The mythical stories we tell about our heroes are always more romantic and often more palatable than the truth. But in science, at least, they are destructive, in that they promote false conceptions of the evolution of scientific thought. [Continue reading…]


Chain reactions spreading ideas through science and culture

David Krakauer writes: On Dec. 2, 1942, just over three years into World War II, President Roosevelt was sent the following enigmatic cable: “The Italian navigator has landed in the new world.” The accomplishments of Christopher Columbus had long since ceased to be newsworthy. The progress of the Italian physicist, Enrico Fermi, navigator across the territories of Lilliputian matter — the abode of the microcosm of the atom — was another thing entirely. Fermi’s New World, discovered beneath a Midwestern football field in Chicago, was the province of newly synthesized radioactive elements. And Fermi’s landing marked the earliest sustained and controlled nuclear chain reaction required for the construction of an atomic bomb.

This physical chain reaction was one of the links of scientific and cultural chain reactions initiated by the Hungarian physicist, Leó Szilárd. The first was in 1933, when Szilárd proposed the idea of a neutron chain reaction. Another was in 1939, when Szilárd and Einstein sent the now famous “Szilárd-Einstein” letter to Franklin D. Roosevelt informing him of the destructive potential of atomic chain reactions: “This new phenomenon would also lead to the construction of bombs, and it is conceivable — though much less certain — that extremely powerful bombs of a new type may thus be constructed.”

This scientific information in turn generated political and policy chain reactions: Roosevelt created the Advisory Committee on Uranium which led in yearly increments to the National Defense Research Committee, the Office of Scientific Research and Development, and finally, the Manhattan Project.

Life itself is a chain reaction. Consider a cell that divides into two cells and then four and then eight great-granddaughter cells. Infectious diseases are chain reactions. Consider a contagious virus that infects one host that infects two or more susceptible hosts, in turn infecting further hosts. News is a chain reaction. Consider a report spread from one individual to another, who in turn spreads the message to their friends and then on to the friends of friends.

These numerous connections that fasten together events are like expertly arranged dominoes of matter, life, and culture. As the modernist designer Charles Eames would have it, “Eventually everything connects — people, ideas, objects. The quality of the connections is the key to quality per se.”

Dominoes, atoms, life, infection, and news — all yield domino effects that require a sensitive combination of distances between pieces, physics of contact, and timing. When any one of these ingredients is off-kilter, the propagating cascade is likely to come to a halt. Premature termination is exactly what we might want to happen to a deadly infection, but it is the last thing that we want to impede an idea. [Continue reading…]
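
Whether any of these cascades keeps going or peters out turns on its effective branching factor: on average, does each event trigger more than one successor? The branching-process sketch below illustrates that threshold; the parameters are illustrative and not drawn from the essay.

```python
# Minimal branching-process (Galton-Watson) sketch of a chain reaction.
# Each event triggers a Poisson-distributed number of successors; the mean
# of that distribution is the "branching factor". Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)

def cascade_size(branching_factor, cap=100_000):
    """Total number of events in one simulated cascade, truncated at `cap`."""
    active, total = 1, 1
    while active and total < cap:
        offspring = int(rng.poisson(branching_factor, size=active).sum())
        total += offspring
        active = offspring
    return total

for r in [0.8, 1.0, 1.3]:
    sizes = [cascade_size(r) for _ in range(200)]
    runaway = sum(s >= 100_000 for s in sizes)
    print(f"branching factor {r}: median size {int(np.median(sizes))}, "
          f"{runaway}/200 runs hit the cap")
# Below 1 the dominoes reliably peter out; above 1 a sizeable fraction of runs
# take off, which is the difference between a contained outbreak and a
# self-sustaining reaction.
```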


What does it mean to preserve nature in the Age of Humans?

By Ben A Minteer, Arizona State University and Stephen Pyne, Arizona State University

Is the Earth now spinning through the “Age of Humans”? More than a few scientists think so. They’ve suggested, in fact, that we modify the name of the current geological epoch (the Holocene, which began roughly 12,000 years ago) to the “Anthropocene.” It’s a term first put into wide circulation by Nobel Prize-winning atmospheric chemist Paul Crutzen in an article published in Nature in 2002. And it’s stirring up a good deal of debate, not only among geologists.

The idea is that we needed a new planetary marker to account for the scale of human changes to the Earth: extensive land transformation, mass extinctions, control of the nitrogen cycle, large-scale water diversion, and especially change of the atmosphere through the emission of greenhouse gases. Although naming geological epochs isn’t usually a controversial act, the Anthropocene proposal is radical because it means that what had been an environmental fixture against which people acted, the geological record, is now just another expression of the human presence.

It seems to be a particularly bitter pill to swallow for nature preservationists, heirs to the American tradition led by writers, scientists and activists such as John Muir, Aldo Leopold, David Brower, Rachel Carson and Edward Abbey. That’s because some have argued the traditional focus on the goal of wilderness protection rests on a view of “pristine” nature that is simply no longer viable on a planet hurtling toward nine billion human inhabitants.

Given this situation, we felt the time was ripe to explore the impact of the Anthropocene on the idea and practice of nature preservation. Our plan was to create a salon, a kind of literary summit. But we wanted to cut to the chase: What does it mean to “save American nature” in the age of humans?

We invited a distinguished group of environmental writers – scientists, philosophers, historians, journalists, agency administrators and activists – to give it their best shot. The essays appear in the new collection, After Preservation: Saving American Nature in the Age of Humans.

[Read more…]


Why your employer would like to replace you with a machine

Zeynep Tufekci writes: The machine hums along, quietly scanning the slides, generating Pap smear diagnostics, just the way a college-educated, well-compensated lab technician might.

A robot with emotion-detection software interviews visitors to the United States at the border. In field tests, this eerily named “embodied avatar kiosk” does much better than humans in catching those with invalid documentation. Emotional-processing software has gotten so good that ad companies are looking into “mood-targeted” advertising, and the government of Dubai wants to use it to scan all its closed-circuit TV feeds.

Yes, the machines are getting smarter, and they’re coming for more and more jobs.

Not just low-wage jobs, either.

Today, machines can process regular spoken language and not only recognize human faces, but also read their expressions. They can classify personality types, and have started being able to carry out conversations with appropriate emotional tenor.

Machines are getting better than humans at figuring out who to hire, who’s in a mood to pay a little more for that sweater, and who needs a coupon to nudge them toward a sale. In applications around the world, software is being used to predict whether people are lying, how they feel and whom they’ll vote for.

To crack these cognitive and emotional puzzles, computers needed not only sophisticated, efficient algorithms, but also vast amounts of human-generated data, which can now be easily harvested from our digitized world. The results are dazzling. Most of what we think of as expertise, knowledge and intuition is being deconstructed and recreated as an algorithmic competency, fueled by big data.

But computers do not just replace humans in the workplace. They shift the balance of power even more in favor of employers. Our normal response to technological innovation that threatens jobs is to encourage workers to acquire more skills, or to trust that the nuances of the human mind or human attention will always be superior in crucial ways. But when machines of this capacity enter the equation, employers have even more leverage, and our standard response is not sufficient for the looming crisis. [Continue reading…]


Climate scientists need to produce more ‘actionable science’

John Upton writes: When a San Francisco panel began mulling rules about building public projects near changing shorelines, its self-described science translator, David Behar, figured he would just turn to the U.N.’s most recent climate assessment for guidance on future sea levels.

He couldn’t.

Nor could Behar, leader of the city utility department’s climate program, get what he needed from a 2012 National Research Council report dealing with West Coast sea level rise projections. A National Climate Assessment paper dealing with sea level rise didn’t seem to have what he needed, either. Even after reviewing two California government reports dealing with sea level rise, Behar says he had to telephone climate scientists and review a journal paper summarizing the views of 90 experts before he felt confident that he understood science’s latest projections for hazards posed by the onslaught of rising seas.

“You sometimes have to interview the authors of these reports to actually understand what they’re saying,” Behar said. “On the surface,” the assessments and reports that Behar turned to “all look like they’re saying different things,” he said. “But when you dive deeper — with the help of the authors, in most cases — they don’t disagree with one another very much.”

Governments around the world, from Madison, Wis., and New York City to the Obama Administration and the European Union, have begun striving in recent years to adapt to the growing threats posed by climate change. But the burst of adaptation planning threatens to be hobbled by cultural and linguistic divides between those who practice science and those who prepare policy. [Continue reading…]


How Yitang Zhang rose from obscurity and a disadvantaged youth to mathematical celebrity

Thomas Lin writes: As a boy in Shanghai, China, Yitang Zhang believed he would someday solve a great problem in mathematics. In 1964, at around the age of nine, he found a proof of the Pythagorean theorem, which describes the relationship between the lengths of the sides of any right triangle. He was 10 when he first learned about two famous number theory problems, Fermat’s last theorem and the Goldbach conjecture. While he was not yet aware of the centuries-old twin primes conjecture, he was already taken with prime numbers, often described as indivisible “atoms” that make up all other natural numbers.

But soon after, the anti-intellectual Cultural Revolution shuttered schools and sent him and his mother to the countryside to work in the fields. Because of his father’s troubles with the Communist Party, Zhang was also unable to attend high school. For 10 years, he worked as a laborer, reading books on math, history and other subjects when he could.

Not long after the revolution ended, Zhang, then 23, enrolled at Peking University and became one of China’s top math students. After completing his master’s at the age of 29, he was recruited by T. T. Moh to pursue a doctorate at Purdue University in West Lafayette, Ind. But, promising though he was, after defending his dissertation in 1991 he could not find academic work as a mathematician.

In George Csicsery’s new documentary film Counting From Infinity, Zhang discusses his difficulties at Purdue and in the years that followed. He says his doctoral adviser never wrote recommendation letters for him. (Moh has written that Zhang did not ask for any.) Zhang admits that his shy, quiet demeanor didn’t help in building relationships or making himself known to the wider math community. During this initial job-hunting period, Zhang sometimes lived in his car, according to his friend Jacob Chi, music director of the Pueblo Symphony in Colorado. In 1992, Zhang began working at another friend’s Subway sandwich restaurant. For about seven years he worked odd jobs for various friends.

In 1999, at 44, Zhang caught a break. [Continue reading…]


Stardust

Ray Jayawardhana writes: Joni Mitchell beat Carl Sagan to the punch. She sang “we are stardust, billion-year-old carbon” in her 1970 song “Woodstock.” That was three years before Mr. Sagan wrote about humans’ being made of “star-stuff” in his book “The Cosmic Connection” — a point he would later convey to a far larger audience in his 1980 television series, “Cosmos.”

By now, “stardust” and “star-stuff” have nearly turned cliché. But that does not make the reality behind those words any less profound or magical: The iron in our blood, the calcium in our bones and the oxygen we breathe are the physical remains — ashes, if you will — of stars that lived and died long ago.

That discovery is relatively recent. Four astrophysicists developed the idea in a landmark paper published in 1957. They argued that almost all the elements in the periodic table were cooked up over time through nuclear reactions inside stars — rather than in the first instants of the Big Bang, as previously thought. The stuff of life, in other words, arose in places and times somewhat more accessible to our telescopic investigations.

Since most of us spend our lives confined to a narrow strip near Earth’s surface, we tend to think of the cosmos as a lofty, empyrean realm far beyond our reach and relevance. We forget that only a thin sliver of atmosphere separates us from the rest of the universe. [Continue reading…]


Electric cars that run on coal

Climate Central: The world must move quickly to make electric vehicles more climate-friendly, or it may not be able to meet its climate goals.

That’s the conclusion of a University of Toronto paper published in the March edition of Nature Climate Change, which argues that countries need to reduce the carbon intensity of their electric power supply in order to make electric transportation systems and other infrastructure an effective strategy for combating climate change.

Think about it this way: Every Nissan Leaf might run on electric power, but how that electricity was generated determines what greenhouse gas emissions the car is responsible for. If the car is charged on solar or geothermal power, the carbon footprint may be minuscule. If it’s charged on electricity generated using coal, it might prove as bad for the climate as burning gasoline, or worse. (Climate Central created a road map for climate-friendly cars in 2013 showing where driving electric vehicles is most climate friendly in the U.S.)
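
The coal-versus-gasoline comparison is easy to rough out with back-of-envelope figures; the numbers below are common approximations, not values from the University of Toronto paper.

```python
# Back-of-envelope comparison; rough approximations, not figures from the
# University of Toronto paper:
#   gasoline combustion ~ 8.9 kg CO2 per US gallon, 30 mpg car as baseline;
#   an EV using ~0.30 kWh per mile; coal power ~1.0 kg CO2 per kWh generated.
GASOLINE_KG_PER_GALLON = 8.9
CAR_MPG = 30.0
EV_KWH_PER_MILE = 0.30

gasoline_g_per_mile = GASOLINE_KG_PER_GALLON / CAR_MPG * 1000

for source, kg_co2_per_kwh in [("coal", 1.0), ("natural gas", 0.45), ("solar/wind", 0.03)]:
    ev_g_per_mile = EV_KWH_PER_MILE * kg_co2_per_kwh * 1000
    print(f"EV charged on {source:>11}: ~{ev_g_per_mile:4.0f} g CO2/mile "
          f"(gasoline baseline ~{gasoline_g_per_mile:.0f} g/mile)")
# On a coal-heavy grid the EV lands in the same range as the gasoline car;
# on a clean grid it is an order of magnitude lower, which is why the paper
# focuses on the carbon intensity of the power supply.
```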

The University of Toronto paper establishes an emissions threshold to help governments and consumers better understand whether it helps the climate to push for electric cars and the electrification of other modes of transportation based on the carbon intensity of the electricity those vehicles use. [Continue reading…]


Science’s embarrassing fossil fuel problem

Alice Bell writes: An investigation by Greenpeace and the Climate Investigations Centre, reported in the Guardian and New York Times this weekend, showed that Willie Soon — an apparently ‘scientific’ voice for climate scepticism — had accepted more than $1.2 million from the fossil-fuel industry over the past 14 years.

As Suzanne Goldenberg’s report stresses, although those seeking to delay action to curb carbon emissions were keen to cite and fund Soon’s Harvard-Smithsonian credentials, he did not enjoy the same sort of recognition from the scientific community. He did not receive grants from Nasa or the National Science Foundation, for example — the sorts of institutions that funded his colleagues at the Center for Astrophysics. Moreover, it appears that Soon violated the ethical guidelines of the journals that published his work by not disclosing such funding. It seems to be a story of someone working outside the usual codes of modern science.

But Soon is not a singular aberration in the story of science’s relationship with the fossil fuel industry. It goes deeper than that.

Science and engineering are suffused with oil, gas and, yes, even coal. We must look this squarely in the eye if we’re going to tackle climate change.

The fossil fuel industry is sometimes labelled anti-science, but that’s far from the truth. It loves science — or at least particular bits of science — and indeed it needs science. The fossil fuel industry needs the science and engineering community to train staff, to gather information and to help develop new techniques. Science and engineering also provide the industry with cultural credibility and can open up powerful political spaces within which to lobby. [Continue reading…]


Too many worlds

Philip Ball writes: In July 2011, participants at a conference on the placid shore of Lake Traunsee in Austria were polled on what they thought the meeting was about. You might imagine that this question would have been settled in advance, but since the broad theme was quantum theory, perhaps a degree of uncertainty was to be expected. The title of the conference was ‘Quantum Physics and the Nature of Reality’. The poll, completed by 33 of the participating physicists, mathematicians and philosophers, posed a range of unresolved questions about the relationship between those two things, one of which was: ‘What is your favourite interpretation of quantum mechanics?’

The word ‘favourite’ speaks volumes. Isn’t science supposed to be decided by experiment and observation, free from personal preferences? But experiments in quantum physics have been obstinately silent on what it means. All we can do is develop hunches, intuitions and, yes, cherished ideas. Of these, the survey offered no fewer than 11 to choose from (as well as ‘other’ and ‘none’).

The most popular (supported by 42 per cent of the very small sample) was basically the view put forward by Niels Bohr, Werner Heisenberg and their colleagues in the early days of quantum theory. Today it is known as the Copenhagen Interpretation. More on that below. You might not recognise most of the other alternatives, such as Quantum Bayesianism, Relational Quantum Mechanics, and Objective Collapse (which is not, as you might suppose, just saying ‘what the hell’). Maybe you haven’t heard of the Copenhagen Interpretation either. But in third place (18 per cent) was the Many Worlds Interpretation (MWI), and I suspect you do know something about that, since the MWI is the one with all the glamour and publicity. It tells us that we have multiple selves, living other lives in other universes, quite possibly doing all the things that we dream of but will never achieve (or never dare). Who could resist such an idea?

Yet resist we should. We should resist not just because MWI is unlikely to be true, or even because, since no one knows how to test it, the idea is perhaps not truly scientific at all. Those are valid criticisms, but the main reason we should hold out is that it is incoherent, both philosophically and logically. There could be no better contender for Wolfgang Pauli’s famous put-down: it is not even wrong. [Continue reading…]
