Category Archives: Science/Technology

The man who wants to save the world and make a little money on the side

Bruce Falconer writes: In July 2012, a commercial fishing charter called Ocean Pearl motored through the frigid waters of the North Pacific. It carried 100 tons of iron dust and a crew of 11, led by a tall and heavyset 62-year-old American named Russ George. Passing beyond Canada’s territorial limit, the vessel arrived at an area of swirling currents known as the Haida eddies. There, in an eddy that had been chosen for the experiment, George and his crew mixed their cargo of iron with seawater and pumped it into the ocean through a hose, turning the waters a cloudy red. In early August, the ship returned to port, where the crew loaded an additional 20 tons of iron. They dumped it near the same Haida eddy a few weeks later, bringing to an end the most audacious and, before long, notorious attempt yet undertaken by man to modify Earth’s climate.

The expedition was grand in its aims and obscure in its patronage. Funding George’s voyage was a village of Haida Indians on Haida Gwaii, a remote Canadian archipelago about 500 miles northwest of Vancouver. George and his business partners had gained the town’s support for a project of dumping iron dust into the ocean to stimulate the growth of a plankton bloom. The plankton would help feed starving salmon, upon which the Haida had traditionally depended for their livelihood, and also remove a million tons of carbon dioxide from the atmosphere. (In algae form, plankton, like all plants, absorbs CO2 through photosynthesis.) The intended result: a replenished fish population—and millions of dollars’ worth of “carbon credits” that could be sold on the international market.

Back on land, in Vancouver, George and his associates drafted a report on the expedition. It claimed that Ocean Pearl had seeded more than 3,800 square miles of barren waters, leaving in its wake “a verdant emerald sea lush with the growth of a hundred million tonnes of plankton.” According to the account, fin, sperm, and sei whales, rarely seen in the region, appeared in large numbers, along with killer whales, dolphins, schools of albacore tuna, and armies of night-feeding squid. Albatross, storm petrels, sooty shearwaters, and other seabirds had circled above the ship, while flocks of Brant geese came to rest on the water and drifted with the bloom.

But George did little to publicize these findings. Instead, he set about compiling the data in private, telling people that he intended to produce a precise estimate of the CO2 he had removed from the atmosphere and then invite an independent auditor to certify his claims.

If that was the plan, it quickly fell apart. In October 2012, the Guardian of London broke the news of George’s expedition, saying it “contravenes two UN conventions” against large-scale ocean fertilization experiments. Numerous media outlets followed up with alarmed, often savage, reports, some of which went so far as to label George a “rogue geoengineer” or “eco-terrorist.” Amid the uproar, Canadian environment minister Peter Kent accused George of “rogue science” and promised that any violation of the country’s environmental law would be “prosecuted to the full extent.”

George, for his part, spoke of media misrepresentation, and he stressed that he was engaged in cautious research. Amid the controversy, in an interview with Scientific American, he was asked whether his iron fertilization had worked. “We don’t know,” he answered. “The correct attitude is: ‘Data, speak to me.’ Do the work, get the data, let it speak to you and tell you what the facts might be.” While most commenters seemed to think George had gone too far, some expressed sympathy—or at least puzzled ambivalence. A Salon headline the following summer asked, “Does Russ George Deserve a Nobel Prize or a Prison Sentence?”

George’s efforts place him in the company of a small but growing group of people convinced that global warming can be halted only with the aid of dramatic intervention in our planet’s natural processes, an approach known as geoengineering. The fixes envisioned by geoengineers range from the seemingly trivial, like painting roads and roofs white to reflect solar radiation, to the extraterrestrial, like a proposal by one Indian physicist to use the explosive power of nuclear fusion to elongate Earth’s orbit by one or two percent, thus reducing solar intensity. (It would also add 5.5 days to the year.)

Because its methods tend to be both ambitious and untested, geoengineering is closely tied to the dynamics of alarm—feeding on it and causing it in equal measure. [Continue reading…]


The Naked Future and Social Physics — review

Evgeny Morozov writes: In “On What We Can Not Do,” a short and pungent essay published a few years ago, the Italian philosopher Giorgio Agamben outlined two ways in which power operates today. There’s the conventional type that seeks to limit our potential for self-development by restricting material resources and banning certain behaviors. But there’s also a subtler, more insidious type, which limits not what we can do but what we can not do. What’s at stake here is not so much our ability to do things but our capacity not to make use of that very ability.

While each of us can still choose not to be on Facebook, have a credit history or build a presence online, can we really afford not to do any of those things today? It was acceptable not to have a cellphone when most people didn’t have them; today, when almost everybody does and when our phone habits can even be used to assess whether we qualify for a loan, such acts of refusal border on the impossible.

For Agamben, it’s this double power “to be and to not be, to do and to not do” that makes us human. This active necessity to choose (and err) contributes to the development of individual faculties that shape our subjectivity. The tragedy of modern man, then, is that “he has become blind not to his capacities but to his incapacities, not to what he can do but to what he cannot, or can not, do.”

This blindness to the question of incapacities mars most popular books on recent advances in our ability to store, analyze and profit from vast amounts of data generated by our gadgets. (Our wherewithal not to call this phenomenon by the ugly, jargony name of Big Data seems itself to be under threat.) The two books under review, alas, are no exception.

In “The Naked Future,” Patrick Tucker, an editor at large for The Futurist magazine, surveys how this influx of readily available data will transform every domain of our existence, from improving our ability to predict earthquakes (thanks to the proliferation of sensors) to producing highly customized education courses that would tailor their content and teaching style, in real time, to the needs of individual students. His verdict: It’s all for the better.

Since most of us lead rather structured, regular lives — work, home, weekend — even a handful of data points (our location, how often we call our friends) proves useful in predicting what we may be doing a day or a year from now. “A flat tire on a Monday at 10 a.m. isn’t actually random. . . . We just don’t yet know how to model it,” Tucker writes.

Seeking to integrate data streams from multiple sources — our inboxes, our phones, our cars and, with its recent acquisition of a company that makes thermostats and smoke detectors, our bedrooms — a company like Google is well positioned not just to predict our future but also to detect just how much risk we take on every day, be it fire, a flat tire or a default on a loan. (Banks and insurance companies beware: You will be disrupted next!)

With so much predictive power, we may soon know the exact price of “preferring not to,” as a modern-day Bartleby might put it. [Continue reading…]


Have cosmologists lost their minds in the multiverse?

Luke Barnes and Geraint Lewis write: The recent BICEP2 observations – of swirls in the polarisation of the cosmic microwave background – have been proclaimed as many things, from evidence of the Big Bang and gravitational waves to something strange called the multiverse.

The multiverse theory is that our universe is but one of a vast, variegated ensemble of other universes. We don’t know how many pieces there are to the multiverse but estimates suggest there may be squillions of them.

But (if they exist) there has not been enough time since our cosmic beginning for light from these other universes to reach us. They are beyond our cosmic horizon and thus in principle unobservable.

How, then, can cosmologists say they have seen evidence of them?

Unobservable entities aren’t necessarily out-of-bounds for science. For example, protons and neutrons are made of subatomic particles called quarks. While they cannot be observed directly, their existence and properties are inferred from the way particles behave when smashed together.

But there is no such luxury with the multiverse. No signals from other universes have ever bothered our telescopes, nor will they ever.

While there is some debate about what actually makes a scientific theory, we should at least ask whether the multiverse theory is testable. Does it make predictions that we can test in a laboratory or with our telescopes? [Continue reading…]


Why does life resist disorder?

Addy Pross writes: Biology is wondrously strange – so familiar, yet so strikingly different to physics and chemistry. We know where we are with inanimate matter. Ever since Isaac Newton, it has answered to a basically mechanical view of nature, blindly following its laws without regard for purposes. But could there be, as Immanuel Kant put it, a Newton of the blade of grass? Living things might be made of the same fundamental stuff as the rest of the material world – ‘dead’ atoms and molecules – but they do not behave in the same way at all. In fact, they seem so purposeful as to defy the materialist philosophy on which the rest of modern science was built.

Even after Charles Darwin, we continue to struggle with that difference. As any biologist will acknowledge, function and purpose remain central themes in the life sciences, though they have long been banished from the physical sciences. How, then, can living things be reconciled with our mechanical-mechanistic universe? This is a conceptual question, of course, but it has a historical dimension: how did life on Earth actually come about? How could it have? Both at the abstract level and in the particular story of our world, there seems to be a chasm between the animate and inanimate realms.

I believe that it is now possible to bridge that gap. [Continue reading…]


Astra Taylor: Misogyny and the cult of internet openness

In December, and again in February, at the Google Bus blockades in San Francisco, one thing struck me forcefully: the technology corporation employees waiting for their buses were all staring so intently at their phones that they apparently didn’t notice the unusual things going on around them until their buses were surrounded. Sometimes I feel like I’m living in a science-fiction novel, because my region is so afflicted with people who stare at the tiny screens in their hands on trains, in restaurants, while crossing the street, and too often while driving. San Francisco is, after all, where director Phil Kaufman set the 1978 remake of Invasion of the Body Snatchers, the movie wherein a ferny spore-spouting form of alien life colonizes human beings so that they become zombie-like figures.

In the movies, such colonization took place secretly, or by force, or both: it was a war, and (once upon a time) an allegory for the Cold War and a possible communist takeover. Today, however — Hypercapitalism Invades! — we not only choose to carry around those mobile devices, but pay corporations hefty monthly fees to do so. In return, we get to voluntarily join the great hive of people being in touch all the time, so much so that human nature seems in the process of being remade, with the young immersed in a kind of contact that makes solitude seem like polar ice, something that’s melting away.

We got phones, and then Smart Phones, and then Angry Birds and a million apps — and a golden opportunity to be tracked all the time thanks to the GPS devices in those phones. Your cell phone is the shackle that chains you to corporate America (and potentially to government surveillance as well) and like the ankle bracelets that prisoners wear, it’s your monitor. It connects you to the Internet and so to the brave new world that has such men as Larry Ellison and Mark Zuckerberg in it. That world — maybe not so brave after all — is the subject of Astra Taylor’s necessary, alarming, and exciting new book, The People’s Platform: Taking Back Power and Culture in the Digital Age.

The Internet arose with little regulation, little public decision-making, and a whole lot of fantasy about how it was going to make everyone powerful and how everything would be free. Free, as in unregulated and open, got confused with free, as in not getting paid, and somehow everyone from Facebook to Arianna Huffington created massively lucrative sites (based on advertising dollars) in which the people who made the content went unpaid. Just as Russia woke up with oil oligarchs spreading like mushrooms after a night’s heavy rain, so we woke up with new titans of the tech industry throwing their billionaire weight around. The Internet turns out to be a superb mechanism for consolidating money and power and control, even as it gives toys and minor voices to the rest of us.

As Taylor writes in her book, “The online sphere inspires incessant talk of gift economies and public-spiritedness and democracy, but commercialization and privatization and inequality lurk beneath the surface. This contradiction is captured in a single word: ‘open,’ a concept capacious enough to contain both the communal and capitalistic impulses central to Web 2.0.” And she goes on to discuss “the tendency of open systems to amplify inequality — and new media thinkers’ glib disregard for this fundamental characteristic.” Part of what makes her book exceptional, in fact, is its breadth. It reviews much of the existing critique of the Internet and connects the critiques of specific aspects of it into an overview of how a phenomenon supposed to be wildly democratic has become wildly not that way at all.

And at a certain juncture, she turns to gender. Though far from the only weak point of the Internet as an egalitarian space — after all, there’s privacy (lack of), the environment (massive server farms), and economics (tax cheats, “content providers” like musicians fleeced) — gender politics, as she shows in today’s post adapted from her book, is one of the most spectacular problems online. Let’s imagine this as science fiction: a group of humans apparently dissatisfied with how things were going on Earth — where women were increasing their rights, representation, and participation — left our orbit and started their own society on their own planet. The new planet wasn’t far away or hard to get to (if you could afford the technology): it was called the Internet. We all know it by name; we all visit it; but we don’t name the society that dominates it much.

Taylor does: the dominant society, celebrating itself and pretty much silencing everyone else, makes the Internet bear a striking resemblance to Congress in 1850 or a gentlemen’s club (minus any gentleness). It’s a gated community, and as Taylor describes today, the security detail is ferocious, patrolling its borders by trolling and threatening dissident voices, and just having a female name or being identified as female is enough to become a target of hate and threats.

Early this year, a few essays were published on Internet misogyny that were so compelling I thought 2014 might be the year we revisit these online persecutions, the way that we revisited rape in 2013, thanks to the Steubenville and New Delhi assault cases of late 2012. But the subject hasn’t (yet) quite caught fire, and so not much gets said and less gets done about this dynamic new machinery for privileging male and silencing female voices. Which is why we need to keep examining and discussing this, as well as the other problems of the Internet. And why you need to read Astra Taylor’s book. This excerpt is part of her diagnosis of the problems; the book ends with ideas about a cure. Rebecca Solnit

Open systems and glass ceilings
The disappearing woman and life on the internet
By Astra Taylor

The Web is regularly hailed for its “openness” and that’s where the confusion begins, since “open” in no way means “equal.” While the Internet may create space for many voices, it also reflects and often amplifies real-world inequities in striking ways.

An elaborate system organized around hubs and links, the Web has a surprising degree of inequality built into its very architecture. Its traffic, for instance, tends to be distributed according to “power laws,” which follow what’s known as the 80/20 rule — 80% of a desirable resource goes to 20% of the population.
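
Power laws of this sort are easy to play with directly. The short Python sketch below is my own illustration, not something from Taylor's book: the Pareto shape parameter and the number of sites are invented assumptions chosen to approximate the 80/20 rule, and the script simply checks what share of simulated traffic goes to the top 20% of sites.

```python
import numpy as np

# A minimal sketch of "80/20"-style concentration under a power law.
# The shape parameter and site count are illustrative assumptions,
# not figures taken from Taylor's book.
rng = np.random.default_rng(0)

# numpy's pareto() draws from a Lomax distribution; adding 1 shifts it to a
# classic Pareto with minimum value 1. A shape of ~1.16 corresponds to the
# textbook 80/20 split.
traffic = rng.pareto(1.16, size=10_000) + 1

traffic_sorted = np.sort(traffic)[::-1]
top_fifth = traffic_sorted[: len(traffic_sorted) // 5]
share = top_fifth.sum() / traffic.sum()

print(f"Top 20% of simulated sites receive {share:.0%} of all traffic")
```

In a typical run the top fifth of sites captures on the order of four-fifths of the traffic, though the heavy tail makes individual runs vary widely, which is the "rich-get-richer" dynamic described in the next paragraph.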

In fact, as anyone knows who has followed the histories of Google, Apple, Amazon, and Facebook, now among the biggest companies in the world, the Web is increasingly a winner-take-all, rich-get-richer sort of place, which means the disparate percentages in those power laws are only likely to look uglier over time.

Continue reading


Catherine Crump and Matthew Harwood: The net closes around us

Twice in my life — in the 1960s and the post-9/11 years — I was suddenly aware of clicks and other strange noises on my phone.  In both periods, I’ve wondered what the story was, and then made self-conscious jokes with whoever was on the other end of the line about those who might (or might not) be listening in.  Twice in my life I’ve felt, up close and personal, that ominous, uncomfortable, twitchy sense of being overheard, without ever knowing if it was a manifestation of the paranoia of the times or of realism — or perhaps of both.

I’m conceptually outraged by mass surveillance, but generally my personal attitude has always been: Go ahead.  Read my email, listen to my phone calls, follow my web searches, check out my location via my cell phone.  My tweets don’t exist — but if they did, I’d say have at ‘em.  I don’t give a damn.

And in some sense, I don’t, even though everyone, including me, is embarrassed by something.  Everyone says something about someone they would rather not have made public (or perhaps have even said).  Everyone has some thing — or sometimes many things — they would rather keep to themselves.

Increasingly, however, as the U.S. surveillance state grows ever more pervasive, domestically and globally, as the corporate version of the same expands exponentially, as prying “eyes” and “ears” of every technological variety proliferate, the question of who exactly we are arises.  What are we without privacy, without a certain kind of unknowability?  What are we when “our” information is potentially anyone’s information?  We may soon find out.  A recent experiment by two Stanford University graduate students who gathered just a few months’ worth of phone metadata on 546 volunteers has, for instance, made mincemeat of President Obama’s claim that the NSA’s massive version of metadata collection “is not looking at people’s names and they’re not looking at content.”  Using only the phone metadata they got, the Stanford researchers “inferred sensitive information about people’s lives, including: neurological and heart conditions, gun ownership, marijuana cultivation, abortion, and participation in Alcoholics Anonymous.”

And that’s just a crude version of what the future holds for all of us.  There are various kinds of extinctions.  That superb environmental reporter Elizabeth Kolbert has just written a powerful book, The Sixth Extinction, about the more usual (if horrifying) kind.  Our developing surveillance world may offer us an example of another kind of extinction: of what we once knew as the private self.  If you want to be chilled to the bone when it comes to this, check out today’s stunning report by the ACLU’s Catherine Crump and Matthew Harwood on where the corporate world is taking your identity. Tom Engelhardt

Invasion of the data snatchers
Big Data and the Internet of Things means the surveillance of everything
By Catherine Crump and Matthew Harwood

Estimates vary, but by 2020 there could be over 30 billion devices connected to the Internet. Once dumb, they will have smartened up thanks to sensors and other technologies embedded in them and, thanks to your machines, your life will quite literally have gone online. 

The implications are revolutionary. Your smart refrigerator will keep an inventory of food items, noting when they go bad. Your smart thermostat will learn your habits and adjust the temperature to your liking. Smart lights will illuminate dangerous parking garages, even as they keep an “eye” out for suspicious activity.

Techno-evangelists have a nice catchphrase for this future utopia of machines and the never-ending stream of information, known as Big Data, it produces: the Internet of Things.  So abstract. So inoffensive. Ultimately, so meaningless.

A future Internet of Things does have the potential to offer real benefits, but the dark side of that seemingly shiny coin is this: companies will increasingly know all there is to know about you.  Most people are already aware that virtually everything a typical person does on the Internet is tracked. In the not-too-distant future, however, real space will be increasingly like cyberspace, thanks to our headlong rush toward that Internet of Things. With the rise of the networked device, what people do in their homes, in their cars, in stores, and within their communities will be monitored and analyzed in ever more intrusive ways by corporations and, by extension, the government.

And one more thing: in cyberspace it is at least theoretically possible to log off.  In your own well-wired home, there will be no “opt out.”

Continue reading


Revelations of NSA spying cost U.S. tech companies

The New York Times reports: Microsoft has lost customers, including the government of Brazil.

IBM is spending more than a billion dollars to build data centers overseas to reassure foreign customers that their information is safe from prying eyes in the United States government.

And tech companies abroad, from Europe to South America, say they are gaining customers that are shunning United States providers, suspicious because of the revelations by Edward J. Snowden that tied these providers to the National Security Agency’s vast surveillance program.

Even as Washington grapples with the diplomatic and political fallout of Mr. Snowden’s leaks, the more urgent issue, companies and analysts say, is economic. Technology executives, including Mark Zuckerberg of Facebook, raised the issue when they went to the White House on Friday for a meeting with President Obama.

It is impossible to see now the full economic ramifications of the spying disclosures — in part because most companies are locked in multiyear contracts — but the pieces are beginning to add up as businesses question the trustworthiness of American technology products. [Continue reading…]


Facebook’s DeepFace will soon be analyzing your face

PolicyMic: Facebook is officially one step closer to becoming Big Brother. Last week, the company announced a new, startlingly accurate photo recognition program, ominously called DeepFace.

The program can compare two photos and determine whether they are of the same person with 97.25% accuracy — just a hair below the 97.53% facial recognition rate of actual human beings.

DeepFace goes way beyond Facebook’s current facial recognition that suggests tags for images. [Continue reading…]


In unseen worlds, science invariably crosses paths with fantasy

Philip Ball writes: For centuries, scientists studied light to comprehend the visible world. Why are things colored? What is a rainbow? How do our eyes work? And what is light itself? These are questions that have preoccupied scientists and philosophers since the time of Aristotle, including Roger Bacon, Isaac Newton, Michael Faraday, Thomas Young, and James Clerk Maxwell.

But in the late 19th century all that changed, and it was largely Maxwell’s doing. This was the period in which the whole focus of physics — then still emerging as a distinct scientific discipline — shifted from the visible to the invisible. Light itself was instrumental to that change. Not only were the components of light invisible “fields,” but light was revealed as merely a small slice of a rainbow extending far into the unseen.

Physics has never looked back. Today its theories and concepts are concerned largely with invisible entities: not only unseen force fields and insensible rays but particles too small to see even with the most advanced microscopes. We now know that our everyday perception grants us access to only a tiny fraction of reality. Telescopes responding to radio waves, infrared radiation, and X-rays have vastly expanded our view of the universe, while electron microscopes, X-ray beams, and other fine probes of nature’s granularity have unveiled the microworld hidden beyond our visual acuity. Theories at the speculative forefront of physics flesh out this unseen universe with parallel worlds and with mysterious entities named for their very invisibility: dark matter and dark energy.

This move beyond the visible has become a fundamental part of science’s narrative. But it’s a more complicated shift than we often appreciate. Making sense of what is unseen — of what lies “beyond the light” — has a much longer history in human experience. Before science had the means to explore that realm, we had to make do with stories that became enshrined in myth and folklore. Those stories aren’t banished as science advances; they are simply reinvented. Scientists working at the forefront of the invisible will always be confronted with gaps in knowledge, understanding, and experimental capability. In the face of those limits, they draw unconsciously on the imagery of the old stories. This is a necessary part of science, and these stories can sometimes suggest genuinely productive scientific ideas. But the danger is that we will start to believe them at face value, mistaking them for theories.

A backward glance at the history of the invisible shows how the narratives and tropes of myth and folklore can stimulate science, while showing that the truth will probably turn out to be far stranger and more unexpected than these old stories can accommodate. [Continue reading…]


How America’s billionaires are taking over science

William J. Broad writes: American science, long a source of national power and pride, is increasingly becoming a private enterprise.

In Washington, budget cuts have left the nation’s research complex reeling. Labs are closing. Scientists are being laid off. Projects are being put on the shelf, especially in the risky, freewheeling realm of basic research. Yet from Silicon Valley to Wall Street, science philanthropy is hot, as many of the richest Americans seek to reinvent themselves as patrons of social progress through science research.

The result is a new calculus of influence and priorities that the scientific community views with a mix of gratitude and trepidation.

“For better or worse,” said Steven A. Edwards, a policy analyst at the American Association for the Advancement of Science, “the practice of science in the 21st century is becoming shaped less by national priorities or by peer-review groups and more by the particular preferences of individuals with huge amounts of money.”

They have mounted a private war on disease, with new protocols that break down walls between academia and industry to turn basic discoveries into effective treatments. They have rekindled traditions of scientific exploration by financing hunts for dinosaur bones and giant sea creatures. They are even beginning to challenge Washington in the costly game of big science, with innovative ships, undersea craft and giant telescopes — as well as the first private mission to deep space.

The new philanthropists represent the breadth of American business, people like Michael R. Bloomberg, the former New York mayor (and founder of the media company that bears his name), James Simons (hedge funds) and David H. Koch (oil and chemicals), among hundreds of wealthy donors. Especially prominent, though, are some of the boldest-faced names of the tech world, among them Bill Gates (Microsoft), Eric E. Schmidt (Google) and Lawrence J. Ellison (Oracle).

This is philanthropy in the age of the new economy — financed with its outsize riches, practiced according to its individualistic, entrepreneurial creed. The donors are impatient with the deliberate, and often politicized, pace of public science, they say, and willing to take risks that government cannot or simply will not consider.

Yet that personal setting of priorities is precisely what troubles some in the science establishment. Many of the patrons, they say, are ignoring basic research — the kind that investigates the riddles of nature and has produced centuries of breakthroughs, even whole industries — for a jumble of popular, feel-good fields like environmental studies and space exploration.

As the power of philanthropic science has grown, so has the pitch, and the edge, of the debate. Nature, a family of leading science journals, has published a number of wary editorials, one warning that while “we applaud and fully support the injection of more private money into science,” the financing could also “skew research” toward fields more trendy than central.

“Physics isn’t sexy,” William H. Press, a White House science adviser, said in an interview. “But everybody looks at the sky.”

Fundamentally at stake, the critics say, is the social contract that cultivates science for the common good. They worry that the philanthropic billions tend to enrich elite universities at the expense of poor ones, while undermining political support for federally sponsored research and its efforts to foster a greater diversity of opportunity — geographic, economic, racial — among the nation’s scientific investigators.

Historically, disease research has been particularly prone to unequal attention along racial and economic lines. A look at major initiatives suggests that the philanthropists’ war on disease risks widening that gap, as a number of the campaigns, driven by personal adversity, target illnesses that predominantly afflict white people — like cystic fibrosis, melanoma and ovarian cancer. [Continue reading…]


Is the lopsided Universe telling us we need new theories?

Ars Technica reports: The Universe is incredibly regular. The variation of the cosmos’ temperature across the entire sky is tiny: a few millionths of a degree, no matter which direction you look. Yet the same light from the very early cosmos that reveals the Universe’s evenness also tells astronomers a great deal about the conditions that gave rise to irregularities like stars, galaxies, and (incidentally) us.

That light is the cosmic microwave background, and it provides some of the best knowledge we have about the structure, content, and history of the Universe. But it also contains a few mysteries: on very large scales, the cosmos seems to have a certain lopsidedness. That slight asymmetry is reflected in temperature fluctuations much larger than any galaxy, aligned on the sky in a pattern facetiously dubbed “the axis of evil.”

The lopsidedness is real, but cosmologists are divided over whether it reveals anything meaningful about the fundamental laws of physics. The fluctuations are sufficiently small that they could arise from random chance. We have just one observable Universe, but nobody sensible believes we can see all of it. With a sufficiently large cosmos beyond the reach of our telescopes, the rest of the Universe may balance the oddity that we can see, making it a minor, local variation.

However, if the asymmetry can’t be explained away so simply, it could indicate that some new physical mechanisms were at work in the early history of the Universe. As Amanda Yoho, a graduate student in cosmology at Case Western Reserve University, told Ars, “I think the alignments, in conjunction with all of the other large angle anomalies, must point to something we don’t know, whether that be new fundamental physics, unknown astrophysical or cosmological sources, or something else.” [Continue reading…]


How people are becoming the property of technology companies

Following Facebook’s $19 billion acquisition of WhatsApp, Reuven Cohen writes: In November 2013, a survey of smartphone owners found that WhatsApp was the leading social messaging app in countries including Spain, Switzerland, Germany and Japan. Yet at 450 million users and growing, there is a strong likelihood that both Facebook and WhatsApp share the majority of the same user base. So what’s driving the massive valuation? One answer might be users’ attention. Unlike many other mobile apps, WhatsApp users actually use this service on an ongoing daily or even hourly basis.

“Attention,” write Thomas Mandel and Gerard Van der Leun in their 1996 book Rules of the Net, “is the hard currency of cyberspace.” This has never been truer.

WhatsApp’s value may not have much to do with the disruption of the telecom world as much as a looming battle for Internet users’ rapidly decreasing attention spans. A study back in 2011 uncovered the reality for most mobile apps: most people never use an app more than once. According to the study, 26% of the time customers never give the app a second try. With an ever-increasing number of apps competing for users’ attention, the only real metric that matters is whether or not they actually use it. Your attention may very well be the fundamental value behind Facebook’s purchase.

In a 1997 Wired article, author Michael H. Goldhaber describes the shift towards the so-called Attention Economy: “Attention has its own behavior, its own dynamics, its own consequences. An economy built on it will be different than the familiar material-based one.”

His thesis is that as the Internet becomes an increasingly strong presence in the overall economy and our daily lives, the flow of attention will not only anticipate the flow of money, but also eventually replace it altogether. Fast-forward 17 years and his thesis has never been more true.

As we become ever more bombarded with information, the value of this information decreases. Just look at the improvements made to Facebook’s news feed over the years. In an attempt to make its news feed more useful, the company has implemented advanced algorithms that attempt to tailor the flow of information to your specific interests. The better Facebook gets at keeping your attention, the more valuable you become. Yes, you are the product. [Continue reading…]

To the extent that corporations are in the business of corralling, controlling, and effectively claiming ownership of people’s attention, the only way of finding freedom in such a world will derive from each individual’s effort to cultivate their own powers of autonomous attention.


Unscientific Americans?

Science World Report: About 25% of Americans don’t know that the Earth revolves around the Sun. A poll from the National Science Foundation was the bearer of the bad news.

You might be thinking that a minute sample size was taken among a population of the severely uninformed, but evidence suggests otherwise. The National Science Foundation conducts a poll to measure scientific literacy each year. This year, 2,200 people were asked ten questions about physical and biological sciences, and about one in four people did not know that the Earth revolves around the Sun, a conclusion Nicolaus Copernicus published in 1543.

Now before jumping to the conclusion that this provides yet more evidence that Americans are strikingly ignorant, it’s worth noting that among citizens of the European Union polled in 2005, 29% believed the Sun revolves around the Earth.

But hold on — here’s perhaps the most revealing element in popular awareness about basic science on both sides of the Atlantic: both population groups demonstrated a better understanding of plate tectonics.

83% of Americans and 87% of Europeans understand that “the continents on which we live have been moving for millions of years and will continue to move in the future.”

Does this mean that continental drift is an easier concept to grasp than the structure of the solar system?

I don’t think so. Neither do I think that a quarter of Americans believe in Ptolemaic astronomy. It seems more likely that a significant number of people who are not speaking their native language find the question — Does the Earth go around the Sun, or does the Sun go around the Earth? — grammatically challenging.

In other words, this poll may reveal less about what people believe than it reveals about how well they understand what they are being asked.


Why this climate scientist’s libel case matters

Union of Concerned Scientists: Back in 2012, after the Competitive Enterprise Institute and the National Review each published pieces that likened climate scientist Michael E. Mann to a child molester and called his work a fraud, Mann fought back with a lawsuit, charging them with libel. Now, in a preliminary ruling, a Superior Court Judge has sided with Mann, paving the way for the case to move forward and potentially setting an important precedent about the limits of disinformation.

The ruling, in essence, reinforces the wise adage attributed to former New York Sen. Daniel Patrick Moynihan that, while everyone is entitled to his or her own opinion, we are not each entitled to our own facts. But first, some background.

Michael Mann, a world-renowned climate scientist at Penn State University, is perhaps best known as the author of the so-called “Hockey Stick” graph. Some 15 years ago, in 1999, Mann and two colleagues published data they had compiled from tree rings, coral growth bands and ice cores, as well as more recent temperature measurements, to chart 1,000 years’ worth of climate data. The resulting graph of their findings showed relatively stable global temperatures followed by a steep warming trend beginning in the 1900s. One of Mann’s colleagues gave it the nickname because the graph looks something like a hockey stick lying on its side with the upturned blade representing the sharp, comparatively recent temperature increase. It quickly became one of the most famous, easy-to-grasp representations of the reality of global warming.

The United Nations’ Intergovernmental Panel on Climate Change featured Mann’s work, among similar studies, in their pathbreaking 2001 report, concluding that temperature increases in the 20th century were likely to have been “the largest of any century during the past 1,000 years.” But while Mann’s peer-reviewed research pointed clearly to a human role in global warming, it also made Mann a lightning rod for attacks from those, including many in the fossil fuel industry, who sought to deny the reality of global warming. [Continue reading…]


Douglas Hofstadter — Research on artificial intelligence is sidestepping the core question: how do people think?

Douglas Hofstadter is a cognitive scientist at Indiana University and the Pulitzer Prize-winning author of Gödel, Escher, Bach: An Eternal Golden Braid.

Popular Mechanics: You’ve said in the past that IBM’s Jeopardy-playing computer, Watson, isn’t deserving of the term artificial intelligence. Why?

Douglas Hofstadter: Well, artificial intelligence is a slippery term. It could refer to just getting machines to do things that seem intelligent on the surface, such as playing chess well or translating from one language to another on a superficial level — things that are impressive if you don’t look at the details. In that sense, we’ve already created what some people call artificial intelligence. But if you mean a machine that has real intelligence, that is thinking — that’s inaccurate. Watson is basically a text search algorithm connected to a database just like Google search. It doesn’t understand what it’s reading. In fact, read is the wrong word. It’s not reading anything because it’s not comprehending anything. Watson is finding text without having a clue as to what the text means. In that sense, there’s no intelligence there. It’s clever, it’s impressive, but it’s absolutely vacuous.

Do you think we’ll start seeing diminishing returns from a Watson-like approach to AI?

I can’t really predict that. But what I can say is that I’ve monitored Google Translate — which uses a similar approach — for many years. Google Translate is developing and it’s making progress because the developers are inventing new, clever ways of milking the quickness of computers and the vastness of its database. But it’s not making progress at all in the sense of understanding your text, and you can still see it falling flat on its face a lot of the time. And I know it’ll never produce polished [translated] text, because real translating involves understanding what is being said and then reproducing the ideas that you just heard in a different language. Translation has to do with ideas, it doesn’t have to do with words, and Google Translate is about words triggering other words.

So why are AI researchers so focused on building programs and computers that don’t do anything like thinking?

They’re not studying the mind and they’re not trying to find out the principles of intelligence, so research may not be the right word for what drives people in the field that today is called artificial intelligence. They’re doing product development.

I might say though, that 30 to 40 years ago, when the field was really young, artificial intelligence wasn’t about making money, and the people in the field weren’t driven by developing products. It was about understanding how the mind works and trying to get computers to do things that the mind can do. The mind is very fluid and flexible, so how do you get a rigid machine to do very fluid things? That’s a beautiful paradox and very exciting, philosophically. [Continue reading…]


Technological narcissism and the illusion of self-knowledge offered by the Quantified Self

“Know thyself” has been a maxim throughout the ages, rooted in the belief that wisdom and wise living demand we acquire self-knowledge.

As Shakespeare wrote:

This above all: to thine own self be true,
And it must follow, as the night the day,
Thou canst not then be false to any man.

New research on human feelings, however, seems to have the absurd implication that if you really want to know your inner being, you should probably carry around a mirror and pay close attention to your facial expressions. The researchers clearly believe that monitoring muscle contractions is a more reliable way of knowing what someone is feeling than using any kind of subjective measure. Reduced to this muscular view, it turns out — according to the research — that we only have four basic feelings.

Likewise, devotees of the Quantified Self seem to believe that it’s not really possible to know what it means to be alive unless one can be hooked up to and study the output from one or several digital devices.

In each of these cases we are witnessing a trend driven by technological development through which the self is externalized.

Thoreau warned that we have “become the tools of our tools,” but the danger lying beyond that is that we become our tools; that our sense of who we are becomes so pervasively mediated by devices that without these devices we conclude we are nothing.

Josh Cohen writes: With January over, the spirit of self-improvement in which you began the year can start to evaporate. Except now your feeble excuses are under assault from a glut of “self-tracking” devices and apps. Your weakness for saturated fats and alcohol, your troubled sleep and mood swings, your tendencies to procrastination, indecision and disorganisation — all your quirks and flaws can now be monitored and remedied with the help of mobile technology.

Technology offers solutions not only to familiar problems of diet, exercise and sleep, but to anxieties you weren’t even aware of. If you can’t resolve a moral dilemma, there’s an app that will solicit your friends’ advice. If you’re concerned about your toddler’s language development, there’s a small device that will measure the number and range of words she’s using against those of her young peers.

Quantified Self (QS) is a growing global movement selling a new form of wisdom, encapsulated in the slogan “self-knowledge through numbers”. Rooted in the American tech scene, it encourages people to monitor all aspects of their physical, emotional, cognitive, social, domestic and working lives. The wearable cameras that enable you to broadcast your life minute by minute; the nano-sensors that can be installed in any region of the body to track vital functions from blood pressure to cholesterol intake; the voice recorders that pick up the sound of your sleeping self or your baby’s babble—together, these devices can provide you with the means to regain control over your fugitive life.

This vision has traction at a time when our daily lives, as the Snowden leaks have revealed, are being lived in the shadow of state agencies, private corporations and terrorist networks — overwhelming yet invisible forces that leave us feeling powerless to maintain boundaries around our private selves. In a world where our personal data appears vulnerable to intrusion and exploitation, a movement that effectively encourages you to become your own spy is bound to resonate. Surveillance technologies will put us back in the centre of the lives from which they’d displaced us. Our authoritative command of our physiological and behavioural “numbers” can assure us that after all, no one knows us better than we do. [Continue reading…]


The importance of consilience in science

Paul Willis writes: Science is not a democracy. A consensus of evidence may be interesting, but technically it may not be significant. The thoughts of a majority of scientists don’t mean a hill of beans. It’s all about the evidence. The science is never settled.

These are refrains that I and other science communicators have been using over and over again when we turn to analysing debates and discussions based on scientific principles. I think we get torn between remaining true to the philosophical principles by which science is conducted and trying to make those principles familiar to an audience that probably does not understand them.

So let me introduce a concept that is all-too-often overlooked in science discussions, that can actually shed some light deep into the mechanisms of science and explain the anatomy of a scientific debate. It’s the phonically beautiful term ‘consilience’.

Consilience means to use several different lines of inquiry that converge on the same or similar conclusions. The more independent investigations you have that reach the same result, the more confidence you can have that the conclusion is correct. Moreover, if one independent investigation produces a result that is at odds with the consilience of several other investigations, that is an indication that the error probably lies in the methods of the outlying investigation, not in the conclusions of the consilience.
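
One rough way to see why this convergence carries so much weight is statistical. The short Python sketch below is my own illustration, not something from Willis's article: it treats each independent line of investigation as a noisy estimate of the same underlying quantity (all numbers invented), shows how the uncertainty of the combined result shrinks as independent estimates accumulate, and measures how far a lone discordant claim sits from that consensus.

```python
import numpy as np

# Illustrative numbers only: eight independent "lines of investigation",
# each measuring the same underlying quantity with independent errors.
rng = np.random.default_rng(1)

true_value = 100.0                                   # the quantity being measured
estimates = true_value + rng.normal(scale=5.0, size=8)

consensus = estimates.mean()
# The standard error of the combined estimate shrinks roughly as 1/sqrt(N),
# which is why many independent, agreeing results inspire so much confidence.
std_error = estimates.std(ddof=1) / np.sqrt(len(estimates))
print(f"consensus: {consensus:.1f} +/- {std_error:.1f}")

# A single new claim far from the consensus is more plausibly explained by an
# error in that one method than by every independent line being wrong at once.
discordant_claim = 60.0
distance = abs(discordant_claim - consensus) / std_error
print(f"discordant claim sits {distance:.0f} standard errors from the consensus")
```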

Let’s take an example to unpack this concept, an example where I first came across the term and it is a beautiful case of consilience at work. Charles Darwin’s On the Origin of Species is a masterpiece of consilience. Each chapter is a separate line of investigation and, within each chapter, there are numerous examples, investigations and experiments that all join together to reach the same conclusion: that life changes through time and that life has evolved on Earth. Take apart On the Origin of Species case by case and no single piece of evidence that Darwin mustered conclusively demonstrates that evolution is true. But add those cases back together and the consilience is clear: evidence from artificial breeding, palaeontology, comparative morphology and a host of other independent lines of investigation combine to confirm the same inescapable conclusion.

That was 1859. Since then yet more investigations have been added to the consilience for evolution. What’s more, these investigations within the biological and geological sciences have been joined with others from physics and chemistry as well as completely new areas of science such as genetics, radiometric dating and molecular biology. Each independent line of investigation builds the consilience that the world and the universe are extremely old and that life has evolved through unfathomable durations of time here on our home planet.

So, when a new line of investigation comes along claiming evidence and conclusions contrary to evolution, how can that be accommodated within the consilience? How does it relate to so many independent strains conjoined by a similar conclusion at odds with the newcomer? Can one piece of evidence overthrow such a huge body of work?

Such is the thinking of those pesky creationists who regularly come up with “Ah-Ha!” and “Gotcha!” factoids that apparently overturn, not just evolution, but the whole consilience of science. [Continue reading…]
