Simultaneously using mobile phones, laptops and other media devices could be changing the structure of our brains, according to new University of Sussex research.
A study published today (24 September) in PLOS ONE reveals that people who frequently use several media devices at the same time have lower grey-matter density in one particular region of the brain compared to those who use just one device occasionally.
The research supports earlier studies showing connections between high media-multitasking activity and poor attention in the face of distractions, along with emotional problems such as depression and anxiety.
But neuroscientists Kep Kee Loh and Dr Ryota Kanai point out that their study reveals a link rather than causality and that a long-term study needs to be carried out to understand whether high concurrent media usage leads to changes in the brain structure, or whether those with less-dense grey matter are more attracted to media multitasking. [Continue reading...]
Scientific American reports: Earth’s magnetic field, which protects the planet from huge blasts of deadly solar radiation, has been weakening over the past six months, according to data collected by a European Space Agency (ESA) satellite array called Swarm.
The biggest weak spots in the magnetic field — which extends 370,000 miles (600,000 kilometers) above the planet’s surface — have sprung up over the Western Hemisphere, while the field has strengthened over areas like the southern Indian Ocean, according to the magnetometers onboard the Swarm satellites — three separate satellites floating in tandem.
The scientists who conducted the study are still unsure why the magnetic field is weakening, but one likely reason is that Earth’s magnetic poles are getting ready to flip, said Rune Floberghagen, the ESA’s Swarm mission manager. In fact, the data suggest magnetic north is moving toward Siberia.
“Such a flip is not instantaneous, but would take many hundred if not a few thousand years,” Floberghagen told Live Science. “They have happened many times in the past.”
Scientists already know that magnetic north shifts. Once every few hundred thousand years the magnetic poles flip so that a compass would point south instead of north. While changes in magnetic field strength are part of this normal flipping cycle, data from Swarm have shown the field is starting to weaken faster than in the past. Previously, researchers estimated the field was weakening about 5 percent per century, but the new data revealed the field is actually weakening at 5 percent per decade, or 10 times faster than thought. As such, rather than the full flip occurring in about 2,000 years, as was predicted, the new data suggest it could happen sooner. [Continue reading...]
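The rate comparison in that excerpt can be sanity-checked on the back of an envelope. The sketch below is purely our illustration (it assumes a steady exponential decay, which the article itself does not claim): a field losing 5 percent per decade halves roughly ten times faster than one losing 5 percent per century.

```python
import math

# Illustrative sketch only: time for the field strength to halve,
# assuming a constant fractional loss per period compounds exponentially.
def halving_time_years(percent_loss, period_years):
    decay_per_period = -math.log(1 - percent_loss / 100)
    return math.log(2) / decay_per_period * period_years

old_estimate = halving_time_years(5, 100)  # 5% per century
new_estimate = halving_time_years(5, 10)   # 5% per decade
print(round(old_estimate), round(new_estimate))  # ~1351 vs ~135 years
```

Under those toy assumptions, the new measurement shortens every such timescale by a factor of ten, which is why the researchers now expect a flip sooner than the roughly 2,000 years previously predicted.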
Natalie Wolchover writes: One January afternoon five years ago, Princeton geologist Lincoln Hollister opened an email from a colleague he’d never met bearing the subject line, “Help! Help! Help!” Paul Steinhardt, a theoretical physicist and the director of Princeton’s Center for Theoretical Science, wrote that he had an extraordinary rock on his hands, one that he thought was natural but whose origin and formation he could not identify. Hollister had examined tons of obscure rocks over his five-decade career and agreed to take a look.
Originally a dense grain two or three millimeters across that had been ground down into microscopic fragments, the rock was a mishmash of lustrous metal and matte mineral of a yellowish hue. It reminded Hollister of something from Oregon called josephinite. He told Steinhardt that such rocks typically form deep underground at the boundary between Earth’s core and mantle or near the surface due to a particular weathering phenomenon. “Of course, all of that ended up being a false path,” said Hollister, 75. The more the scientists studied the rock, the stranger it seemed.
After five years, approximately 5,000 Steinhardt-Hollister emails and a treacherous journey to the barren arctic tundra of northeastern Russia, the mystery has only deepened. Today, Steinhardt, Hollister and 15 collaborators reported the curious results of a long and improbable detective story. Their findings, detailed in the journal Nature Communications, reveal new aspects of the solar system as it was 4.5 billion years ago: chunks of incongruous metal inexplicably orbiting the newborn sun, a collision of extraordinary magnitude, and the creation of new minerals, including an entire class of matter never before seen in nature. It’s a drama etched in the geochemistry of a truly singular rock. [Continue reading...]
Philip Ball writes: When the German physicist Arnold Sommerfeld assigned his most brilliant student a subject for his doctoral thesis in 1923, he admitted that “I would not have proposed a topic of this difficulty to any of my other pupils.” Those others included such geniuses as Wolfgang Pauli and Hans Bethe, yet for Sommerfeld the only one who was up to the challenge of this subject was Werner Heisenberg.
Heisenberg went on to be a key founder of quantum theory and was awarded the 1932 Nobel Prize in physics. He developed one of the first mathematical descriptions of this new and revolutionary discipline, discovered the uncertainty principle, and together with Niels Bohr engineered the “Copenhagen interpretation” of quantum theory, to which many physicists still adhere today.
The subject of Heisenberg’s doctoral dissertation, however, wasn’t quantum physics. It was harder than that. The 59-page calculation that he submitted to the faculty of the University of Munich in 1923 was titled “On the stability and turbulence of fluid flow.”
Sommerfeld had been contacted by the Isar Company of Munich, which was contracted to prevent the Isar River from flooding by building up its banks. The company wanted to know at what point the river flow changed from being smooth (the technical term is “laminar”) to being turbulent, beset with eddies. That question requires some understanding of what turbulence is. Heisenberg’s work on the problem was impressive—he solved the mathematical equations of flow at the point of the laminar-to-turbulent change—and it stimulated ideas for decades afterward. But he didn’t really crack it—he couldn’t construct a comprehensive theory of turbulence.
Heisenberg was not given to modesty, but it seems he had no illusions about his achievements here. One popular story goes that he once said, “When I meet God, I am going to ask him two questions. Why relativity? And why turbulence? I really believe he will have an answer for the first.”
It is probably an apocryphal tale. The same remark has been attributed to at least one other person: The British mathematician and expert on fluid flow, Horace Lamb, is said to have hoped that God might enlighten him on quantum electrodynamics and turbulence, saying that “about the former I am rather optimistic.”
You get the point: turbulence, a ubiquitous and eminently practical problem in the real world, is frighteningly hard to understand. [Continue reading...]
No, a ‘supercomputer’ did NOT pass the Turing Test for the first time and everyone should know better
Following numerous “reports” (i.e. numerous regurgitations of a press release from Reading University) on an “historic milestone in artificial intelligence” having been passed “for the very first time by supercomputer Eugene Goostman” at an event organized by Professor Kevin Warwick, Mike Masnick writes:
If you’ve spent any time at all in the tech world, you should automatically have red flags raised around that name. Warwick is somewhat infamous for his ridiculous claims to the press, which gullible reporters repeat without question. He’s been doing it for decades. All the way back in 2000, we were writing about all the ridiculous press he got for claiming to be the world’s first “cyborg” for implanting a chip in his arm. There was even a — since taken down — Kevin Warwick Watch website that mocked and categorized all of his media appearances in which gullible reporters simply repeated all of his nutty claims. Warwick had gone quiet for a while, but back in 2010, we wrote about how his lab was getting bogus press for claiming to have “the first human infected with a computer virus.” The Register has rightly referred to Warwick as both “Captain Cyborg” and a “media strumpet” and has long been chronicling his escapades in exaggerating bogus stories about the intersection of humans and computers.
Basically, any reporter should view extraordinary claims associated with Warwick with extreme caution. But that’s not what happened at all. Instead, as is all too typical with Warwick claims, the press went nutty over it, including publications that should know better.
Anyone can try having a “conversation” with Eugene Goostman.
If the strings of words it spits out give you the impression you’re talking to a human being, that’s probably an indication that you don’t spend enough time talking to human beings.
The New York Times reports: Scientists in the Netherlands have moved a step closer to overriding one of Albert Einstein’s most famous objections to the implications of quantum mechanics, which he described as “spooky action at a distance.”
In a paper published on Thursday in the journal Science, physicists at the Kavli Institute of Nanoscience at the Delft University of Technology reported that they were able to reliably teleport information between two quantum bits separated by three meters, or about 10 feet.
Quantum teleportation is not the “Star Trek”-style movement of people or things; rather, it involves transferring so-called quantum information — in this case what is known as the spin state of an electron — from one place to another without moving the physical matter to which the information is attached.
Classical bits, the basic units of information in computing, can have only one of two values — either 0 or 1. But quantum bits, or qubits, can simultaneously describe many values. They hold out both the possibility of a new generation of faster computing systems and the ability to create completely secure communication networks.
Moreover, the scientists are now closer to definitively proving Einstein wrong in his early disbelief in the notion of entanglement, in which particles separated by light-years can still appear to remain connected, with the state of one particle instantaneously affecting the state of another. [Continue reading...]
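As a loose illustration of the bit/qubit contrast described above (our own sketch, not the Delft team's formalism): a single qubit can be written as a pair of complex amplitudes, and a measurement collapses it to 0 or 1 with probabilities given by the squared magnitudes of those amplitudes.

```python
import math

# Minimal sketch (our illustration): a qubit as two amplitudes (a, b)
# with |a|^2 + |b|^2 = 1; measurement yields 0 with probability |a|^2
# and 1 with probability |b|^2.
def measure_probabilities(a, b):
    return abs(a) ** 2, abs(b) ** 2

# An equal superposition: unlike a classical bit, the state carries
# both outcomes at once until it is measured.
a = b = 1 / math.sqrt(2)
p0, p1 = measure_probabilities(a, b)
print(p0, p1)  # ~0.5 each
```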
Bruce Falconer writes: In July 2012, a commercial fishing charter called Ocean Pearl motored through the frigid waters of the North Pacific. It carried 100 tons of iron dust and a crew of 11, led by a tall and heavyset 62-year-old American named Russ George. Passing beyond Canada’s territorial limit, the vessel arrived at an area of swirling currents known as the Haida eddies. There, in an eddy that had been chosen for the experiment, George and his crew mixed their cargo of iron with seawater and pumped it into the ocean through a hose, turning the waters a cloudy red. In early August, the ship returned to port, where the crew loaded an additional 20 tons of iron. They dumped it near the same Haida eddy a few weeks later, bringing to an end the most audacious and, before long, notorious attempt yet undertaken by man to modify Earth’s climate.
The expedition was grand in its aims and obscure in its patronage. Funding George’s voyage was a village of Haida Indians on Haida Gwaii, a remote Canadian archipelago about 500 miles northwest of Vancouver. George and his business partners had gained the town’s support for a project of dumping iron dust into the ocean to stimulate the growth of a plankton bloom. The plankton would help feed starving salmon, upon which the Haida had traditionally depended for their livelihood, and also remove a million tons of carbon dioxide from the atmosphere. (In algae form, plankton, like all plants, absorbs CO2 through photosynthesis.) The intended result: a replenished fish population—and millions of dollars’ worth of “carbon credits” that could be sold on the international market.
Back on land, in Vancouver, George and his associates drafted a report on the expedition. It claimed that Ocean Pearl had seeded more than 3,800 square miles of barren waters, leaving in its wake “a verdant emerald sea lush with the growth of a hundred million tonnes of plankton.” According to the account, fin, sperm, and sei whales, rarely seen in the region, appeared in large numbers, along with killer whales, dolphins, schools of albacore tuna, and armies of night-feeding squid. Albatross, storm petrels, sooty shearwaters, and other seabirds had circled above the ship, while flocks of Brant geese came to rest on the water and drifted with the bloom.
But George did little to publicize these findings. Instead, he set about compiling the data in private, telling people that he intended to produce a precise estimate of the CO2 he had removed from the atmosphere and then invite an independent auditor to certify his claims.
If that was the plan, it quickly fell apart. In October 2012, the Guardian of London broke the news of George’s expedition, saying it “contravenes two UN conventions” against large-scale ocean fertilization experiments. Numerous media outlets followed up with alarmed, often savage, reports, some of which went so far as to label George a “rogue geoengineer” or “eco-terrorist.” Amid the uproar, Canadian environment minister Peter Kent accused George of “rogue science” and promised that any violation of the country’s environmental law would be “prosecuted to the full extent.”
George, for his part, spoke of media misrepresentation, and he stressed that he was engaged in cautious research. Amid the controversy, in an interview with Scientific American, he was asked whether his iron fertilization had worked. “We don’t know,” he answered. “The correct attitude is: ‘Data, speak to me.’ Do the work, get the data, let it speak to you and tell you what the facts might be.” While most commenters seemed to think George had gone too far, some expressed sympathy—or at least puzzled ambivalence. A Salon headline the following summer asked, “Does Russ George Deserve a Nobel Prize or a Prison Sentence?”
George’s efforts place him in the company of a small but growing group of people convinced that global warming can be halted only with the aid of dramatic intervention in our planet’s natural processes, an approach known as geoengineering. The fixes envisioned by geoengineers range from the seemingly trivial, like painting roads and roofs white to reflect solar radiation, to the extraterrestrial, like a proposal by one Indian physicist to use the explosive power of nuclear fusion to elongate Earth’s orbit by one or two percent, thus reducing solar intensity. (It would also add 5.5 days to the year.)
Because its methods tend to be both ambitious and untested, geoengineering is closely tied to the dynamics of alarm—feeding on it and causing it in equal measure. [Continue reading...]
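The orbital figure in that excerpt checks out arithmetically. As a sketch of the reasoning (our illustration, not the physicist's actual proposal), Kepler's third law makes the year length proportional to the orbital radius raised to the power 3/2, so a 1 percent larger orbit means roughly a 1.5 percent longer year:

```python
# Illustrative check via Kepler's third law: T is proportional to a**1.5,
# so stretching Earth's orbit by a fraction x lengthens the year by
# a factor of (1 + x)**1.5.
def extra_days(orbit_increase, year_days=365.25):
    return year_days * ((1 + orbit_increase) ** 1.5 - 1)

print(round(extra_days(0.01), 1))  # ≈ 5.5 extra days for a 1% wider orbit
```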
Evgeny Morozov writes: In “On What We Can Not Do,” a short and pungent essay published a few years ago, the Italian philosopher Giorgio Agamben outlined two ways in which power operates today. There’s the conventional type that seeks to limit our potential for self-development by restricting material resources and banning certain behaviors. But there’s also a subtler, more insidious type, which limits not what we can do but what we can not do. What’s at stake here is not so much our ability to do things but our capacity not to make use of that very ability.
While each of us can still choose not to be on Facebook, have a credit history or build a presence online, can we really afford not to do any of those things today? It was acceptable not to have a cellphone when most people didn’t have them; today, when almost everybody does and when our phone habits can even be used to assess whether we qualify for a loan, such acts of refusal border on the impossible.
For Agamben, it’s this double power “to be and to not be, to do and to not do” that makes us human. This active necessity to choose (and err) contributes to the development of individual faculties that shape our subjectivity. The tragedy of modern man, then, is that “he has become blind not to his capacities but to his incapacities, not to what he can do but to what he cannot, or can not, do.”
This blindness to the question of incapacities mars most popular books on recent advances in our ability to store, analyze and profit from vast amounts of data generated by our gadgets. (Our wherewithal not to call this phenomenon by the ugly, jargony name of Big Data seems itself to be under threat.) The two books under review, alas, are no exception.
In “The Naked Future,” Patrick Tucker, an editor at large for The Futurist magazine, surveys how this influx of readily available data will transform every domain of our existence, from improving our ability to predict earthquakes (thanks to the proliferation of sensors) to producing highly customized education courses that would tailor their content and teaching style, in real time, to the needs of individual students. His verdict: It’s all for the better.
Since most of us lead rather structured, regular lives — work, home, weekend — even a handful of data points (our location, how often we call our friends) proves useful in predicting what we may be doing a day or a year from now. “A flat tire on a Monday at 10 a.m. isn’t actually random. . . . We just don’t yet know how to model it,” Tucker writes.
Seeking to integrate data streams from multiple sources — our inboxes, our phones, our cars and, with its recent acquisition of a company that makes thermostats and smoke detectors, our bedrooms — a company like Google is well positioned not just to predict our future but also to detect just how much risk we take on every day, be it fire, a flat tire or a default on a loan. (Banks and insurance companies beware: You will be disrupted next!)
With so much predictive power, we may soon know the exact price of “preferring not to,” as a modern-day Bartleby might put it. [Continue reading...]
Luke Barnes and Geraint Lewis write: The recent BICEP2 observations – of swirls in the polarisation of the cosmic microwave background – have been proclaimed as many things, from evidence of the Big Bang and gravitational waves to something strange called the multiverse.
The multiverse theory holds that our universe is but one of a vast, variegated ensemble of universes. We don’t know how many pieces there are to the multiverse, but estimates suggest there may be squillions of them.
But (if they exist) there has not been enough time since our cosmic beginning for light from these other universes to reach us. They are beyond our cosmic horizon and thus in principle unobservable.
How, then, can cosmologists say they have seen evidence of them?
Unobservable entities aren’t necessarily out-of-bounds for science. For example, protons and neutrons are made of subatomic particles called quarks. While they cannot be observed directly, their existence and properties are inferred from the way particles behave when smashed together.
But there is no such luxury with the multiverse. No signals from other universes have ever bothered, or will ever bother, our telescopes.
While there is some debate about what actually makes a scientific theory, we should at least ask whether the multiverse theory is testable. Does it make predictions that we can test in a laboratory or with our telescopes? [Continue reading...]
Addy Pross writes: Biology is wondrously strange – so familiar, yet so strikingly different to physics and chemistry. We know where we are with inanimate matter. Ever since Isaac Newton, it has answered to a basically mechanical view of nature, blindly following its laws without regard for purposes. But could there be, as Immanuel Kant put it, a Newton of the blade of grass? Living things might be made of the same fundamental stuff as the rest of the material world – ‘dead’ atoms and molecules – but they do not behave in the same way at all. In fact, they seem so purposeful as to defy the materialist philosophy on which the rest of modern science was built.
Even after Charles Darwin, we continue to struggle with that difference. As any biologist will acknowledge, function and purpose remain central themes in the life sciences, though they have long been banished from the physical sciences. How, then, can living things be reconciled with our mechanical-mechanistic universe? This is a conceptual question, of course, but it has a historical dimension: how did life on Earth actually come about? How could it have? Both at the abstract level and in the particular story of our world, there seems to be a chasm between the animate and inanimate realms.
I believe that it is now possible to bridge that gap. [Continue reading...]
In December, and again in February, at the Google Bus blockades in San Francisco, one thing struck me forcefully: the technology corporation employees waiting for their buses were all staring so intently at their phones that they apparently didn’t notice the unusual things going on around them until their buses were surrounded. Sometimes I feel like I’m living in a science-fiction novel, because my region is so afflicted with people who stare at the tiny screens in their hands on trains, in restaurants, while crossing the street, and too often while driving. San Francisco is, after all, where director Phil Kaufman set the 1978 remake of Invasion of the Body Snatchers, the movie wherein a ferny spore-spouting form of alien life colonizes human beings so that they become zombie-like figures.
In the movies, such colonization took place secretly, or by force, or both: it was a war, and (once upon a time) an allegory for the Cold War and a possible communist takeover. Today, however — Hypercapitalism Invades! — we not only choose to carry around those mobile devices, but pay corporations hefty monthly fees to do so. In return, we get to voluntarily join the great hive of people being in touch all the time, so much so that human nature seems in the process of being remade, with the young immersed in a kind of contact that makes solitude seem like polar ice, something that’s melting away.
We got phones, and then Smart Phones, and then Angry Birds and a million apps — and a golden opportunity to be tracked all the time thanks to the GPS devices in those phones. Your cell phone is the shackle that chains you to corporate America (and potentially to government surveillance as well) and like the ankle bracelets that prisoners wear, it’s your monitor. It connects you to the Internet and so to the brave new world that has such men as Larry Ellison and Mark Zuckerberg in it. That world — maybe not so brave after all — is the subject of Astra Taylor’s necessary, alarming, and exciting new book, The People’s Platform: Taking Back Power and Culture in the Digital Age.
The Internet arose with little regulation, little public decision-making, and a whole lot of fantasy about how it was going to make everyone powerful and how everything would be free. Free, as in unregulated and open, got confused with free, as in not getting paid, and somehow everyone from Facebook to Arianna Huffington created massively lucrative sites (based on advertising dollars) in which the people who made the content went unpaid. Just as Russia woke up with oil oligarchs spreading like mushrooms after a night’s heavy rain, so we woke up with new titans of the tech industry throwing their billionaire weight around. The Internet turns out to be a superb mechanism for consolidating money and power and control, even as it gives toys and minor voices to the rest of us.
As Taylor writes in her book, “The online sphere inspires incessant talk of gift economies and public-spiritedness and democracy, but commercialization and privatization and inequality lurk beneath the surface. This contradiction is captured in a single word: ‘open,’ a concept capacious enough to contain both the communal and capitalistic impulses central to Web 2.0.” And she goes on to discuss, “the tendency of open systems to amplify inequality — and new media thinkers’ glib disregard for this fundamental characteristic.” Part of what makes her book exceptional, in fact, is its breadth. It reviews much of the existing critique of the Internet and connects the critiques of specific aspects of it into an overview of how a phenomenon supposed to be wildly democratic has become wildly not that way at all.
And at a certain juncture, she turns to gender. Though far from the only weak point of the Internet as an egalitarian space — after all, there’s privacy (lack of), the environment (massive server farms), and economics (tax cheats, “content providers” like musicians fleeced) — gender politics, as she shows in today’s post adapted from her book, is one of the most spectacular problems online. Let’s imagine this as science fiction: a group of humans apparently dissatisfied with how things were going on Earth — where women were increasing their rights, representation, and participation — left our orbit and started their own society on their own planet. The new planet wasn’t far away or hard to get to (if you could afford the technology): it was called the Internet. We all know it by name; we all visit it; but we don’t name the society that dominates it much.
Taylor does: the dominant society, celebrating itself and pretty much silencing everyone else, makes the Internet bear a striking resemblance to Congress in 1850 or a gentlemen’s club (minus any gentleness). It’s a gated community, and as Taylor describes today, the security detail is ferocious, patrolling its borders by trolling and threatening dissident voices, and just having a female name or being identified as female is enough to become a target of hate and threats.
Early this year, a few essays were published on Internet misogyny that were so compelling I thought 2014 might be the year we revisit these online persecutions, the way that we revisited rape in 2013, thanks to the Steubenville and New Delhi assault cases of late 2012. But the subject hasn’t (yet) quite caught fire, and so not much gets said and less gets done about this dynamic new machinery for privileging male and silencing female voices. Which is why we need to keep examining and discussing this, as well as the other problems of the Internet. And why you need to read Astra Taylor’s book. This excerpt is part of her diagnosis of the problems; the book ends with ideas about a cure. Rebecca Solnit
Open systems and glass ceilings
The disappearing woman and life on the internet
By Astra Taylor
The Web is regularly hailed for its “openness” and that’s where the confusion begins, since “open” in no way means “equal.” While the Internet may create space for many voices, it also reflects and often amplifies real-world inequities in striking ways.
An elaborate system organized around hubs and links, the Web has a surprising degree of inequality built into its very architecture. Its traffic, for instance, tends to be distributed according to “power laws,” which follow what’s known as the 80/20 rule — 80% of a desirable resource goes to 20% of the population.
In fact, as anyone knows who has followed the histories of Google, Apple, Amazon, and Facebook, now among the biggest companies in the world, the Web is increasingly a winner-take-all, rich-get-richer sort of place, which means the disparate percentages in those power laws are only likely to look uglier over time.
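The "80/20" figure above has a tidy closed form. As a sketch of the arithmetic (our illustration; the essay does not derive this), for a Pareto distribution with tail index alpha, the share of a resource held by the top fraction p of the population is p raised to the power (1 - 1/alpha), and alpha of about 1.16 reproduces the classic 80/20 split:

```python
def top_share(p, alpha):
    """Fraction of the total resource held by the richest fraction p,
    for a Pareto distribution with tail index alpha."""
    return p ** (1 - 1 / alpha)

# alpha ~= 1.16 is the value that yields the classic 80/20 rule
print(f"{top_share(0.20, 1.16):.0%}")  # ~80%

# A smaller alpha means a heavier tail: a more winner-take-all split,
# which is the direction the essay argues the Web is heading.
print(f"{top_share(0.20, 1.05):.0%}")
```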
Twice in my life — in the 1960s and the post-9/11 years — I was suddenly aware of clicks and other strange noises on my phone. In both periods, I’ve wondered what the story was, and then made self-conscious jokes with whoever was on the other end of the line about those who might (or might not) be listening in. Twice in my life I’ve felt, up close and personal, that ominous, uncomfortable, twitchy sense of being overheard, without ever knowing if it was a manifestation of the paranoia of the times or of realism — or perhaps of both.
I’m conceptually outraged by mass surveillance, but generally my personal attitude has always been: Go ahead. Read my email, listen to my phone calls, follow my web searches, check out my location via my cell phone. My tweets don’t exist — but if they did, I’d say have at ‘em. I don’t give a damn.
And in some sense, I don’t, even though everyone, including me, is embarrassed by something. Everyone has said something about someone that they would rather not have made public (or perhaps would rather not have said at all). Everyone has some thing — or sometimes many things — they would rather keep to themselves.
Increasingly, however, as the U.S. surveillance state grows ever more pervasive, domestically and globally, as the corporate version of the same expands exponentially, as prying “eyes” and “ears” of every technological variety proliferate, the question of who exactly we are arises. What are we without privacy, without a certain kind of unknowability? What are we when “our” information is potentially anyone’s information? We may soon find out. A recent experiment by two Stanford University graduate students who gathered just a few months’ worth of phone metadata on 546 volunteers has, for instance, made mincemeat of President Obama’s claim that the NSA’s massive version of metadata collection “is not looking at people’s names and they’re not looking at content.” Using only the phone metadata they got, the Stanford researchers “inferred sensitive information about people’s lives, including: neurological and heart conditions, gun ownership, marijuana cultivation, abortion, and participation in Alcoholics Anonymous.”
And that’s just a crude version of what the future holds for all of us. There are various kinds of extinctions. That superb environmental reporter Elizabeth Kolbert has just written a powerful book, The Sixth Extinction, about the more usual (if horrifying) kind. Our developing surveillance world may offer us an example of another kind of extinction: of what we once knew as the private self. If you want to be chilled to the bone when it comes to this, check out today’s stunning report by the ACLU’s Catherine Crump and Matthew Harwood on where the corporate world is taking your identity. Tom Engelhardt
Estimates vary, but by 2020 there could be over 30 billion devices connected to the Internet. Once dumb, they will have smartened up thanks to sensors and other technologies embedded in them and, thanks to your machines, your life will quite literally have gone online.
The implications are revolutionary. Your smart refrigerator will keep an inventory of food items, noting when they go bad. Your smart thermostat will learn your habits and adjust the temperature to your liking. Smart lights will illuminate dangerous parking garages, even as they keep an “eye” out for suspicious activity.
Techno-evangelists have a nice catchphrase for this future utopia of machines and the never-ending stream of information, known as Big Data, it produces: the Internet of Things. So abstract. So inoffensive. Ultimately, so meaningless.
A future Internet of Things does have the potential to offer real benefits, but the dark side of that seemingly shiny coin is this: companies will increasingly know all there is to know about you. Most people are already aware that virtually everything a typical person does on the Internet is tracked. In the not-too-distant future, however, real space will be increasingly like cyberspace, thanks to our headlong rush toward that Internet of Things. With the rise of the networked device, what people do in their homes, in their cars, in stores, and within their communities will be monitored and analyzed in ever more intrusive ways by corporations and, by extension, the government.
And one more thing: in cyberspace it is at least theoretically possible to log off. In your own well-wired home, there will be no “opt out.”
The New York Times reports: Microsoft has lost customers, including the government of Brazil.
IBM is spending more than a billion dollars to build data centers overseas to reassure foreign customers that their information is safe from prying eyes in the United States government.
And tech companies abroad, from Europe to South America, say they are gaining customers that are shunning United States providers, suspicious because of the revelations by Edward J. Snowden that tied these providers to the National Security Agency’s vast surveillance program.
Even as Washington grapples with the diplomatic and political fallout of Mr. Snowden’s leaks, the more urgent issue, companies and analysts say, is economic. Technology executives, including Mark Zuckerberg of Facebook, raised the issue when they went to the White House on Friday for a meeting with President Obama.
It is impossible to see now the full economic ramifications of the spying disclosures — in part because most companies are locked in multiyear contracts — but the pieces are beginning to add up as businesses question the trustworthiness of American technology products. [Continue reading...]
Philip Ball writes: For centuries, scientists studied light to comprehend the visible world. Why are things colored? What is a rainbow? How do our eyes work? And what is light itself? These are questions that have preoccupied scientists and philosophers since the time of Aristotle, including Roger Bacon, Isaac Newton, Michael Faraday, Thomas Young, and James Clerk Maxwell.
But in the late 19th century all that changed, and it was largely Maxwell’s doing. This was the period in which the whole focus of physics — then still emerging as a distinct scientific discipline — shifted from the visible to the invisible. Light itself was instrumental to that change. Not only were the components of light invisible “fields,” but light was revealed as merely a small slice of a rainbow extending far into the unseen.
Physics has never looked back. Today its theories and concepts are concerned largely with invisible entities: not only unseen force fields and insensible rays but particles too small to see even with the most advanced microscopes. We now know that our everyday perception grants us access to only a tiny fraction of reality. Telescopes responding to radio waves, infrared radiation, and X-rays have vastly expanded our view of the universe, while electron microscopes, X-ray beams, and other fine probes of nature’s granularity have unveiled the microworld hidden beyond our visual acuity. Theories at the speculative forefront of physics flesh out this unseen universe with parallel worlds and with mysterious entities named for their very invisibility: dark matter and dark energy.
This move beyond the visible has become a fundamental part of science’s narrative. But it’s a more complicated shift than we often appreciate. Making sense of what is unseen — of what lies “beyond the light” — has a much longer history in human experience. Before science had the means to explore that realm, we had to make do with stories that became enshrined in myth and folklore. Those stories aren’t banished as science advances; they are simply reinvented. Scientists working at the forefront of the invisible will always be confronted with gaps in knowledge, understanding, and experimental capability. In the face of those limits, they draw unconsciously on the imagery of the old stories. This is a necessary part of science, and these stories can sometimes suggest genuinely productive scientific ideas. But the danger is that we will start to believe them at face value, mistaking them for theories.
A backward glance at the history of the invisible shows how the narratives and tropes of myth and folklore can stimulate science, while showing that the truth will probably turn out to be far stranger and more unexpected than these old stories can accommodate. [Continue reading...]
William J Broad writes: American science, long a source of national power and pride, is increasingly becoming a private enterprise.
In Washington, budget cuts have left the nation’s research complex reeling. Labs are closing. Scientists are being laid off. Projects are being put on the shelf, especially in the risky, freewheeling realm of basic research. Yet from Silicon Valley to Wall Street, science philanthropy is hot, as many of the richest Americans seek to reinvent themselves as patrons of social progress through science research.
The result is a new calculus of influence and priorities that the scientific community views with a mix of gratitude and trepidation.
“For better or worse,” said Steven A. Edwards, a policy analyst at the American Association for the Advancement of Science, “the practice of science in the 21st century is becoming shaped less by national priorities or by peer-review groups and more by the particular preferences of individuals with huge amounts of money.”
They have mounted a private war on disease, with new protocols that break down walls between academia and industry to turn basic discoveries into effective treatments. They have rekindled traditions of scientific exploration by financing hunts for dinosaur bones and giant sea creatures. They are even beginning to challenge Washington in the costly game of big science, with innovative ships, undersea craft and giant telescopes — as well as the first private mission to deep space.
The new philanthropists represent the breadth of American business, people like Michael R. Bloomberg, the former New York mayor (and founder of the media company that bears his name), James Simons (hedge funds) and David H. Koch (oil and chemicals), among hundreds of wealthy donors. Especially prominent, though, are some of the boldest-faced names of the tech world, among them Bill Gates (Microsoft), Eric E. Schmidt (Google) and Lawrence J. Ellison (Oracle).
This is philanthropy in the age of the new economy — financed with its outsize riches, practiced according to its individualistic, entrepreneurial creed. The donors are impatient with the deliberate, and often politicized, pace of public science, they say, and willing to take risks that government cannot or simply will not consider.
Yet that personal setting of priorities is precisely what troubles some in the science establishment. Many of the patrons, they say, are ignoring basic research — the kind that investigates the riddles of nature and has produced centuries of breakthroughs, even whole industries — for a jumble of popular, feel-good fields like environmental studies and space exploration.
As the power of philanthropic science has grown, so has the pitch, and the edge, of the debate. Nature, a family of leading science journals, has published a number of wary editorials, one warning that while “we applaud and fully support the injection of more private money into science,” the financing could also “skew research” toward fields more trendy than central.
“Physics isn’t sexy,” William H. Press, a White House science adviser, said in an interview. “But everybody looks at the sky.”
Fundamentally at stake, the critics say, is the social contract that cultivates science for the common good. They worry that the philanthropic billions tend to enrich elite universities at the expense of poor ones, while undermining political support for federally sponsored research and its efforts to foster a greater diversity of opportunity — geographic, economic, racial — among the nation’s scientific investigators.
Historically, disease research has been particularly prone to unequal attention along racial and economic lines. A look at major initiatives suggests that the philanthropists’ war on disease risks widening that gap, as a number of the campaigns, driven by personal adversity, target illnesses that predominantly afflict white people — like cystic fibrosis, melanoma and ovarian cancer. [Continue reading...]
Ars Technica reports: The Universe is incredibly regular. The variation of the cosmos’ temperature across the entire sky is tiny: a few millionths of a degree, no matter which direction you look. Yet the same light from the very early cosmos that reveals the Universe’s evenness also tells astronomers a great deal about the conditions that gave rise to irregularities like stars, galaxies, and (incidentally) us.
That light is the cosmic microwave background, and it provides some of the best knowledge we have about the structure, content, and history of the Universe. But it also contains a few mysteries: on very large scales, the cosmos seems to have a certain lopsidedness. That slight asymmetry is reflected in temperature fluctuations much larger than any galaxy, aligned on the sky in a pattern facetiously dubbed “the axis of evil.”
The lopsidedness is real, but cosmologists are divided over whether it reveals anything meaningful about the fundamental laws of physics. The fluctuations are sufficiently small that they could arise from random chance. We have just one observable Universe, but nobody sensible believes we can see all of it. With a sufficiently large cosmos beyond the reach of our telescopes, the rest of the Universe may balance the oddity that we can see, making it a minor, local variation.
However, if the asymmetry can’t be explained away so simply, it could indicate that some new physical mechanisms were at work in the early history of the Universe. As Amanda Yoho, a graduate student in cosmology at Case Western Reserve University, told Ars, “I think the alignments, in conjunction with all of the other large angle anomalies, must point to something we don’t know, whether that be new fundamental physics, unknown astrophysical or cosmological sources, or something else.” [Continue reading...]