Category Archives: Science/Technology

Rosetta the comet-chasing spacecraft wakes up

The Guardian reports: In 2004, the European Space Agency launched the Rosetta probe on an audacious mission to chase down a comet and place a robot on its surface. For nearly three years Rosetta had been hurtling through space in a state of hibernation. On Monday, it awoke.

The radio signal from Rosetta came from 800m kilometres away, a distance made hardly more conceivable by its proximity to Jupiter. The signal appeared on a computer screen as a tremulous green spike, but it meant the world – perhaps the solar system – to the scientists and engineers gathered at European Space Operations Centre in Darmstadt.

In a time when every spacecraft worth its salt has a Twitter account, the inevitable message followed from @Esa_Rosetta. It was brief and joyful: “Hello, world!”.

Speaking to the assembled crowd at Darmstadt, Matt Taylor, project scientist on the Rosetta mission, said: “Now it’s up to us to do the work we’ve promised to do.”

Just 10 minutes before, he’d been facing an uncertain future career. If the spacecraft had not woken up, there would have been no science to do and the role of project scientist would have been redundant.

The comet hunter had been woken by an internal alarm clock at 10am UK time but only after several hours of warming up its instruments and orientating towards Earth could it send a message home.

In the event, the missive was late. Taylor had been hiding his nerves well, even joking about the wait on Twitter, but when the clock passed 19:00 CET, making the signal at least 15 minutes late, the mood changed. ESA scientists and engineers started rocking on their heels, clutching their arms around themselves, and stopping the banter that had helped pass the time. Taylor himself sat down, and seemed to withdraw.

Then the flood of relief when the blip on the graph appeared. “I told you it would work,” said Taylor with a grin.

The successful rousing of the distant probe marks a crucial milestone in a mission that is more spectacular and ambitious than any the European Space Agency has conceived. The €1bn, car-sized spacecraft will now close in on a comet, orbit around it, and send down a lander, called Philae, the first time such a feat has been attempted.

The comet, 67P/Churyumov-Gerasimenko, is 4km wide, or roughly the size of Mont Blanc. That is big enough to study, but too measly to have a gravitational field strong enough to hold the lander in place. Instead, the box of sensors on legs will latch on to the comet by firing an explosive harpoon the moment it lands, and twisting ice screws into its surface.

To my mind, the long-standing belief that comets may have had a role in the origin of life on Earth seems even more speculative than Jeremy England’s idea that it could have arisen as a result of physical processes originating on this planet’s surface.

As much as anything, what seems so extraordinary about a project such as Rosetta is its accomplishment as a feat of navigation (assuming that it does in fact make its planned rendezvous with the comet). It’s like aiming an arrow at a speck of dust on a moving target in the dark.

What it’s like when the FBI asks you to backdoor your software

SecurityWatch: At a recent RSA Security Conference, Nico Sell was on stage announcing that her company — Wickr — was making drastic changes to ensure its users’ security. She said that the company would switch from RSA encryption to elliptic curve encryption, and that the service wouldn’t have a backdoor for anyone.

As she left the stage, before she’d even had a chance to take her microphone off, a man approached her and introduced himself as an agent with the Federal Bureau of Investigation. He then proceeded to “casually” ask if she’d be willing to install a backdoor into Wickr that would allow the FBI to retrieve information.

This encounter, and the agent’s casual demeanor, is apparently business as usual as intelligence and law enforcement agencies seek to gain greater access into protected communication systems. Since her encounter with the agent at RSA, Sell says it’s a story she’s heard again and again. “It sounds like that’s how they do it now,” she told SecurityWatch. “Always casual, testing, because most people would say yes.” [Continue reading…]

Forget artificial intelligence. It’s artificial idiocy we need to worry about

Tom Chatfield writes: Massive, inconceivable numbers are commonplace in conversations about computers. The exabyte, a one followed by 18 zeroes worth of bits; the petaflop, one quadrillion calculations performed in a single second. Beneath the surface of our lives churns an ocean of information, from whose depths answers and optimisations ascend like munificent kraken.

This is the much-hyped realm of “big data”: unprecedented quantities of information generated at unprecedented speed, in unprecedented variety.

From particle physics to predictive search and aggregated social media sentiments, we reap its benefits across a broadening gamut of fields. We agonise about over-sharing while the numbers themselves tick upwards. Mostly, though, we fail to address a handful of questions more fundamental even than privacy. What are machines good at; what are they less good at; and when are their answers worse than useless?

Consider cats. As commentators like the American psychologist Gary Marcus have noted, it’s extremely difficult to teach a computer to recognise cats. And that’s not for want of trying. Back in the summer of 2012, Google fed 10 million feline-featuring images (there’s no shortage online) into a massively powerful custom-built system. The hope was that the alchemy of big data would do for images what it has already done for machine translation: that an algorithm could learn from a sufficient number of examples to approximate accurate solutions to the question “what is that?”

Sadly, cats proved trickier than words. Although the system did develop a rough measure of “cattiness”, it struggled with variations in size, positioning, setting and complexity. Once expanded to encompass 20,000 potential categories of object, the identification process managed just 15.8% accuracy: a huge improvement on previous efforts, but hardly a new digital dawn. [Continue reading…]
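
The “learning from examples” approach described above is easier to see in miniature. The sketch below is emphatically not Google’s system, which used a massively powerful custom-built setup trained on millions of real images; it is a deliberately crude, hypothetical nearest-centroid classifier over made-up feature vectors, meant only to show how a “cattiness” score can be learned from many examples and why it degrades as the variation in those examples grows.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data: each "image" is just a feature vector. A real system would
# use pixels or learned features; random blobs are enough to show the mechanics.
def make_examples(center, n, spread):
    return center + rng.normal(scale=spread, size=(n, center.size))

dim = 64
cat_center, dog_center = rng.normal(size=dim), rng.normal(size=dim)
cats = make_examples(cat_center, 1000, spread=1.0)
dogs = make_examples(dog_center, 1000, spread=1.0)

# "Training": average each class's examples into a prototype.
cat_proto, dog_proto = cats.mean(axis=0), dogs.mean(axis=0)

def cattiness(x):
    # Higher when x sits closer to the cat prototype than to the dog prototype.
    return np.linalg.norm(x - dog_proto) - np.linalg.norm(x - cat_proto)

# Test on fresh examples with far more variation ("size, positioning, setting").
test_cats = make_examples(cat_center, 500, spread=6.0)
test_dogs = make_examples(dog_center, 500, spread=6.0)
accuracy = (np.mean([cattiness(x) > 0 for x in test_cats]) +
            np.mean([cattiness(x) <= 0 for x in test_dogs])) / 2
print(f"accuracy under higher variation: {accuracy:.2%}")
```

Add more categories and more variation and the prototypes blur into one another: a toy analogue of the difficulty the article describes, not an explanation of Google’s actual results.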

The intelligent plant

Michael Pollan writes: In 1973, a book claiming that plants were sentient beings that feel emotions, prefer classical music to rock and roll, and can respond to the unspoken thoughts of humans hundreds of miles away landed on the New York Times best-seller list for nonfiction. “The Secret Life of Plants,” by Peter Tompkins and Christopher Bird, presented a beguiling mashup of legitimate plant science, quack experiments, and mystical nature worship that captured the public imagination at a time when New Age thinking was seeping into the mainstream. The most memorable passages described the experiments of a former C.I.A. polygraph expert named Cleve Backster, who, in 1966, on a whim, hooked up a galvanometer to the leaf of a dracaena, a houseplant that he kept in his office. To his astonishment, Backster found that simply by imagining the dracaena being set on fire he could make it rouse the needle of the polygraph machine, registering a surge of electrical activity suggesting that the plant felt stress. “Could the plant have been reading his mind?” the authors ask. “Backster felt like running into the street and shouting to the world, ‘Plants can think!’ ”

Backster and his collaborators went on to hook up polygraph machines to dozens of plants, including lettuces, onions, oranges, and bananas. He claimed that plants reacted to the thoughts (good or ill) of humans in close proximity and, in the case of humans familiar to them, over a great distance. In one experiment designed to test plant memory, Backster found that a plant that had witnessed the murder (by stomping) of another plant could pick out the killer from a lineup of six suspects, registering a surge of electrical activity when the murderer was brought before it. Backster’s plants also displayed a strong aversion to interspecies violence. Some had a stressful response when an egg was cracked in their presence, or when live shrimp were dropped into boiling water, an experiment that Backster wrote up for the International Journal of Parapsychology, in 1968.

In the ensuing years, several legitimate plant scientists tried to reproduce the “Backster effect” without success. Much of the science in “The Secret Life of Plants” has been discredited. But the book had made its mark on the culture. Americans began talking to their plants and playing Mozart for them, and no doubt many still do. This might seem harmless enough; there will probably always be a strain of romanticism running through our thinking about plants. (Luther Burbank and George Washington Carver both reputedly talked to, and listened to, the plants they did such brilliant work with.) But in the view of many plant scientists “The Secret Life of Plants” has done lasting damage to their field. According to Daniel Chamovitz, an Israeli biologist who is the author of the recent book “What a Plant Knows,” Tompkins and Bird “stymied important research on plant behavior as scientists became wary of any studies that hinted at parallels between animal senses and plant senses.” Others contend that “The Secret Life of Plants” led to “self-censorship” among researchers seeking to explore the “possible homologies between neurobiology and phytobiology”; that is, the possibility that plants are much more intelligent and much more like us than most people think—capable of cognition, communication, information processing, computation, learning, and memory.

The quotation about self-censorship appeared in a controversial 2006 article in Trends in Plant Science proposing a new field of inquiry that the authors, perhaps somewhat recklessly, elected to call “plant neurobiology.” The six authors—among them Eric D. Brenner, an American plant molecular biologist; Stefano Mancuso, an Italian plant physiologist; František Baluška, a Slovak cell biologist; and Elizabeth Van Volkenburgh, an American plant biologist—argued that the sophisticated behaviors observed in plants cannot at present be completely explained by familiar genetic and biochemical mechanisms. Plants are able to sense and optimally respond to so many environmental variables—light, water, gravity, temperature, soil structure, nutrients, toxins, microbes, herbivores, chemical signals from other plants—that there may exist some brainlike information-processing system to integrate the data and coördinate a plant’s behavioral response. The authors pointed out that electrical and chemical signalling systems have been identified in plants which are homologous to those found in the nervous systems of animals. They also noted that neurotransmitters such as serotonin, dopamine, and glutamate have been found in plants, though their role remains unclear.

Hence the need for plant neurobiology, a new field “aimed at understanding how plants perceive their circumstances and respond to environmental input in an integrated fashion.” The article argued that plants exhibit intelligence, defined by the authors as “an intrinsic ability to process information from both abiotic and biotic stimuli that allows optimal decisions about future activities in a given environment.” Shortly before the article’s publication, the Society for Plant Neurobiology held its first meeting, in Florence, in 2005. A new scientific journal, with the less tendentious title Plant Signaling & Behavior, appeared the following year.

Depending on whom you talk to in the plant sciences today, the field of plant neurobiology represents either a radical new paradigm in our understanding of life or a slide back down into the murky scientific waters last stirred up by “The Secret Life of Plants.” Its proponents believe that we must stop regarding plants as passive objects—the mute, immobile furniture of our world—and begin to treat them as protagonists in their own dramas, highly skilled in the ways of contending in nature. They would challenge contemporary biology’s reductive focus on cells and genes and return our attention to the organism and its behavior in the environment. It is only human arrogance, and the fact that the lives of plants unfold in what amounts to a much slower dimension of time, that keep us from appreciating their intelligence and consequent success. Plants dominate every terrestrial environment, composing ninety-nine per cent of the biomass on earth. By comparison, humans and all the other animals are, in the words of one plant neurobiologist, “just traces.” [Continue reading…]

Most scientific research data from the 1990s is lost forever

Smithsonian.com: One of the foundations of the scientific method is the reproducibility of results. In a lab anywhere around the world, a researcher should be able to study the same subject as another scientist and reproduce the same data, or analyze the same data and notice the same patterns.

This is why the findings of a study published today in Current Biology are so concerning. When a group of researchers tried to email the authors of 516 biological studies published between 1991 and 2011 and ask for the raw data, they were dismayed to find that more than 90 percent of the oldest data (from papers written more than 20 years ago) were inaccessible. In total, even including papers published as recently as 2011, they were only able to track down the data for 23 percent.

“Everybody kind of knows that if you ask a researcher for data from old studies, they’ll hem and haw, because they don’t know where it is,” says Timothy Vines, a zoologist at the University of British Columbia, who led the effort. “But there really hadn’t ever been systematic estimates of how quickly the data held by authors actually disappears.”

To make their estimate, his group chose a type of data that’s been relatively consistent over time—anatomical measurements of plants and animals—and dug up between 25 and 40 papers for each odd year during the period that used this sort of data, to see if they could hunt down the raw numbers.

A surprising amount of their inquiries were halted at the very first step: for 25 percent of the studies, active email addresses couldn’t be found, with defunct addresses listed on the paper itself and web searches not turning up any current ones. For another 38 percent of studies, their queries led to no response. Another 7 percent of the data sets were lost or inaccessible.

“Some of the time, for instance, it was saved on three-and-a-half inch floppy disks, so no one could access it, because they no longer had the proper drives,” Vines says. Because the basic idea of keeping data is so that it can be used by others in future research, this sort of obsolescence essentially renders the data useless. [Continue reading…]

How computers are making people stupid

The pursuit of artificial intelligence has been driven by the assumption that if human intelligence can be replicated or surpassed by machines, then this accomplishment will in various ways serve the human good. At the same time, thanks to the technophobia promoted in some dystopian science fiction, there is a popular fear that if machines become smarter than people we will end up becoming their slaves.

It turns out that even if there are some irrational fears wrapped up in technophobia, there are good reasons to regard computing devices as a threat to human intelligence.

It’s not that we are creating machines that harbor evil designs to take over the world, but simply that each time we delegate a function of the brain to an external piece of circuitry, our mental faculties inevitably atrophy.

“Use it or lose it” applies just as much to the brain as it does to any other part of the body.

Carolyn Gregoire writes: Take a moment to think about the last time you memorized someone’s phone number. Was it way back when, perhaps circa 2001? And when was the last time you were at a dinner party or having a conversation with friends, when you whipped out your smartphone to Google the answer to someone’s question? Probably last week.

Technology changes the way we live our daily lives, the way we learn, and the way we use our faculties of attention — and a growing body of research has suggested that it may have profound effects on our memories (particularly the short-term, or working, memory), altering and in some cases impairing its function.

The implications of a poor working memory for our brain functioning and overall intelligence are difficult to overestimate.

“The depth of our intelligence hinges on our ability to transfer information from working memory, the scratch pad of consciousness, to long-term memory, the mind’s filing system,” Nicholas Carr, author of The Shallows: What The Internet Is Doing To Our Brains, wrote in Wired in 2010. “When facts and experiences enter our long-term memory, we are able to weave them into the complex ideas that give richness to our thought.”

While our long-term memory has a nearly unlimited capacity, the short-term memory has more limited storage, and that storage is very fragile. “A break in our attention can sweep its contents from our mind,” Carr explains.

Meanwhile, new research has found that taking photos — an increasingly ubiquitous practice in our smartphone-obsessed culture — actually hinders our ability to remember that which we’re capturing on camera.

Concerned about premature memory loss? You probably should be. Here are five things you should know about the way technology is affecting your memory.

1. Information overload makes it harder to retain information.

Even a single session of Internet usage can make it more difficult to file away information in your memory, says Erik Fransén, computer science professor at Sweden’s KTH Royal Institute of Technology. And according to Tony Schwartz, productivity expert and author of The Way We’re Working Isn’t Working, most of us aren’t able to effectively manage the overload of information we’re constantly bombarded with. [Continue reading…]

As I pointed out in a recent post, the externalization of intelligence long preceded the creation of smartphones and personal computers. Indeed, it goes all the way back to the beginning of civilization, when we first learned how to transform language into a material form as the written word, thereby creating a substitute for memory.

Plato foresaw the consequences of writing.

In Phaedrus, he describes an exchange between the god Thamus, king and ruler of all Egypt, and the god Theuth, who has invented writing. Theuth, who is very proud of what he has created, says: “This invention, O king, will make the Egyptians wiser and will improve their memories; for it is an elixir of memory and wisdom that I have discovered.” But Thamus points out that while one man has the ability to invent, the ability to judge an invention’s usefulness or harmfulness belongs to another.

If men learn this, it will implant forgetfulness in their souls; they will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks. What you have discovered is a recipe not for memory, but for reminder. And it is no true wisdom that you offer your disciples, but only its semblance, for by telling them of many things without teaching them you will make them seem to know much, while for the most part they know nothing, and as men filled, not with wisdom, but with the conceit of wisdom, they will be a burden to their fellows.

Bedazzled by our ingenuity and its creations, we are fast forgetting the value of this quality that can never be implanted in a machine (or a text): wisdom.

Google’s plan to prolong human suffering

The idiots in Silicon Valley — many of whom aren’t yet old enough to have experienced adventures of aging like getting a colonoscopy — seem to picture “life-extension” as though it means more time to improve one’s tennis strokes in a future turned into a never-ending vacation. But what life extension will more likely mean is prolonged infirmity.

If it were possible to build an economy around “health care” — which would more accurately be called disease management — then the prospect of a perpetually expanding population of the infirm might look like a golden business opportunity. But what we’re really looking at is an economy built on false promises.

Daniel Callahan writes: This fall Google announced that it would venture into territory far removed from Internet search. Through a new company, Calico, it will be “tackling” the “challenge” of aging.

The announcement, though, was vague about what exactly the challenge is and how exactly Google means to tackle it. Calico may, with the aid of Big Data, simply intensify present efforts to treat the usual chronic diseases that afflict the elderly, like cancer, heart disease and Alzheimer’s. But there is a more ambitious possibility: to “treat” the aging process itself, in an attempt to slow it.

Of course, the dream of beating back time is an old one. Shakespeare had King Lear lament the tortures of aging, while the myth of Ponce de Leon’s Fountain of Youth in Florida and the eternal life of the Struldbrugs in “Gulliver’s Travels” both fed the notion of overcoming aging.

For some scientists, recent anti-aging research — on gene therapy, body-part replacement by regeneration and nanotechnology for repairing aging cells — has breathed new life into this dream. Optimists about average life expectancy’s surpassing 100 years in the coming century, like James W. Vaupel, the founder and director of the Max Planck Institute for Demographic Research in Germany, cite promising animal studies in which the lives of mice have been extended through genetic manipulation and low-calorie diets. They also point to the many life-extending medical advances of the past century as precedents, with no end in sight, and note that average life expectancy in the United States has long been rising, from 47.3 in 1900 to 78.7 in 2010.

Others are less sanguine. S. Jay Olshansky, a research associate at the Center on Aging at the University of Chicago, has pointed out that sharp reductions in infant mortality explain most of that rise. Even if some people lived well into old age, the death of 50 percent or more of infants and children for most of history kept the average life expectancy down. As those deaths fell drastically over the past century, life expectancy increased, helped by improvements in nutrition, a decline in infectious disease and advances in medicine. But there is no reason to think another sharp drop of that sort is in the cards.

Even if anti-aging research could give us radically longer lives someday, though, should we even be seeking them? Regardless of what science makes possible, or what individual people want, aging is a public issue with social consequences, and these must be thought through.

Consider how dire the cost projections for Medicare already are. In 2010 more than 40 million Americans were over 65. In 2030 there will be slightly more than 72 million, and in 2050 more than 83 million. The Congressional Budget Office has projected a rise of Medicare expenditures to 5.8 percent of gross domestic product in 2038 from 3.5 percent today, a burden often declared unsustainable.

Modern medicine is very good at keeping elderly people with chronic diseases expensively alive. At 83, I’m a good example. I’m on oxygen at night for emphysema, and three years ago I needed a seven-hour emergency heart operation to save my life. Just 10 percent of the population — mainly the elderly — consumes about 80 percent of health care expenditures, primarily on expensive chronic illnesses and end-of-life costs. Historically, the longer lives that medical advances have given us have run exactly parallel to the increase in chronic illness and the explosion in costs. Can we possibly afford to live even longer — much less radically longer?

At the heart of the idiocy that Silicon Valley cultivates lies a host of profound misconceptions about the nature of time.

Having become slaves of technology, we take it as given that time is measured by clocks and calendars. Life extension is thus conceived in purely numerical terms. Yet the time that matters is not the time that can be measured by any device.

That’s why, in an age in which technology was supposed to reward us all with extra time, we instead experience time as perpetually compressed.

Having been provided with the means to do more and more things at the same time — text, tweet, talk, and so on — we find our attention sliced into narrower and narrower slivers, and the more our time gets filled, the more time-impoverished we become.

For the narcissist, there can be no greater fear than the prospect of the termination of individual existence, yet death is truly intrinsic to life. What it enables is not simply annihilation but, more importantly, renewal.

Techies vs. NSA: Encryption arms race escalates

The Associated Press reports: Encrypted email, secure instant messaging and other privacy services are booming in the wake of the National Security Agency’s recently revealed surveillance programs. But the flood of new computer security services is of variable quality, and much of it, experts say, can bog down computers and isn’t likely to keep out spies.

In the end, the new geek wars — between tech industry programmers on the one side and government spooks, fraudsters and hacktivists on the other — may leave people’s PCs and businesses’ computer systems encrypted to the teeth but no better protected from hordes of savvy code crackers.

“Every time a situation like this erupts you’re going to have a frenzy of snake oil sellers who are going to throw their products into the street,” says Carson Sweet, CEO of San Francisco-based data storage security firm CloudPassage. “It’s quite a quandary for the consumer.” [Continue reading…]

Cisco demonstrates how the NSA is seriously damaging the U.S. economy

Quartz reports: Cisco announced two important things in today’s earnings report: The first is that the company is aggressively moving into the Internet of Things — the effort to connect just about every object on earth to the internet — by rolling out new technologies. The second is that Cisco has seen a huge drop-off in demand for its hardware in emerging markets, which the company blames on fears about the NSA using American hardware to spy on the rest of the world.

Cisco chief executive John Chambers said on the company’s earnings call that he believes other American technology companies will be similarly affected. Cisco saw orders in Brazil drop 25% and Russia drop 30%. Both Brazil and Russia have expressed official outrage over NSA spying and have announced plans to curb the NSA’s reach.

Analysts had expected Cisco’s business in emerging markets to increase 6%, but instead it dropped 12%, sending shares of Cisco plunging 10% in after-hours trading. [Continue reading…]

If Cisco now feels that its operations have been undermined by the NSA, it has shown little reluctance in the past about making its technology available where it was likely to be used for surveillance.

In 2011, the Wall Street Journal reported: Western companies including Cisco Systems Inc. are poised to help build an ambitious new surveillance project in China—a citywide network of as many as 500,000 cameras that officials say will prevent crime but that human-rights advocates warn could target political dissent.

The system, being built in the city of Chongqing over the next two to three years, is among the largest and most sophisticated video-surveillance projects of its kind in China, and perhaps the world. Dubbed “Peaceful Chongqing,” it is planned to cover a half-million intersections, neighborhoods and parks over nearly 400 square miles, an area more than 25% larger than New York City.

The project sheds light on how Western tech companies sell their wares in China, the Middle East and other places where there is potential for the gear to be used for political purposes and not just safety. The products range from Internet-censoring software to sophisticated networking gear. China in particular has drawn criticism for treating political dissent as a crime and has a track record of using technology to suppress it.

An examination of the Peaceful Chongqing project by The Wall Street Journal shows Cisco is expected to supply networking equipment that is essential to operating large and complicated surveillance systems, according to people familiar with the deal.

Plato foresaw the danger of automation

Automation takes many forms, and as members of a culture that reveres technology we generally perceive automation in terms of its output: what it accomplishes, whether in manufacturing, financial transactions, flying aircraft, or elsewhere.

But automation doesn’t merely accomplish things for human beings; it simultaneously changes us by externalizing intelligence. The intelligence required by a person is transferred to a machine with its embedded commands, allowing the person to turn his intelligence elsewhere — or nowhere.

Automation is invariably sold on the twin claims that it offers greater efficiency, while freeing people from tedious tasks so that — at least in theory — they can give their attention to something more fulfilling.

There’s no disputing the efficiency argument — there could never have been such a thing as mass production without automation — but the promise of freedom has always been oversold. Automation has resulted in the creation of many of the most tedious, soul-destroying forms of labor in human history.

Automated systems are, however, never perfect, and when they break, they reveal the corrupting effect they have had on human intelligence — intelligence whose skilful application has atrophied through lack of use.

Nicholas Carr writes: On the evening of February 12, 2009, a Continental Connection commuter flight made its way through blustery weather between Newark, New Jersey, and Buffalo, New York. As is typical of commercial flights today, the pilots didn’t have all that much to do during the hour-long trip. The captain, Marvin Renslow, manned the controls briefly during takeoff, guiding the Bombardier Q400 turboprop into the air, then switched on the autopilot and let the software do the flying. He and his co-pilot, Rebecca Shaw, chatted — about their families, their careers, the personalities of air-traffic controllers — as the plane cruised uneventfully along its northwesterly route at 16,000 feet. The Q400 was well into its approach to the Buffalo airport, its landing gear down, its wing flaps out, when the pilot’s control yoke began to shudder noisily, a signal that the plane was losing lift and risked going into an aerodynamic stall. The autopilot disconnected, and the captain took over the controls. He reacted quickly, but he did precisely the wrong thing: he jerked back on the yoke, lifting the plane’s nose and reducing its airspeed, instead of pushing the yoke forward to gain velocity. Rather than preventing a stall, Renslow’s action caused one. The plane spun out of control, then plummeted. “We’re down,” the captain said, just before the Q400 slammed into a house in a Buffalo suburb.

The crash, which killed all 49 people on board as well as one person on the ground, should never have happened. A National Transportation Safety Board investigation concluded that the cause of the accident was pilot error. The captain’s response to the stall warning, the investigators reported, “should have been automatic, but his improper flight control inputs were inconsistent with his training” and instead revealed “startle and confusion.” An executive from the company that operated the flight, the regional carrier Colgan Air, admitted that the pilots seemed to lack “situational awareness” as the emergency unfolded.

The Buffalo crash was not an isolated incident. An eerily similar disaster, with far more casualties, occurred a few months later. On the night of May 31, an Air France Airbus A330 took off from Rio de Janeiro, bound for Paris. The jumbo jet ran into a storm over the Atlantic about three hours after takeoff. Its air-speed sensors, coated with ice, began giving faulty readings, causing the autopilot to disengage. Bewildered, the pilot flying the plane, Pierre-Cédric Bonin, yanked back on the stick. The plane rose and a stall warning sounded, but he continued to pull back heedlessly. As the plane climbed sharply, it lost velocity. The airspeed sensors began working again, providing the crew with accurate numbers. Yet Bonin continued to slow the plane. The jet stalled and began to fall. If he had simply let go of the control, the A330 would likely have righted itself. But he didn’t. The plane dropped 35,000 feet in three minutes before hitting the ocean. All 228 passengers and crew members died.

The first automatic pilot, dubbed a “metal airman” in a 1930 Popular Science article, consisted of two gyroscopes, one mounted horizontally, the other vertically, that were connected to a plane’s controls and powered by a wind-driven generator behind the propeller. The horizontal gyroscope kept the wings level, while the vertical one did the steering. Modern autopilot systems bear little resemblance to that rudimentary device. Controlled by onboard computers running immensely complex software, they gather information from electronic sensors and continuously adjust a plane’s attitude, speed, and bearings. Pilots today work inside what they call “glass cockpits.” The old analog dials and gauges are mostly gone. They’ve been replaced by banks of digital displays. Automation has become so sophisticated that on a typical passenger flight, a human pilot holds the controls for a grand total of just three minutes. What pilots spend a lot of time doing is monitoring screens and keying in data. They’ve become, it’s not much of an exaggeration to say, computer operators.

And that, many aviation and automation experts have concluded, is a problem. Overuse of automation erodes pilots’ expertise and dulls their reflexes, leading to what Jan Noyes, an ergonomics expert at Britain’s University of Bristol, terms “a de-skilling of the crew.” No one doubts that autopilot has contributed to improvements in flight safety over the years. It reduces pilot fatigue and provides advance warnings of problems, and it can keep a plane airborne should the crew become disabled. But the steady overall decline in plane crashes masks the recent arrival of “a spectacularly new type of accident,” says Raja Parasuraman, a psychology professor at George Mason University and a leading authority on automation. When an autopilot system fails, too many pilots, thrust abruptly into what has become a rare role, make mistakes. Rory Kay, a veteran United captain who has served as the top safety official of the Air Line Pilots Association, put the problem bluntly in a 2011 interview with the Associated Press: “We’re forgetting how to fly.” The Federal Aviation Administration has become so concerned that in January it issued a “safety alert” to airlines, urging them to get their pilots to do more manual flying. An overreliance on automation, the agency warned, could put planes and passengers at risk.

The experience of airlines should give us pause. It reveals that automation, for all its benefits, can take a toll on the performance and talents of those who rely on it. The implications go well beyond safety. Because automation alters how we act, how we learn, and what we know, it has an ethical dimension. The choices we make, or fail to make, about which tasks we hand off to machines shape our lives and the place we make for ourselves in the world. That has always been true, but in recent years, as the locus of labor-saving technology has shifted from machinery to software, automation has become ever more pervasive, even as its workings have become more hidden from us. Seeking convenience, speed, and efficiency, we rush to off-load work to computers without reflecting on what we might be sacrificing as a result. [Continue reading…]

Now if we think of automation as a form of forgetfulness, we will see that it extends much more deeply into civilization than just its modern manifestations through mechanization and digitization.

In the beginning was the Word and later came the Fall: the point at which language — the primary tool for shaping, expressing and sharing human intelligence — was cut adrift from the human mind and given autonomy in the form of writing.

Through the written word, thought can be immortalized and made universal. No other mechanism could have ever had such a dramatic effect on the exchange of ideas. Without writing, there would have been no such thing as humanity. But we also incurred a loss, and because we have so little awareness of this loss, we may find it hard to imagine that preliterate people possessed forms of intelligence we now lack.

Plato described what writing would do — and by extension, what would happen to pilots.

In Phaedrus, he describes an exchange between the god Thamus, king and ruler of all Egypt, and the god Theuth, who has invented writing. Theuth, who is very proud of what he has created, says: “This invention, O king, will make the Egyptians wiser and will improve their memories; for it is an elixir of memory and wisdom that I have discovered.” But Thamus points out that while one man has the ability to invent, the ability to judge an invention’s usefulness or harmfulness belongs to another.

If men learn this, it will implant forgetfulness in their souls; they will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks. What you have discovered is a recipe not for memory, but for reminder. And it is no true wisdom that you offer your disciples, but only its semblance, for by telling them of many things without teaching them you will make them seem to know much, while for the most part they know nothing, and as men filled, not with wisdom, but with the conceit of wisdom, they will be a burden to their fellows.

Bedazzled by our ingenuity and its creations, we are fast forgetting the value of this quality that can never be implanted in a machine (or a text): wisdom.

Even the word itself is beginning to sound arcane — as though it should be reserved for philosophers and storytellers and is no longer something we should all strive to possess.

Smartphones are killing society

Henry Grabar writes: The host collects phones at the door of the dinner party. At a law firm, partners maintain a no-device policy at meetings. Each day, a fleet of vans assembles outside New York’s high schools, offering, for a small price, to store students’ contraband during the day. In situations where politeness and concentration are expected, backlash is mounting against our smartphones.

In public, of course, it’s a free country. It’s hard to think of a place beyond the sublime darkness of the movie theater where phone use is shunned, let alone regulated. (Even the cinematic exception is up for debate.) At restaurants, phones occupy that choice tablecloth real estate once reserved for a pack of cigarettes. In truly public space — on sidewalks, in parks, on buses and on trains — we move face down, our phones cradled like amulets.

No observer can fail to notice how deeply this development has changed urban life. A deft user can digitally enhance her experience of the city. She can study a map; discover an out-of-the-way restaurant; identify the trees that line the block and the architect who designed the building at the corner. She can photograph that building, share it with friends, and in doing so contribute her observations to a digital community. On her way to the bus (knowing just when it will arrive) she can report the existence of a pothole and check a local news blog.

It would be unfair to say this person isn’t engaged in the city; on the contrary, she may be more finely attuned to neighborhood history and happenings than her companions. But her awareness is secondhand: She misses the quirks and cues of the sidewalk ballet, fails to make eye contact, and limits her perception to a claustrophobic one-fifth of normal. Engrossed in the virtual, she really isn’t here with the rest of us.

Consider the case of a recent murder on a San Francisco train. On Sept. 23, in a crowded car, a man pulls a pistol from his jacket. In Vivian Ho’s words: “He raises the gun, pointing it across the aisle, before tucking it back against his side. He draws it out several more times, once using the hand holding the gun to wipe his nose. Dozens of passengers stand and sit just feet away — but none reacts. Their eyes, focused on smartphones and tablets, don’t lift until the gunman fires a bullet into the back of a San Francisco State student getting off the train.” [Continue reading…]

How science is calling for a global revolution

Naomi Klein writes: In December 2012, a pink-haired complex systems researcher named Brad Werner made his way through the throng of 24,000 earth and space scientists at the Fall Meeting of the American Geophysical Union, held annually in San Francisco. This year’s conference had some big-name participants, from Ed Stone of Nasa’s Voyager project, explaining a new milestone on the path to interstellar space, to the film-maker James Cameron, discussing his adventures in deep-sea submersibles.

But it was Werner’s own session that was attracting much of the buzz. It was titled “Is Earth F**ked?” (full title: “Is Earth F**ked? Dynamical Futility of Global Environmental Management and Possibilities for Sustainability via Direct Action Activism”).

Standing at the front of the conference room, the geophysicist from the University of California, San Diego walked the crowd through the advanced computer model he was using to answer that question. He talked about system boundaries, perturbations, dissipation, attractors, bifurcations and a whole bunch of other stuff largely incomprehensible to those of us uninitiated in complex systems theory. But the bottom line was clear enough: global capitalism has made the depletion of resources so rapid, convenient and barrier-free that “earth-human systems” are becoming dangerously unstable in response. When pressed by a journalist for a clear answer on the “are we f**ked” question, Werner set the jargon aside and replied, “More or less.”

There was one dynamic in the model, however, that offered some hope. Werner termed it “resistance” – movements of “people or groups of people” who “adopt a certain set of dynamics that does not fit within the capitalist culture”. According to the abstract for his presentation, this includes “environmental direct action, resistance taken from outside the dominant culture, as in protests, blockades and sabotage by indigenous peoples, workers, anarchists and other activist groups”. [Continue reading…]

The inexact mirrors of the human mind

Douglas Hofstadter, author of Gödel, Escher, Bach: An Eternal Golden Braid (GEB), published in 1979, and one of the pioneers of artificial intelligence (AI), gained prominence right at a juncture when the field was abandoning its interest in human intelligence.

James Somers writes: In GEB, Hofstadter was calling for an approach to AI concerned less with solving human problems intelligently than with understanding human intelligence — at precisely the moment that such an approach, having borne so little fruit, was being abandoned. His star faded quickly. He would increasingly find himself out of a mainstream that had embraced a new imperative: to make machines perform in any way possible, with little regard for psychological plausibility.

Take Deep Blue, the IBM supercomputer that bested the chess grandmaster Garry Kasparov. Deep Blue won by brute force. For each legal move it could make at a given point in the game, it would consider its opponent’s responses, its own responses to those responses, and so on for six or more steps down the line. With a fast evaluation function, it would calculate a score for each possible position, and then make the move that led to the best score. What allowed Deep Blue to beat the world’s best humans was raw computational power. It could evaluate up to 330 million positions a second, while Kasparov could evaluate only a few dozen before having to make a decision.
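
The brute-force approach Somers describes is, at its core, depth-limited game-tree search driven by an evaluation function. Deep Blue’s real engine used alpha-beta pruning, specialized hardware and many chess-specific refinements, so treat the following only as a minimal, hypothetical sketch of the underlying idea; the `game` and `evaluate` objects in the usage note are invented placeholders.

```python
def minimax(state, depth, maximizing, game, evaluate):
    """Depth-limited game-tree search: consider my moves, the opponent's
    responses, my responses to those, and so on, then score the leaves
    with an evaluation function and back the scores up the tree."""
    if depth == 0 or game.is_terminal(state):
        return evaluate(state), None

    best_move = None
    if maximizing:
        best_score = float("-inf")
        for move in game.legal_moves(state):
            score, _ = minimax(game.apply(state, move), depth - 1,
                               False, game, evaluate)
            if score > best_score:
                best_score, best_move = score, move
    else:
        best_score = float("inf")
        for move in game.legal_moves(state):
            score, _ = minimax(game.apply(state, move), depth - 1,
                               True, game, evaluate)
            if score < best_score:
                best_score, best_move = score, move
    return best_score, best_move

# Hypothetical usage: `chess_rules` would supply legal_moves/apply/is_terminal,
# `material_balance` would score a position, and depth=6 mirrors the "six or
# more steps down the line" mentioned above.
# score, move = minimax(start_position, depth=6, maximizing=True,
#                       game=chess_rules, evaluate=material_balance)
```

What made Deep Blue formidable was not a cleverer version of this loop but the raw speed at which it could run it, which is exactly the point Hofstadter takes issue with below.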

Hofstadter wanted to ask: Why conquer a task if there’s no insight to be had from the victory? “Okay,” he says, “Deep Blue plays very good chess—so what? Does that tell you something about how we play chess? No. Does it tell you about how Kasparov envisions, understands a chessboard?” A brand of AI that didn’t try to answer such questions—however impressive it might have been—was, in Hofstadter’s mind, a diversion. He distanced himself from the field almost as soon as he became a part of it. “To me, as a fledgling AI person,” he says, “it was self-evident that I did not want to get involved in that trickery. It was obvious: I don’t want to be involved in passing off some fancy program’s behavior for intelligence when I know that it has nothing to do with intelligence. And I don’t know why more people aren’t that way.”

One answer is that the AI enterprise went from being worth a few million dollars in the early 1980s to billions by the end of the decade. (After Deep Blue won in 1997, the value of IBM’s stock increased by $18 billion.) The more staid an engineering discipline AI became, the more it accomplished. Today, on the strength of techniques bearing little relation to the stuff of thought, it seems to be in a kind of golden age. AI pervades heavy industry, transportation, and finance. It powers many of Google’s core functions, Netflix’s movie recommendations, Watson, Siri, autonomous drones, the self-driving car.

“The quest for ‘artificial flight’ succeeded when the Wright brothers and others stopped imitating birds and started … learning about aerodynamics,” Stuart Russell and Peter Norvig write in their leading textbook, Artificial Intelligence: A Modern Approach. AI started working when it ditched humans as a model, because it ditched them. That’s the thrust of the analogy: Airplanes don’t flap their wings; why should computers think?

It’s a compelling point. But it loses some bite when you consider what we want: a Google that knows, in the way a human would know, what you really mean when you search for something. Russell, a computer-science professor at Berkeley, said to me, “What’s the combined market cap of all of the search companies on the Web? It’s probably four hundred, five hundred billion dollars. Engines that could actually extract all that information and understand it would be worth 10 times as much.”

This, then, is the trillion-dollar question: Will the approach undergirding AI today — an approach that borrows little from the mind, that’s grounded instead in big data and big engineering — get us to where we want to go? How do you make a search engine that understands if you don’t know how you understand? Perhaps, as Russell and Norvig politely acknowledge in the last chapter of their textbook, in taking its practical turn, AI has become too much like the man who tries to get to the moon by climbing a tree: “One can report steady progress, all the way to the top of the tree.”

Consider that computers today still have trouble recognizing a handwritten A. In fact, the task is so difficult that it forms the basis for CAPTCHAs (“Completely Automated Public Turing tests to tell Computers and Humans Apart”), those widgets that require you to read distorted text and type the characters into a box before, say, letting you sign up for a Web site.

In Hofstadter’s mind, there is nothing to be surprised about. To know what all A’s have in common would be, he argued in a 1982 essay, to “understand the fluid nature of mental categories.” And that, he says, is the core of human intelligence. [Continue reading…]

The NSA has harmed global cybersecurity. It needs to reveal what it’s done

Computer scientists Nadia Heninger and J. Alex Halderman write: Of all of the revelations about the NSA that have come to light in recent months, two stand out as the most worrisome and surprising to cybersecurity experts. The first is that the NSA has worked to weaken the international cryptographic standards that define how computers secure communications and data. The second is that the NSA has deliberately introduced backdoors into security-critical software and hardware. If the NSA has indeed engaged in such activities, it has risked the computer security of the United States (and the world) as much as any malicious attacks have to date.

No one is surprised that the NSA breaks codes; the agency is famous for its cryptanalytic prowess. And, in general, the race between designers who try to build strong codes and cryptanalysts who try to break them ultimately benefits security. But surreptitiously implanting deliberate weaknesses or actively encouraging the public to use codes that have secretly been broken — especially under the aegis of government authority — is a dirty trick. It diminishes computer security for everyone and harms the United States’ national cyberdefense interests in a number of ways.

Few people realize the extent to which the cryptography that underpins Internet security relies on trust. One of the dirty secrets of the crypto world is that nobody knows how to prove mathematically that core crypto algorithms — the foundations of online financial transactions and encrypted laptops — are secure. Instead, we trust that they are secure because they were created by some of the world’s most experienced cryptographers and because other specialists tried diligently to break them and failed.

Since the 1970s, the U.S. National Institute of Standards and Technology (NIST) has played a central role in coordinating this trust, and in deciding which algorithms are worthwhile, by setting the cryptographic standards used by governments and industries the world over. NIST has done an admirable job of organizing the efforts of cryptographic experts to design and evaluate ciphers. It has also been able to harness the clout of the U.S. government to get those designs — including such state-of-the-art technology as the AES cipher, the SHA-2 hash functions, and public-key cryptography based on elliptic curves — adopted by industry. In turn, American industry believed that it could trust that these technologies had been designed by a competent organization with its interests at heart.
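
Part of the reason that trust matters is that application developers consume these standards as black boxes. A minimal sketch, assuming Python’s standard hashlib module and the widely used third-party cryptography package, shows the three NIST-backed building blocks named above being invoked by ordinary code that has no way to inspect their design:

```python
import os
import hashlib
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

message = b"wire transfer: 1,000 to account 42"

# SHA-2: an integrity check, trusted because it went through the standards process.
digest = hashlib.sha256(message).hexdigest()

# AES in GCM mode: confidentiality for data in transit or at rest.
key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)                      # 96-bit nonce; never reuse with a key
ciphertext = AESGCM(key).encrypt(nonce, message, None)
assert AESGCM(key).decrypt(nonce, ciphertext, None) == message

# Elliptic-curve public-key cryptography: a digital signature over the message.
private_key = ec.generate_private_key(ec.SECP256R1())   # NIST P-256 curve
signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))
private_key.public_key().verify(signature, message, ec.ECDSA(hashes.SHA256()))

print("sha256:", digest[:16], "... ciphertext bytes:", len(ciphertext))
```

Nothing in that code can tell the caller whether the underlying curve, cipher or hash was designed honestly; that assurance rests entirely on the standards process the authors describe, which is precisely why subverting it is so corrosive.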

There is now credible evidence that the NSA has pushed NIST, in at least one case, to canonize an inferior algorithm designed with a backdoor for NSA use. Dozens of companies implemented the standardized algorithm in their software, which means that the NSA could potentially get around security software on millions of computers worldwide. Many in the crypto community now fear that other NIST algorithms may have been subverted as well. Since no one knows which ones, though, some renowned cryptographers are questioning the trustworthiness of all NIST standards. [Continue reading…]

How we have become the tools of our devices

The New York Times reports: Once, only hairdressers and bartenders knew people’s secrets.

Now, smartphones know everything — where people go, what they search for, what they buy, what they do for fun and when they go to bed. That is why advertisers, and tech companies like Google and Facebook, are finding new, sophisticated ways to track people on their phones and reach them with individualized, hypertargeted ads. And they are doing it without cookies, those tiny bits of code that follow users around the Internet, because cookies don’t work on mobile devices.

Privacy advocates fear that consumers do not realize just how much of their private information is on their phones and how much is made vulnerable simply by downloading and using apps, searching the mobile Web or even just going about daily life with a phone in your pocket. And this new focus on tracking users through their devices and online habits comes against the backdrop of a spirited public debate on privacy and government surveillance.

On Wednesday, the National Security Agency confirmed it had collected data from cellphone towers in 2010 and 2011 to locate Americans’ cellphones, though it said it never used the information.

“People don’t understand tracking, whether it’s on the browser or mobile device, and don’t have any visibility into the practices going on,” said Jennifer King, who studies privacy at the University of California, Berkeley and has advised the Federal Trade Commission on mobile tracking. “Even as a tech professional, it’s often hard to disentangle what’s happening.”

Drawbridge is one of several start-ups that have figured out how to follow people without cookies, and to determine that a cellphone, work computer, home computer and tablet belong to the same person, even if the devices are in no way connected. Before, logging onto a new device presented advertisers with a clean slate.

“We’re observing your behaviors and connecting your profile to mobile devices,” said Eric Rosenblum, chief operating officer at Drawbridge. But don’t call it tracking. “Tracking is a dirty word,” he said. [Continue reading…]
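
The Times piece does not explain how Drawbridge actually links devices, so the following is purely a hypothetical illustration of the general idea of probabilistic cross-device matching: score pairs of devices by how much their observed networks and active hours overlap, and treat high-scoring pairs as likely belonging to the same person. Every name, weight and threshold here is invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class DeviceProfile:
    """Invented example structure: signals observed passively per device."""
    device_id: str
    ip_addresses: set = field(default_factory=set)   # networks the device was seen on
    active_hours: set = field(default_factory=set)   # hours of day with activity

def jaccard(a, b):
    # Overlap between two sets, from 0.0 (disjoint) to 1.0 (identical).
    return len(a & b) / len(a | b) if (a | b) else 0.0

def same_person_score(d1, d2, ip_weight=0.7, hours_weight=0.3):
    # Weighted overlap of where and when the devices are used; weights are made up.
    return (ip_weight * jaccard(d1.ip_addresses, d2.ip_addresses) +
            hours_weight * jaccard(d1.active_hours, d2.active_hours))

phone = DeviceProfile("phone-123", {"203.0.113.7", "198.51.100.4"}, {8, 9, 12, 22, 23})
laptop = DeviceProfile("laptop-456", {"203.0.113.7"}, {9, 12, 13, 22})
stranger = DeviceProfile("tablet-789", {"192.0.2.55"}, {2, 3, 4})

for other in (laptop, stranger):
    score = same_person_score(phone, other)
    verdict = "probably the same person" if score > 0.3 else "probably not"
    print(phone.device_id, "vs", other.device_id, round(score, 2), verdict)
```

Real systems presumably draw on far richer signals and machine-learned models, but the sketch conveys why no cookie is needed: the devices betray their common owner simply by being used in the same places at the same times.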
