Forget ideology, liberal democracy’s newest threats come from technology and bioscience

John Naughton writes: The BBC Reith Lectures in 1967 were given by Edmund Leach, a Cambridge social anthropologist. “Men have become like gods,” Leach began. “Isn’t it about time that we understood our divinity? Science offers us total mastery over our environment and over our destiny, yet instead of rejoicing we feel deeply afraid.”

That was nearly half a century ago, and yet Leach’s opening lines could easily apply to today. He was speaking before the internet had been built and long before the human genome had been decoded, and so his claim about men becoming “like gods” seems relatively modest compared with the capabilities that molecular biology and computing have subsequently bestowed upon us. Our science-based culture is the most powerful in history, and it is ceaselessly researching, exploring, developing and growing. But in recent times it seems to have also become plagued with existential angst as the implications of human ingenuity begin to be (dimly) glimpsed.

The title that Leach chose for his Reith Lecture – A Runaway World – captures our zeitgeist too. At any rate, we are also increasingly fretful about a world that seems to be running out of control, largely (but not solely) because of information technology and what the life sciences are making possible. But we seek consolation in the thought that “it was always thus”: people felt alarmed about steam in George Eliot’s time and got worked up about electricity, the telegraph and the telephone as they arrived on the scene. The reassuring implication is that we weathered those technological storms, and so we will weather this one too. Humankind will muddle through.

But in the last five years or so even that cautious, pragmatic optimism has begun to erode. There are several reasons for this loss of confidence. One is the sheer vertiginous pace of technological change. Another is that the new forces loose in our society – particularly information technology and the life sciences – are potentially more far-reaching in their implications than steam or electricity ever were. And, thirdly, we have begun to see startling advances in these fields that have forced us to recalibrate our expectations. [Continue reading…]


China launches quantum satellite for ‘hack-proof’ communications

The Guardian reports: China says it has launched the world’s first quantum satellite, a project Beijing hopes will enable it to build a coveted “hack-proof” communications system with potentially significant military and commercial applications.

Xinhua, Beijing’s official news service, said Micius, a 600kg satellite that is nicknamed after an ancient Chinese philosopher, “roared into the dark sky” over the Gobi desert at 1.40am local time on Tuesday, carried by a Long March-2D rocket.

“The satellite’s two-year mission will be to develop ‘hack-proof’ quantum communications, allowing users to send messages securely and at speeds faster than light,” Xinhua reported.

The Quantum Experiments at Space Scale, or Quess, satellite programme is part of an ambitious space programme that has accelerated since Xi Jinping became Communist party chief in late 2012.

“There’s been a race to produce a quantum satellite, and it is very likely that China is going to win that race,” Nicolas Gisin, a professor and quantum physicist at the University of Geneva, told the Wall Street Journal. “It shows again China’s ability to commit to large and ambitious projects and to realise them.”

The satellite will be tasked with sending secure messages between Beijing and Urumqi, the capital of Xinjiang, a sprawling region of deserts and snow-capped mountains in China’s extreme west.

Highly complex attempts to build such a “hack-proof” communications network are based on the scientific principle of entanglement. [Continue reading…]
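
The excerpt describes an entanglement-based approach; the underlying promise of quantum key distribution is easier to see in the older BB84 scheme, where the two parties keep only the bits measured in matching bases and use the error rate to detect an eavesdropper. The toy simulation below is purely classical and illustrative; it is not a model of Micius's actual protocol, and every name in it is made up for the sketch.

```python
# A purely classical toy simulation of the BB84 quantum key distribution idea:
# keep only the bits where sender and receiver happened to choose the same
# measurement basis, and watch the error rate for signs of an eavesdropper
# (intercept-and-resend pushes it to roughly 25%). Illustrative only.
import random

def bb84_round(n_bits=2000, eavesdrop=False):
    # Alice picks random bits and random bases (0 = rectilinear, 1 = diagonal).
    alice_bits = [random.randint(0, 1) for _ in range(n_bits)]
    alice_bases = [random.randint(0, 1) for _ in range(n_bits)]

    # An optional eavesdropper measures in random bases, disturbing the states.
    channel = []
    for bit, basis in zip(alice_bits, alice_bases):
        if eavesdrop:
            eve_basis = random.randint(0, 1)
            if eve_basis != basis:          # wrong basis: Eve's result is random
                bit = random.randint(0, 1)
            basis = eve_basis               # the state is re-sent in Eve's basis
        channel.append((bit, basis))

    # Bob measures each incoming state in his own random basis.
    bob_bases = [random.randint(0, 1) for _ in range(n_bits)]
    bob_bits = [bit if bob_basis == basis else random.randint(0, 1)
                for (bit, basis), bob_basis in zip(channel, bob_bases)]

    # Sifting: keep only positions where Alice's and Bob's bases matched.
    kept = [i for i in range(n_bits) if alice_bases[i] == bob_bases[i]]
    errors = sum(alice_bits[i] != bob_bits[i] for i in kept) / len(kept)
    return len(kept), round(errors, 3)

print("quiet channel:    ", bb84_round(eavesdrop=False))   # error rate ~0
print("with eavesdropper:", bb84_round(eavesdrop=True))    # error rate ~0.25
```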


How a solo voyage around the world led to a vision for a sustainable global economy


The Ellen MacArthur Foundation works with business, government and academia to build a framework for an economy that is restorative and regenerative by design — a circular economy.


A society staring at machines

Jacob Weisberg writes: “As smoking gives us something to do with our hands when we aren’t using them, Time gives us something to do with our minds when we aren’t thinking,” Dwight Macdonald wrote in 1957. With smartphones, the issue never arises. Hands and mind are continuously occupied texting, e-mailing, liking, tweeting, watching YouTube videos, and playing Candy Crush.

Americans spend an average of five and a half hours a day with digital media, more than half of that time on mobile devices, according to the research firm eMarketer. Among some groups, the numbers range much higher. In one recent survey, female students at Baylor University reported using their cell phones an average of ten hours a day. Three quarters of eighteen-to-twenty-four-year-olds say that they reach for their phones immediately upon waking up in the morning. Once out of bed, we check our phones 221 times a day — an average of every 4.3 minutes — according to a UK study. This number actually may be too low, since people tend to underestimate their own mobile usage. In a 2015 Gallup survey, 61 percent of people said they checked their phones less frequently than others they knew.
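
The "every 4.3 minutes" figure follows from 221 daily checks only under an assumption about waking hours that the excerpt does not state; the quick check below assumes roughly 16 of them.

```python
# Back-of-the-envelope check of the "every 4.3 minutes" figure. The 16 waking
# hours per day is an assumption; the UK study's exact window isn't given above.
checks_per_day = 221
waking_minutes = 16 * 60                            # ~960 minutes awake

print(round(waking_minutes / checks_per_day, 1))    # -> about 4.3 minutes per check
```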

Our transformation into device people has happened with unprecedented suddenness. The first touchscreen-operated iPhones went on sale in June 2007, followed by the first Android-powered phones the following year. Smartphones went from 10 percent to 40 percent market penetration faster than any other consumer technology in history. In the United States, adoption hit 50 percent only three years ago. Yet today, not carrying a smartphone indicates eccentricity, social marginalization, or old age. [Continue reading…]

It perhaps also indicates being at less risk of stumbling off a cliff.


Learning from nature: Record-efficiency turbine farms are being inspired by sealife

Alex Riley writes: As they drove on featureless dirt roads on the first Tuesday of 2010, John Dabiri, professor of aeronautics and bioengineering at the California Institute of Technology, and his then-student Robert Whittlesey, were inspecting a remote area of land that they hoped to purchase to test new concepts in wind power. They named their site FLOWE for Field Laboratory for Optimized Wind Energy. Situated between gentle knolls covered in sere vegetation, the four-acre parcel in Antelope Valley, California, was once destined to become a mall, but those plans fell through. The land was cheap. And, more importantly, it was windy.

Estimated at 250 trillion Watts, the amount of wind on Earth has the potential to provide more than 20 times our current global energy consumption. Yet, only four countries — Spain, Portugal, Ireland, and Denmark — generate more than 10 percent of their electricity this way. The United States, one of the largest, wealthiest, and windiest of countries, comes in at about 4 percent. There are reasons for that. Wind farm expansion brings with it huge engineering costs, unsightly countryside, loud noises, disruption to military radar, and death of wildlife. Recent estimates blamed turbines for killing 600,000 bats and up to 440,000 birds a year. On June 19, 2014, the American Bird Conservancy filed a lawsuit against the federal government asking it to curtail the impact of wind farms on the dwindling eagle populations. And while standalone horizontal-axis turbines harvest wind energy well, in a group they’re highly profligate. As their propeller-like blades spin, the turbines facing into the wind disrupt free-flowing air, creating a wake of slow-moving, infertile air behind them. [Continue reading…]


The growing risk of a war in space

Geoff Manaugh writes: In Ghost Fleet, a 2015 novel by security theorists Peter Singer and August Cole, the next world war begins in space.

Aboard an apparently civilian space station called the Tiangong, or “Heavenly Palace,” Chinese astronauts—taikonauts—maneuver a chemical oxygen iodine laser (COIL) into place. They aim their clandestine electromagnetic weapon at its first target, a U.S. Air Force communications satellite that helps to coordinate forces in the Pacific theater far below. The laser “fired a burst of energy that, if it were visible light instead of infrared, would have been a hundred thousand times brighter than the sun.” The beam melts through the external hull of the U.S. satellite and shuts down its sensitive inner circuitry.

From there, the taikonauts work their way through a long checklist of strategic U.S. space assets, disabling the nation’s military capabilities from above. It is a Pearl Harbor above the atmosphere, an invisible first strike.

“The emptiness of outer space might be the last place you’d expect militaries to vie over contested territory,” Lee Billings has written, “except that outer space isn’t so empty anymore.” It is not only science fiction, in other words, to suggest that the future of war could be offworld. The high ground of the global battlefield is no longer defined merely by a topographical advantage, but by strategic orbitals and potential weapons stationed in the skies above distant continents.

When China shot down one of its own weather satellites in January 2007, the event was, among other things, a clear demonstration to the United States that China could wage war beyond the Earth’s atmosphere. In the decade since, both China and the United States have continued to pursue space-based armaments and defensive systems. A November 2015 “Report to Congress,” for example, filed by the U.S.-China Economic and Security Review Commission (PDF), specifically singles out China’s “Counterspace Program” as a subject of needed study. China’s astral arsenal, the report explains, most likely includes “direct-ascent” missiles, directed-energy weapons, and also what are known as “co-orbital antisatellite systems.” [Continue reading…]


Artificial intelligence: ‘We’re like children playing with a bomb’

The Observer reports: You’ll find the Future of Humanity Institute down a medieval backstreet in the centre of Oxford. It is beside St Ebbe’s church, which has stood on this site since 1005, and above a Pure Gym, which opened in April. The institute, a research faculty of Oxford University, was established a decade ago to ask the very biggest questions on our behalf. Notably: what exactly are the “existential risks” that threaten the future of our species; how do we measure them; and what can we do to prevent them? Or to put it another way: in a world of multiple fears, what precisely should we be most terrified of?

When I arrive to meet the director of the institute, Professor Nick Bostrom, a bed is being delivered to the second-floor office. Existential risk is a round-the-clock kind of operation; it sleeps fitfully, if at all.

Bostrom, a 43-year-old Swedish-born philosopher, has lately acquired something of the status of prophet of doom among those currently doing most to shape our civilisation: the tech billionaires of Silicon Valley. His reputation rests primarily on his book Superintelligence: Paths, Dangers, Strategies, which was a surprise New York Times bestseller last year and now arrives in paperback, trailing must-read recommendations from Bill Gates and Tesla’s Elon Musk. (In the best kind of literary review, Musk also gave Bostrom’s institute £1m to continue to pursue its inquiries.)

The book is a lively, speculative examination of the singular threat that Bostrom believes – after years of calculation and argument – to be the one most likely to wipe us out. This threat is not climate change, nor pandemic, nor nuclear winter; it is the possibly imminent creation of a general machine intelligence greater than our own. [Continue reading…]


‘Gene drives’ that tinker with evolution are an unknown risk, researchers say

MIT Technology Review reports: With great power — in this case, a technology that can alter the rules of evolution — comes great responsibility. And since there are “considerable gaps in knowledge” about the possible consequences of releasing this technology, called a gene drive, into natural environments, it is not yet responsible to do so. That’s the major conclusion of a report published today by the National Academies of Sciences, Engineering, and Medicine.

Gene drives hold immense promise for controlling or eradicating vector-borne diseases like Zika virus and malaria, or in managing agricultural pests or invasive species. But the 200-page report, written by a committee of 16 experts, highlights how ill-equipped we are to assess the environmental and ecological risks of using gene drives. And it provides a glimpse at the challenges they will create for policymakers.

The technology is inspired by natural phenomena through which particular “selfish” genes are passed to offspring at a higher rate than is normally allowed by nature in sexually reproducing organisms. There are multiple ways to make gene drives in the lab, but scientists are now using the gene-editing tool known as CRISPR to very rapidly and effectively do the trick. Evidence in mosquitoes, fruit flies, and yeast suggests that this could be used to spread a gene through nearly 100 percent of a population.

The possible ecological effects, intended or not, are far from clear, though. How long will gene drives persist in the environment? What is the chance that an engineered organism could pass the gene drive to an unintended recipient? How might these things affect the whole ecosystem? How much does all this vary depending on the particular organism and ecosystem?

Research on the molecular biology of gene drives has outpaced ecological research on how genes move through populations and between species, the report says, making it impossible to adequately answer these and other thorny questions. Substantially more laboratory research and confined field testing is needed to better grasp the risks. [Continue reading…]

Jim Thomas writes: If there is a prize for the fastest-emerging tech controversy of the century, the ‘gene drive’ may have just won it. In under eighteen months the sci-fi concept of a ‘mutagenic chain reaction’ that can drive a genetic trait through an entire species (and maybe eradicate that species too) has gone from theory to published proof of principle to massively shared TED talk (apparently an important step these days) to the subject of a high-profile US National Academy of Sciences study – complete with committees, hearings, public inputs and a glossy 216-page report release. Previous technology controversies have taken anywhere from a decade to over a century to reach that level of policy attention. So why were gene drives put on the turbo track to science academy report status? One word: leverage.

What a gene drive does is simple: it ensures that a chosen genetic trait will reliably be passed on to the next generation and every generation thereafter. This overcomes normal Mendelian genetics where a trait may be diluted or lost through the generations. The effect is that the engineered trait is driven through an entire population, re-engineering not just single organisms but enforcing the change in every descendant – re-shaping entire species and ecosystems at will.

It’s a perfect case of a very high-leverage technology. Archimedes famously said “Give me a lever long enough and a fulcrum on which to place it, and I shall move the world.” Gene drive developers are in effect saying “Give me a gene drive and an organism to put it in and I can wipe out species, alter ecosystems and cause large-scale modifications.” Gene drive pioneer Kevin Esvelt calls gene drives “an experiment where if you screw up, it affects the whole world”. [Continue reading…]
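
The arithmetic of that leverage is easy to see in a toy simulation: under ordinary Mendelian inheritance a rare allele drifts aimlessly, while a drive that converts the wild-type copy in heterozygotes sweeps to near fixation within a dozen generations. The population size, time horizon and 95 percent conversion efficiency below are illustrative assumptions, not figures from the NAS report or any published drive.

```python
# Toy Wright-Fisher-style simulation contrasting Mendelian inheritance with a
# CRISPR-style gene drive. Population size, generations, and the 95% drive
# "conversion" efficiency are illustrative assumptions only.
import random

def spread(pop_size=1000, generations=12, start_freq=0.01, drive_efficiency=0.0):
    """Return the engineered allele's frequency in each generation.

    drive_efficiency = 0.0  -> ordinary Mendelian allele (50% transmission
                               from a heterozygous parent)
    drive_efficiency = 0.95 -> heterozygotes convert the wild-type copy 95%
                               of the time, so transmission approaches 100%.
    """
    freq = start_freq
    history = [round(freq, 3)]
    for _ in range(generations):
        # Probability that a heterozygote transmits the engineered allele.
        het_transmission = 0.5 + 0.5 * drive_efficiency
        transmitted = 0
        for _ in range(pop_size):
            # Each offspring draws one allele from each of two random parents.
            for _ in range(2):
                p = random.random()
                # Parent genotype frequencies under random mating (Hardy-Weinberg).
                if p < freq ** 2:                                  # drive homozygote
                    transmitted += 1
                elif p < freq ** 2 + 2 * freq * (1 - freq):        # heterozygote
                    transmitted += random.random() < het_transmission
                # else: wild-type parent transmits nothing engineered
        freq = transmitted / (2 * pop_size)
        history.append(round(freq, 3))
    return history

print("Mendelian: ", spread(drive_efficiency=0.0))    # hovers near the start frequency
print("Gene drive:", spread(drive_efficiency=0.95))   # climbs toward 1.0
```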


Has the quantum era begun?

IDG News Service reports: Quantum computing’s full potential may still be years away, but there are plenty of benefits to be realized right now.

So argues Vern Brownell, president and CEO of D-Wave Systems, whose namesake quantum system is already in its second generation.

Launched 17 years ago by a team with roots at Canada’s University of British Columbia, D-Wave introduced what it called “the world’s first commercially available quantum computer” back in 2010. Since then the company has doubled the number of qubits, or quantum bits, in its machines roughly every year. Today, its D-Wave 2X system boasts more than 1,000.

The company doesn’t disclose its full customer list, but Google, NASA and Lockheed Martin are all on it, D-Wave says. In a recent experiment, Google reported that D-Wave’s technology outperformed a conventional machine by a factor of 100 million. [Continue reading…]


The range of the mind’s eye is restricted by the skill of the hand


Jonathan Waldman writes: Sometime in 1882, a skinny, dark-haired, 11-year-old boy named Harry Brearley entered a steelworks for the first time. A shy kid — he was scared of the dark, and a picky eater — he was also curious, and the industrial revolution in Sheffield, England, offered much in the way of amusements. He enjoyed wandering around town — he later called himself a Sheffield Street Arab — watching road builders, bricklayers, painters, coal deliverers, butchers, and grinders. He was drawn especially to workshops; if he couldn’t see in a shop window, he would knock on the door and offer to run an errand for the privilege of watching whatever work was going on inside. Factories were even more appealing, and he had learned to gain access by delivering, or pretending to deliver, lunch or dinner to an employee. Once inside, he must have reveled, for not until the day’s end did he emerge, all grimy and gray but for his blue eyes. Inside the steelworks, the action compelled him so much that he spent hours sitting inconspicuously on great piles of coal, breathing through his mouth, watching brawny men shoveling fuel into furnaces, hammering white-hot ingots of iron.

There was one operation in particular that young Harry liked: a toughness test performed by the blacksmith. After melting and pouring a molten mixture from a crucible, the blacksmith would cast a bar or two of that alloy, and after it cooled, he would cut notches in the ends of those bars. Then he’d put the bars in a vise, and hammer away at them.

The effort required to break the metal bars, as interpreted through the blacksmith’s muscles, could vary by an order of magnitude, but the result of the test was expressed qualitatively. The metal was pronounced on the spot either rotten or darned good stuff. The latter was simply called D.G.S. The aim of the men at that steelworks, and every other, was to produce D.G.S., and Harry took that to heart.

In this way, young Harry became familiar with steelmaking long before he formally taught himself as much as there was to know about the practice. It was the beginning of a life devoted to steel, without the distractions of hobbies, vacations, or church. It was the origin of a career in which Brearley wrote eight books on metals, five of which contain the word steel in the title; in which he could argue about steelmaking — but not politics — all night; and in which the love and devotion he bestowed upon inanimate metals exceeded that which he bestowed upon his parents or wife or son. Steel was Harry’s true love. It would lead, eventually, to the discovery of stainless steel.

Harry Brearley was born on Feb. 18, 1871, and grew up poor, in a small, cramped house on Marcus Street, in Ramsden’s Yard, on a hill in Sheffield. The city was the world capital of steelmaking; by 1850 Sheffield steelmakers produced half of all the steel in Europe, and 90 percent of the steel in England. By 1860, no fewer than 178 edge tool and saw makers were registered in Sheffield. In the first half of the 19th century, as Sheffield rose to prominence, the population of the city grew fivefold, and its filth grew proportionally. A saying at the time, that “where there’s muck there’s money,” legitimized the grime, reek, and dust of industrial Sheffield, but Harry recognized later that it was a misfortune to be from there, for nobody had much ambition. [Continue reading…]


Why we need to tackle the growing mountain of ‘digital waste’

By Chris Preist, University of Bristol

We are very aware of waste in our lives today, from the culture of recycling to the email signatures that urge us not to print them off. But as more and more aspects of life become reliant on digital technology, have we stopped to consider the new potential avenues of waste that are being generated? It’s not just about the energy and resources used by our devices – the services we run over the cloud can generate “digital waste” of their own.

Current approaches to reducing energy use focus on improving the hardware: better datacentre energy management, improved electronics that provide more processing power for less energy, and compression techniques that mean images, videos and other files use less bandwidth as they are transmitted across networks. Our research, rather than focusing on making individual system components more efficient, seeks to understand the impact of any particular digital service – one delivered via a website or through the internet – and to re-design the software involved to make better, more efficient use of the technology that supports it.

We also examine what aspects of a digital service actually provide value to the end user, as establishing where resources and effort are wasted – digital waste – reveals what can be cut out. For example, MP3 audio compression works by removing frequencies that are inaudible or less audible to the human ear – shrinking the size of the file for minimal loss of audible quality.
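
The same principle can be sketched in a few lines: move a signal into the frequency domain, discard the components a listener would barely notice, and rebuild from the rest. Real MP3 encoding uses a psychoacoustic model, an MDCT and entropy coding, none of which appears in this toy example, which simply assumes numpy is available.

```python
# Toy illustration of lossy audio compression: drop frequency components that
# contribute little to what a listener hears, then reconstruct. Not real MP3.
import numpy as np

rate = 44100
t = np.arange(rate) / rate                       # one second of audio
# A 440 Hz tone plus a faint 18 kHz component that most adults barely hear.
signal = np.sin(2 * np.pi * 440 * t) + 0.01 * np.sin(2 * np.pi * 18000 * t)

spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), d=1 / rate)

# Crude stand-in for a psychoacoustic model: drop everything above 16 kHz and
# any frequency bin whose energy is negligible.
keep = (freqs <= 16000) & (np.abs(spectrum) > 1e-3 * np.abs(spectrum).max())
print(f"frequency bins kept: {keep.sum()} of {len(spectrum)}")

restored = np.fft.irfft(np.where(keep, spectrum, 0), n=len(signal))
print("largest difference from the original:", np.abs(restored - signal).max())
```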

This is no small task. Estimates have put the technology sector’s global carbon footprint at roughly 2% of worldwide emissions – almost as much as that generated by aviation. But there is a big difference: IT is a more pervasive, and in some ways more democratic, technology. Perhaps 6% or so of the world’s population will fly in a given year, while around 40% have access to the internet at home. More than a billion people have Facebook accounts. Digital technology and the online services it provides are used by far more of us, and far more often.

[Read more…]


Google’s new YouTube analysis app crowdsources war reporting

Wired reports: In armed conflicts of the past, the “fog of war” meant a lack of data. In the era of ubiquitous pocket-sized cameras, it often means an information overload.

Four years ago, when analysts at the non-profit Carter Center began using YouTube videos to analyze the escalating conflicts in Syria and Libya, they found that, in contrast to older wars, it was nearly impossible to keep up with the thousands of clips uploaded every month from the smartphones and cameras of both armed groups and bystanders. “The difference with Syria and Libya is that they’re taking place in a truly connected environment. Everyone is online,” says Chris McNaboe, the manager of the Carter Center’s Syria Mapping Project. “The amount of video coming out was overwhelming…There have been more minutes of video from Syria than there have been minutes of real time.”

To handle that flood of digital footage, his team has been testing a tool called Montage. Montage was built by the human rights-focused tech incubator Jigsaw, the subsidiary of Google’s parent company Alphabet that was formerly known as Google Ideas, to sort, map, and tag video evidence from conflict zones. Over the last few months, it allowed six Carter Center analysts to categorize video coming out of Syria—identifying government forces and each of the slew of armed opposition groups, recording the appearance of different armaments and vehicles, and keeping all of that data carefully marked with time stamps and locations to create a searchable, sortable and mappable catalog of the Syrian conflict. “Some of our Montage investigations have had over 600 videos in them,” says McNaboe. “Even with a small team we’ve been able to go through days’ worth of video in a relatively short amount of time.” [Continue reading…]
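
The catalog McNaboe describes is, at heart, a set of clip records that can be filtered by tag and time window and sorted chronologically. A minimal sketch of that kind of structure follows; the field names and queries are guesses for illustration, not Montage's actual data model or API.

```python
# A minimal sketch of a searchable, taggable video catalog of the sort the
# excerpt describes. Field names and example URLs are illustrative only.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class Clip:
    url: str
    recorded_at: datetime
    location: str                                    # e.g. a place name or lat/lon
    tags: List[str] = field(default_factory=list)    # e.g. ["armored vehicle"]

class Catalog:
    def __init__(self):
        self.clips: List[Clip] = []

    def add(self, clip: Clip) -> None:
        self.clips.append(clip)

    def search(self, tag: Optional[str] = None,
               start: Optional[datetime] = None,
               end: Optional[datetime] = None) -> List[Clip]:
        """Return clips matching a tag and/or time window, sorted by time."""
        results = [
            c for c in self.clips
            if (tag is None or tag in c.tags)
            and (start is None or c.recorded_at >= start)
            and (end is None or c.recorded_at <= end)
        ]
        return sorted(results, key=lambda c: c.recorded_at)

catalog = Catalog()
catalog.add(Clip("https://youtube.com/watch?v=example1",
                 datetime(2016, 2, 1, 14, 30), "Aleppo", ["armored vehicle"]))
catalog.add(Clip("https://youtube.com/watch?v=example2",
                 datetime(2016, 2, 3, 9, 0), "Homs", ["artillery"]))
print([c.url for c in catalog.search(tag="artillery")])
```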


Technology is not ruining our kids. Parents (and their technology) are ruining them

Jenny Anderson writes: Many of us worry what technology is doing to our kids. A cascade of reports shows that their addiction to iAnything is diminishing empathy, increasing bullying (pdf), robbing them of time to play, and just be. So we parents set timers, lock away devices and drone on about the importance of actual real-live human interaction. And then we check our phones.

Sherry Turkle, a professor in the program in Science, Technology and Society at M.I.T. and the author, most recently, of Reclaiming Conversation: The Power of Talk in a Digital Age, turned the tables by imploring parents to take control and model better behavior.

A 15-year-old boy told her that “someday he wanted to raise a family, not the way his parents are raising him (with phones out during meals and in the park and during his school sports events) but the way his parents think they are raising him — with no phones at meals and plentiful family conversation.”

Turkle explains the cost of too much technology in stark terms: Our children can’t engage in conversation, or experience solitude, making it very hard for them to be empathetic. “In one experiment, many student subjects opted to give themselves mild electric shocks rather than sit alone with their thoughts,” she noted.

Unfortunately, it seems we parents are the solution. (Newsflash, kids aren’t going to give up their devices because they are worried about how it may influence their future ability to empathize.)

That means exercising some self-control. Many of us aren’t exactly paragons of virtue in this arena. [Continue reading…]


Yuri Milner is spending $100 million on a probe that could travel to Alpha Centauri within a generation

Ross Andersen writes: In the Southern Hemisphere’s sky, there is a constellation, a centaur holding a spear, its legs raised in mid-gallop. The creature’s front hoof is marked by a star that has long hypnotized humanity, with its brightness, and more recently, its proximity.

Since the dawn of written culture, at least, humans have dreamt of star travel. As the nearest star system to Earth, Alpha Centauri is the most natural subject of these dreams. To a certain cast of mind, the star seems destined to figure prominently in our future.

In the four centuries since the Scientific Revolution, a series of increasingly powerful instruments has slowly brought Alpha Centauri into focus. In 1689, the Jesuit priest Jean Richaud fixed his telescope on a comet, as it was streaking through the stick-figure centaur. He was startled to find not one, but two stars twinkling in its hoof. In 1915, a third star was spotted, this one a small, red satellite of the system’s two central, sunlike stars.

To say that Alpha Centauri is the nearest star system to Earth is not to say that it’s near. A 25 trillion mile abyss separates us. Alpha Centauri’s light travels to Earth at the absurd rate of 186,000 miles per second, and still takes more than four years to arrive. [Continue reading…]
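
The more-than-four-years figure is simple division, and it checks out:

```python
# Quick check of the travel-time claim above: ~25 trillion miles at light speed.
miles_to_alpha_centauri = 25e12
miles_per_second = 186_000
seconds_per_year = 365.25 * 24 * 3600

print(miles_to_alpha_centauri / miles_per_second / seconds_per_year)  # ~4.26 years
```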


Technology, the faux equalizer


Adrienne LaFrance writes: Just over a century ago, an electric company in Minnesota took out a full-page newspaper advertisement and listed 1,000 uses for electricity.

Bakers could get ice-cream freezers and waffle irons! Hat makers could put up electric signs! Paper-box manufacturers could use glue pots and fans! Then there were the at-home uses: decorative lights, corn poppers, curling irons, foot warmers, massage machines, carpet sweepers, sewing machines, and milk warmers all made the list. “Make electricity cut your housework in two,” the advertisement said.

This has long been the promise of new technology: That it will make your work easier, which will make your life better. The idea is that the arc of technology bends toward social progress. This is practically the mantra of Silicon Valley, so it’s not surprising that Google’s CEO, Sundar Pichai, seems similarly teleological in his views. [Continue reading…]


FBI backs off from its day in court with Apple this time – but there will be others

By Martin Kleppmann, University of Cambridge

After a very public stand-off over a terrorist’s encrypted smartphone, the FBI has backed down in its court case against Apple, stating that an “outside party” – rumoured to be an Israeli mobile forensics company – has found a way of accessing the data on the phone.

The exact method is not known. Forensics experts have speculated that it involves tricking the hardware into not recording how many passcode combinations have been tried, which would allow all 10,000 possible four-digit passcodes to be tried within a fairly short time. This technique would apply to the iPhone 5C in question, but not newer models, which have stronger hardware protection through the so-called secure enclave, a chip that performs security-critical operations in hardware. The FBI has denied that the technique involves copying storage chips.
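
The reason the retry counter matters so much is that a four-digit passcode offers only 10,000 possibilities. The sketch below assumes roughly 80 milliseconds per attempt, standing in for the delay the hardware key derivation is said to impose; the timing of the technique actually used on this phone is not public.

```python
# Why the retry counter is the whole game for a four-digit passcode: there are
# only 10,000 candidates. The ~80 ms per attempt is an assumed figure for the
# hardware-backed key derivation, not a detail of the FBI's actual method.
from itertools import product

candidates = ["".join(digits) for digits in product("0123456789", repeat=4)]
print(len(candidates))                                  # 10000

seconds_per_attempt = 0.08                              # assumed, see note above
worst_case_minutes = len(candidates) * seconds_per_attempt / 60
print(f"worst case: about {worst_case_minutes:.0f} minutes")   # roughly 13 minutes
```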

So while the details of the technique remain classified, it’s reasonable to assume that any security technology can be broken given sufficient resources. In fact, the technology industry’s dirty secret is that most products are frighteningly insecure.

[Read more…]


Computer’s Go victory reminds us that we need to question our reliance on AI

By Nello Cristianini, University of Bristol

The victory of a computer over one of the world’s strongest players of the game Go has been hailed by many as a landmark event in artificial intelligence. But why? After all, computers have beaten us at games before, most notably in 1997 when the computer Deep Blue triumphed over chess grandmaster Garry Kasparov.

We can get a hint of why the Go victory is important, however, by looking at the difference between the companies behind these game-playing computers. Deep Blue was the product of IBM, which was back then largely a hardware company. But the software – AlphaGo – that beat Go player Lee Sedol was created by DeepMind, a branch of Google based in the UK specialising in machine learning.

AlphaGo’s success wasn’t because of so-called “Moore’s law”, which states that computer processor speed doubles roughly every two years. Computers haven’t yet become powerful enough to calculate all the possible moves in Go – which is much harder to do than in chess. Instead, DeepMind’s work was based on carefully deploying new machine-learning methods and integrating them within more standard game-playing algorithms. Using vast amounts of data, AlphaGo has learnt how to focus its resources where they are most needed, and how to do a better job with those resources.
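
The general pattern described here is a standard game-tree search whose horizon positions are scored by an evaluation function, which is where a learned value network would plug in. AlphaGo itself combined Monte Carlo tree search with policy and value networks; the depth-limited negamax below, playing a toy take-one-to-three-stones game, is only a sketch of that division of labour, with a hand-written heuristic standing in for the learned network.

```python
# Minimal sketch: a standard game-tree search (depth-limited negamax) whose
# horizon positions are scored by an evaluation function, the slot a learned
# value network would fill. The game: take 1-3 stones, last stone wins.
def value_estimate(pile: int) -> float:
    """Stand-in for a learned value network: score the position for the
    player to move. (In this toy game, piles divisible by 4 are losses.)"""
    return -1.0 if pile % 4 == 0 else 1.0

def negamax(pile: int, depth: int) -> float:
    if pile == 0:
        return -1.0                  # previous player took the last stone: we lost
    if depth == 0:
        return value_estimate(pile)  # evaluation replaces exhaustive search
    return max(-negamax(pile - take, depth - 1)
               for take in range(1, min(3, pile) + 1))

def best_move(pile: int, depth: int = 4) -> int:
    return max(range(1, min(3, pile) + 1),
               key=lambda take: -negamax(pile - take, depth - 1))

print(best_move(21))   # -> 1, leaving a pile of 20 (a multiple of 4)
```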

[Read more…]


Blind faith in robots

Melissa Dahl writes: The fire alarm goes off, and it’s apparently not a mistake or a drill: Just outside the door, smoke fills the hallway. Luckily, you happen to have a guide for such a situation: a little bot with a sign that literally reads EMERGENCY GUIDE ROBOT. But, wait — it’s taking you in the opposite direction of the way you came in, and it seems to want you to go down an unfamiliar hallway. Do you trust your own instinct and escape the way you came? Or do you trust the robot?

Probably, you will blindly follow the robot, according to the findings of a fascinating new study from the Georgia Institute of Technology. In an emergency situation — a fake one, though the test subjects didn’t know that — most people trusted the robot over their own instincts, even when the robot had shown earlier signs of malfunctioning. It’s a new wrinkle for researchers who study trust in human-robot interactions. Previously, this work had been focused on getting people to trust robotics, such as Google’s driverless cars. Now this new research hints at another problem: How do you stop people from trusting robots too much? It’s a timely question, especially considering the news this week of the first crash caused by one of Google’s self-driving cars. [Continue reading…]
