Category Archives: technology

FBI backs off from its day in court with Apple this time – but there will be others

By Martin Kleppmann, University of Cambridge

After a very public stand-off over a terrorist’s encrypted smartphone, the FBI has backed down in its court case against Apple, stating that an “outside party” – rumoured to be an Israeli mobile forensics company – has found a way of accessing the data on the phone.

The exact method is not known. Forensics experts have speculated that it involves tricking the hardware into not recording how many passcode combinations have been tried, which would allow all 10,000 possible four-digit passcodes to be tried within a fairly short time. This technique would apply to the iPhone 5C in question, but not newer models, which have stronger hardware protection through the so-called secure enclave, a chip that performs security-critical operations in hardware. The FBI has denied that the technique involves copying storage chips.
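
To get a sense of scale, here is a rough back-of-the-envelope sketch in Python. It assumes roughly 80 milliseconds per guess – a figure Apple has cited for the hardware-bound passcode key derivation on its devices – which is an illustrative assumption, not a detail of whatever method was actually used on this phone.

```python
# Rough estimate of a passcode brute force, assuming the retry counter and
# escalating delays have been disabled. The 80 ms per guess is an assumption
# (Apple's stated key-derivation cost), not a measured figure for this case.
SECONDS_PER_GUESS = 0.08

for label, combinations in [("4-digit passcode", 10_000),
                            ("6-digit passcode", 1_000_000)]:
    worst_case_seconds = combinations * SECONDS_PER_GUESS
    print(f"{label}: at most {worst_case_seconds / 60:.0f} minutes "
          f"({worst_case_seconds / 3600:.1f} hours)")

# 4-digit passcode: at most 13 minutes (0.2 hours)
# 6-digit passcode: at most 1333 minutes (22.2 hours)
```

This is also why the secure enclave in newer models matters: it enforces the attempt limit and delays in hardware, so the simple arithmetic above no longer applies.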

So while the details of the technique remain classified, it’s reasonable to assume that any security technology can be broken given sufficient resources. In fact, the technology industry’s dirty secret is that most products are frighteningly insecure.

Continue reading

Computer’s Go victory reminds us that we need to question our reliance on AI

By Nello Cristianini, University of Bristol

The victory of a computer over one of the world’s strongest players of the game Go has been hailed by many as a landmark event in artificial intelligence. But why? After all, computers have beaten us at games before, most notably in 1997 when the computer Deep Blue triumphed over chess grandmaster Garry Kasparov.

We can get a hint of why the Go victory is important, however, by looking at the difference between the companies behind these game-playing computers. Deep Blue was the product of IBM, which was back then largely a hardware company. But the software – AlphaGo – that beat Go player Lee Sedol was created by DeepMind, a branch of Google based in the UK specialising in machine learning.

AlphaGo’s success wasn’t because of so-called “Moore’s law”, the observation that computing power doubles roughly every two years. Computers haven’t yet become powerful enough to calculate all the possible moves in Go – which is much harder to do than in chess. Instead, DeepMind’s work was based on carefully deploying new machine-learning methods and integrating them within more standard game-playing algorithms. Using vast amounts of data, AlphaGo has learnt how to focus its resources where they are most needed, and how to do a better job with those resources.
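
A minimal sketch of what “focusing resources” can mean in a game-playing program, in Python. This is a toy stand-in, not AlphaGo’s actual method – AlphaGo combined deep policy and value networks with Monte Carlo tree search – but it shows the basic idea of letting a learned model decide which branches are worth exploring at all.

```python
# Toy policy-guided search: instead of expanding every legal move (hopeless in
# Go, with hundreds of candidate moves per position), expand only the few moves
# a learned policy ranks highest. `policy` and `evaluate` stand in for trained
# models supplied by the caller.

def search(state, depth, legal_moves, apply_move, policy, evaluate, top_k=3):
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)            # learned estimate of who is winning
    # Spend effort only where the policy says it is most needed.
    promising = sorted(moves, key=lambda m: policy(state, m), reverse=True)[:top_k]
    return max(-search(apply_move(state, m), depth - 1,
                       legal_moves, apply_move, policy, evaluate, top_k)
               for m in promising)
```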

Continue reading

Blind faith in robots

Melissa Dahl writes: The fire alarm goes off, and it’s apparently not a mistake or a drill: Just outside the door, smoke fills the hallway. Luckily, you happen to have a guide for such a situation: a little bot with a sign that literally reads EMERGENCY GUIDE ROBOT. But, wait — it’s taking you in the opposite direction from the way you came in, and it seems to want you to go down an unfamiliar hallway. Do you trust your own instinct and escape the way you came? Or do you trust the robot?

Probably, you will blindly follow the robot, according to the findings of a fascinating new study from the Georgia Institute of Technology. In an emergency situation — a fake one, though the test subjects didn’t know that — most people trusted the robot over their own instincts, even when the robot had shown earlier signs of malfunctioning. It’s a new wrinkle for researchers who study trust in human-robot interactions. Previously, this work had been focused on getting people to trust robotics, such as Google’s driverless cars. Now this new research hints at another problem: How do you stop people from trusting robots too much? It’s a timely question, especially considering the news this week of the first crash caused by one of Google’s self-driving cars. [Continue reading…]

Inside the artificial intelligence revolution

Rolling Stone reports: “Welcome to robot nursery school,” Pieter Abbeel says as he opens the door to the Robot Learning Lab on the seventh floor of a sleek new building on the northern edge of the UC-Berkeley campus. The lab is chaotic: bikes leaning against the wall, a dozen or so grad students in disorganized cubicles, whiteboards covered with indecipherable equations. Abbeel, 38, is a thin, wiry guy, dressed in jeans and a stretched-out T-shirt. He moved to the U.S. from Belgium in 2000 to get a Ph.D. in computer science at Stanford and is now one of the world’s foremost experts in understanding the challenge of teaching robots to think intelligently. But first, he has to teach them to “think” at all. “That’s why we call this nursery school,” he jokes.

He introduces me to Brett, a six-foot-tall humanoid robot made by Willow Garage, a high-profile Silicon Valley robotics manufacturer that is now out of business. The lab acquired the robot several years ago to experiment with. Brett, which stands for “Berkeley robot for the elimination of tedious tasks,” is a friendly-looking creature with a big, flat head and widely spaced cameras for eyes, a chunky torso, two arms with grippers for hands and wheels for feet. At the moment, Brett is off-duty and stands in the center of the lab with the mysterious, quiet grace of an unplugged robot. On the floor nearby is a box of toys that Abbeel and the students teach Brett to play with: a wooden hammer, a plastic toy airplane, some giant Lego blocks.

Brett is only one of many robots in the lab. In another cubicle, a nameless 18-inch-tall robot hangs from a sling on the back of a chair. Down in the basement is an industrial robot that plays in the equivalent of a robot sandbox for hours every day, just to see what it can teach itself. Across the street in another Berkeley lab, a surgical robot is learning how to stitch up human flesh, while a graduate student teaches drones to pilot themselves intelligently around objects. “We don’t want to have drones crashing into things and falling out of the sky,” Abbeel says. “We’re trying to teach them to see.”

Industrial robots have long been programmed with specific tasks: Move arm six inches to the left, grab module, twist to the right, insert module into PC board. Repeat 300 times each hour. These machines are as dumb as lawn mowers. But in recent years, breakthroughs in machine learning – algorithms that roughly mimic the human brain and allow machines to learn things for themselves – have given computers a remarkable ability to recognize speech and identify visual patterns. Abbeel’s goal is to imbue robots with a kind of general intelligence – a way of understanding the world so they can learn to complete tasks on their own. He has a long way to go. “Robots don’t even have the learning capabilities of a two-year-old,” he says. For example, Brett has learned to do simple tasks, such as tying a knot or folding laundry. Things that are simple for humans, such as recognizing that a crumpled ball of fabric on a table is in fact a towel, are surprisingly difficult for a robot, in part because a robot has no common sense, no memory of earlier attempts at towel-folding and, most important, no concept of what a towel is. All it sees is a wad of color. [Continue reading…]
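
The contrast Abbeel describes is easy to see in code. A traditionally programmed industrial arm runs a fixed script along the lines of the sketch below – every motion spelled out in advance, nothing sensed, nothing remembered. (The robot API names here are hypothetical, for illustration only.)

```python
# Hypothetical hard-coded pick-and-place routine: the "dumb as lawn mowers"
# style of industrial robot programming. No perception, no learning, no memory
# of previous attempts - just the same fixed motions, 300 times an hour.

def place_module(arm, board):
    arm.move_relative(x_inches=-6.0)      # move arm six inches to the left
    arm.close_gripper()                   # grab module
    arm.rotate_wrist(degrees=90)          # twist to the right
    arm.move_to(board.next_slot())        # position over the PC board
    arm.open_gripper()                    # insert module

def run_shift(arm, board, cycles_per_hour=300, hours=8):
    for _ in range(cycles_per_hour * hours):
        place_module(arm, board)
```

What Abbeel’s lab is after is the opposite: a robot that, shown a wad of colour on a table, can work out for itself that it is a towel and how to fold it.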

U.S. government agency says it has beaten Elon Musk and Bill Gates to holy grail of battery storage

The Guardian reports: A US government agency says it has attained the “holy grail” of energy – the next-generation system of battery storage that has been hotly pursued by the likes of Bill Gates and Elon Musk.

Advanced Research Projects Agency-Energy (Arpa-E) – a branch of the Department of Energy – says it achieved its breakthrough technology in seven years.

Ellen Williams, Arpa-E’s director, said: “I think we have reached some holy grails in batteries – just in the sense of demonstrating that we can create a totally new approach to battery technology, make it work, make it commercially viable, and get it out there to let it do its thing.”

If that’s the case, Arpa-E has come out ahead of Gates and Musk in the multi-billion-dollar race to build the next-generation battery for power companies and home storage.

Arpa-E was founded in 2009 under Barack Obama’s economic recovery plan to fund early stage research into the generation and storage of energy.

Such projects, or so-called moonshots, were widely seen as too risky for regular investors, but – if they succeed – could potentially be game-changing. [Continue reading…]

The technical reasons the FBI’s claim, ‘just this one phone,’ is bogus

Jonathan Zdziarski, an expert in iOS forensics, writes: For years, the government could come to Apple with a subpoena and a phone, and have the manufacturer provide a disk image of the device. This largely worked because Apple didn’t have to hack into their phones to do this. Up until iOS 8, the encryption Apple chose to use in their design was easily reversible when you had code execution on the phone (which Apple does). So all through iOS 7, Apple only needed to insert the key into the safe and provide the FBI with a copy of the data.

This service worked like a “black box”, and while Apple may have needed to explain their methods in court at some point, they were more likely considered a neutral third-party lab, much as most forensics companies would be if you sent them a DNA sample. The level of validation and accountability here is relatively low, and methods can often be opaque; that is, Apple could simply claim that the tech involved was a trade secret, and get off without much more than an explanation. An engineer at Apple could hack up a quick and dirty tool to dump the disk, and nobody would ever need to see it, because Apple was providing a lab service and its tools were considered more or less trade secrets.

Now let’s contrast that history with what the FBI and the courts are ordering Apple to do here. The FBI could have come to Apple with a court order stating it must brute-force the PIN on the phone and deliver the contents. It would have been difficult to get a judge to sign off on that, since being made to hack into your own devices quite boldly exceeds the notion of “reasonable assistance”. No, to slide this by, the FBI was more clever. They requested that Apple develop a forensics tool, but did not ask Apple to perform the actual brute force itself. [Continue reading…]

Even if your data is secure, your phone could set your head on fire

SF Gate reports: There is a chance your cell phone could set your head on fire, according to science (and the New York Police Department).

Those intimate enough with their phones to charge them under the pillow while they sleep could wake to find themselves beating out the flames of a smothered device.

In an ominous and unexplained public service announcement, the 33rd Precinct in New York’s Washington Heights sent out the following tweet Monday: [Continue reading…]

Doing more with less: The economic lesson of Peak Paper

John Quiggin writes: For most of us, the industrial economy is a thing of the past. In the entire United States, large factories employ fewer than 2 million people. Even adding China to the picture does not change things much. And yet the conceptual categories of the 20th century still dominate our thinking. We remain fixated on the industrial model of economic growth, where ‘growth’ means ‘more of everything’, and we can express our rate of development in a single number. This model leads naturally to the conclusion that economic expansion must eventually run up against constraints on the availability of natural resources, such as trees to make paper.

And yet in 2013, despite positive growth overall, the world reached ‘Peak Paper’: global paper production and consumption reached their maximum, flattened out, and are now falling. A prediction that was over-hyped in the 20th century and then derided in the early 2000s – namely, the Paperless Office – is finally being realised. Growth continues, but paper is in retreat. Why did this seem so unlikely only a decade ago?

The problem is a standard assumption of macroeconomics – namely, that all sectors of the economy expand at a roughly equal rate. If this ‘fixed proportions’ assumption does not hold, the theory used to construct GDP numbers ceases to work, and the concept of a ‘rate of growth’ is no longer meaningful. Until the end of the 20th century, these assumptions did in fact work reasonably well for paper, books and newspapers. The volume of information increased somewhat more rapidly than the economy as a whole, but not so rapidly as to undermine the notion of an overall rate of economic growth. The volume of printed matter grew steadily, to around a million new books every year, and the demand for paper for printing grew in line with demand for books. [Continue reading…]
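
A small numerical illustration of the point (toy numbers of my own, not Quiggin’s): give one sector a fast growth rate and another a slow decline, and the economy-wide “rate of growth” stops being a single stable number, because the sector weights keep shifting.

```python
# Two-sector toy economy: "information" grows 10% a year, "paper" shrinks 2%.
# Each sector has a perfectly constant growth rate, yet the aggregate rate
# drifts upward every year as the faster sector's share of the economy grows -
# the fixed-proportions assumption behind a single growth number has failed.

info, paper = 100.0, 100.0                 # equal output in year 0
for year in range(1, 11):
    new_info, new_paper = info * 1.10, paper * 0.98
    aggregate = (new_info + new_paper) / (info + paper) - 1
    print(f"year {year:2d}: aggregate growth {aggregate:6.2%}")
    info, paper = new_info, new_paper

# year  1: aggregate growth  4.00%
# ...
# year 10: aggregate growth  6.87%
```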

Invasion of the body snatchers

Jacob Weisberg writes: “As smoking gives us something to do with our hands when we aren’t using them, Time gives us something to do with our minds when we aren’t thinking,” Dwight Macdonald wrote in 1957. With smartphones, the issue never arises. Hands and mind are continuously occupied texting, e-mailing, liking, tweeting, watching YouTube videos, and playing Candy Crush.

Americans spend an average of five and a half hours a day with digital media, more than half of that time on mobile devices, according to the research firm eMarketer. Among some groups, the numbers range much higher. In one recent survey, female students at Baylor University reported using their cell phones an average of ten hours a day. Three quarters of eighteen-to-twenty-four-year-olds say that they reach for their phones immediately upon waking up in the morning. Once out of bed, we check our phones 221 times a day — an average of every 4.3 minutes — according to a UK study. This number actually may be too low, since people tend to underestimate their own mobile usage. In a 2015 Gallup survey, 61 percent of people said they checked their phones less frequently than others they knew.
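
As an aside, the “221 times a day” and “every 4.3 minutes” figures only line up if you assume roughly sixteen waking hours:

\[
\frac{16 \times 60\ \text{minutes}}{221\ \text{checks}} \approx 4.3\ \text{minutes per check}
\]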

Our transformation into device people has happened with unprecedented suddenness. The first touchscreen-operated iPhones went on sale in June 2007, with the first Android-powered phones following a year later. Smartphones went from 10 percent to 40 percent market penetration faster than any other consumer technology in history. In the United States, adoption hit 50 percent only three years ago. Yet today, not carrying a smartphone indicates eccentricity, social marginalization, or old age.

What does it mean to shift overnight from a society in which people walk down the street looking around to one in which people walk down the street looking at machines? [Continue reading…]

As one of those eccentric, socially marginalized, but not quite old people without a smartphone, I now live in a world in which the mass of humanity seems to have become myopic.

A driver remains stationary in front of a green light.

A couple sit next to each other in an airport, wrapped in silence with attention directed elsewhere down their mutually exclusive wormholes.

A jogger in the woods hears no birdsong because his ears are stuffed with plastic buds delivering private tunes.

Amidst all this divided attention, one thing seems abundantly clear: devices tap into and amplify the desire to be someplace else.

To be confined to the present place and the present time is to be trapped in a prison cell from which the smartphone offers escape — though of course it doesn’t.

What it does is produce an itch in time; a restless sense that we don’t have enough — that an elusive missing something might soon appear on that mesmerizing little touchscreen.

The effect of this refusal to be where we are is to impoverish life, as our effort to make it larger ends up doing the reverse.

Did the Vikings use crystal ‘sunstones’ to discover America?

By Stephen Harding, University of Nottingham

Ancient records tell us that the intrepid Viking seafarers who discovered Iceland, Greenland and eventually North America navigated using landmarks, birds and whales, and little else. There’s little doubt that Viking sailors would also have used the positions of stars at night and the sun during the daytime, and archaeologists have discovered what appears to be a kind of Viking navigational sundial. But without magnetic compasses, like all ancient sailors they would have struggled to find their way once the clouds came over.

However, there are also several reports in Nordic sagas and other sources of a sólarsteinn, or “sunstone”. The literature doesn’t say what this was used for, but it has sparked decades of research examining whether it might be a reference to a more intriguing form of navigational tool.

The idea is that the Vikings may have used the interaction of sunlight with particular types of crystal to create a navigational aid that may even have worked in overcast conditions. This would mean the Vikings had discovered the basic principles of measuring polarised light centuries before they were explained scientifically – principles that are today used to identify and measure different chemicals. Scientists are now getting closer to establishing whether this form of navigation would have been possible, or whether it is just a fanciful theory.
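
The optics behind the idea are well established, even if the archaeology is not. Skylight is partially polarised at right angles to the direction of the sun, and a birefringent crystal such as Iceland spar splits incoming light into two beams whose brightnesses depend on the angle between the crystal and that polarisation:

\[
I_{1} = I_0 \cos^2\theta, \qquad I_{2} = I_0 \sin^2\theta
\]

Rotate the crystal until the two images look equally bright (θ = 45°) and you have fixed the sky’s polarisation direction, which constrains where the hidden sun can be; readings from two patches of sky pin it down. Whether Viking navigators actually exploited this is the question the research described above is trying to settle.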

Continue reading

Most threats to humans come from science and technology, warns Hawking

The Guardian reports: The human race faces one of its most dangerous centuries yet as progress in science and technology becomes an ever greater threat to our existence, Stephen Hawking warns.

The chances of disaster on planet Earth will rise to a near certainty in the next thousand or ten thousand years, the eminent cosmologist said, but it will take more than a century to set up colonies in space where human beings could live on among the stars.

“We will not establish self-sustaining colonies in space for at least the next hundred years, so we have to be very careful in this period,” Hawking said. His comments echo those of Lord Rees, the astronomer royal, who raised his own concerns about the risks of self-annihilation in his 2003 book Our Final Century.

Speaking to the Radio Times ahead of the BBC Reith Lecture, in which he will explain the science of black holes, Hawking said most of the threats humans now face come from advances in science and technology, such as nuclear weapons and genetically engineered viruses. [Continue reading…]

The deep space of digital reading

Paul La Farge writes: In A History of Reading, the Canadian novelist and essayist Alberto Manguel describes a remarkable transformation of human consciousness, which took place around the 10th century A.D.: the advent of silent reading. Human beings have been reading for thousands of years, but in antiquity, the normal thing was to read aloud. When Augustine (the future St. Augustine) went to see his teacher, Ambrose, in Milan, in 384 A.D., he was stunned to see him looking at a book and not saying anything. With the advent of silent reading, Manguel writes,

… the reader was at last able to establish an unrestricted relationship with the book and the words. The words no longer needed to occupy the time required to pronounce them. They could exist in interior space, rushing on or barely begun, fully deciphered or only half-said, while the reader’s thoughts inspected them at leisure, drawing new notions from them, allowing comparisons from memory or from other books left open for simultaneous perusal.

To read silently is to free your mind to reflect, to remember, to question and compare. The cognitive scientist Maryanne Wolf calls this freedom “the secret gift of time to think”: When the reading brain becomes able to process written symbols automatically, the thinking brain, the I, has time to go beyond those symbols, to develop itself and the culture in which it lives.

A thousand years later, critics fear that digital technology has put this gift in peril. The Internet’s flood of information, together with the distractions of social media, threaten to overwhelm the interior space of reading, stranding us in what the journalist Nicholas Carr has called “the shallows,” a frenzied flitting from one fact to the next. In Carr’s view, the “endless, mesmerizing buzz” of the Internet imperils our very being: “One of the greatest dangers we face,” he writes, “as we automate the work of our minds, as we cede control over the flow of our thoughts and memories to a powerful electronic system, is … a slow erosion of our humanness and our humanity.”

There’s no question that digital technology presents challenges to the reading brain, but, seen from a historical perspective, these look like differences of degree, rather than of kind. To the extent that digital reading represents something new, its potential cuts both ways. Done badly (which is to say, done cynically), the Internet reduces us to mindless clickers, racing numbly to the bottom of a bottomless feed; but done well, it has the potential to expand and augment the very contemplative space that we have prized in ourselves ever since we learned to read without moving our lips. [Continue reading…]

Russia confirms jet broke up in mid-air; was 2001 ‘tail strike’ the cause?

Clive Irving writes: The head of Russia’s aviation accident investigation body has confirmed that the Russian Airbus A321 that crashed in the Sinai on Saturday broke up in mid-air. Victor Sorochenko said that the wreckage was spread over an area of eight square miles — not concentrated in one debris field.

This would be consistent with a severe and very sudden structural failure that resulted in the airplane literally falling out of the sky from its cruise altitude of 31,000 feet. (An Egyptian statement that the pilot had reported a technical problem and asked for a diversion to the nearest airport was later withdrawn.)

The Airbus A321 was 18 years old, and had made 21,000 flights, a relatively low number when compared with the much higher daily frequency of flights made on budget airlines. With a modern airplane like this and regular maintenance, its age is not in itself a cause for concern.

What does, however, jump out from this particular airplane’s record is an accident that it suffered on November 16, 2001, while landing at Cairo (while owned and operated by Middle East Airlines). As it touched down the nose was pointing at too high an angle and the tail hit the tarmac — heavily enough to cause substantial damage.

Tail strikes like this are not uncommon. The airplane was repaired and would have been rigorously inspected then and during subsequent maintenance checks. (Although the airplane was owned by a Russian company, Kogalymavia, operating as Metrojet, it was registered in Ireland and the Irish authorities were responsible for its certification checks.) Nonetheless investigators who will soon have access to the Airbus’s flight data recorder will take a hard look at what is called the rear pressure bulkhead, a critical seal in the cabin’s pressurization system. [Continue reading…]

Why did it take the U.S. so long to build its first offshore wind farm?

Slate reports: Wind-generated electricity has become a big business in the United States. From virtually nothing a decade ago, it has boomed to account for about 5 percent of electricity generated each year. In certain states, at certain times, cheap, emission-free wind can account for a huge chunk of supply, as happened recently in Texas. Wind adds capacity in large chunks — a wind farm may consist of scores of turbines arrayed across vast expanses of land. So far this year, according to the Federal Energy Regulatory Commission, nearly 3 gigawatts of wind capacity has come online in the U.S., accounting for 40 percent of all new electricity-generating capacity.

But although the U.S. has become a global leader in wind, there’s a subsector in which it’s lagged behind: offshore wind.

Around the world, and especially in northern Europe, anchoring wind turbines to the bed of the sea—where the wind is consistent and strong—has become a huge business. Denmark has installed so many offshore wind turbines that it often produces far more wind power than it can actually use. Earlier this week, DONG Energy announced plans to develop the largest offshore wind farm in the world, an 87-turbine site off the coast of Wales with a capacity of 660 megawatts. That’s about the size of a decent coal-fired plant. [Continue reading…]

It’s completely ridiculous to think that humans could live on Mars

Danielle and Astro Teller write: Our 12-year-old daughter, who, like us, is a big fan of The Martian by Andy Weir, said, “I can’t stand that people think we’re all going to live on Mars after we destroy our own planet. Even after we’ve made the Earth too hot and polluted for humans, it still won’t be as bad as Mars. At least there’s plenty of water here, and the atmosphere won’t make your head explode.”

What makes The Martian so wonderful is that the protagonist survives in a brutally hostile environment, against all odds, by exploiting science in clever and creative ways. To nerds like us, that’s better than Christmas morning or a hot fudge sundae. (One of us is nerdier than the other — I’m not naming any names, but his job title is “Captain of Moonshots.”) The idea of using our ingenuity to explore other planets is thrilling. Our daughter has a good point about escaping man-made disaster on Earth by colonizing Mars, though. It doesn’t make a lot of sense.

Mars has almost no surface water; a toxic atmosphere that is too thin for humans to survive without pressure suits; deadly solar radiation; temperatures lower than those in Antarctica; and few to none of the natural resources that have been critical to human success on Earth. Smart people have proposed solutions for those pesky environmental issues, some of which are seriously sci-fi, like melting the polar ice caps with nuclear bombs. But those aren’t even the real problems.

The real problems have to do with human nature and economics. First, we live on a planet that is perfect for us, and we seem to be unable to prevent ourselves from making it less and less habitable. We’re like a bunch of teenagers destroying our parents’ mansion in one long, crazy party, figuring that our backup plan is to run into the forest and build our own house. We’ll worry about how to get food and a good sound system later. Proponents of Mars colonization talk about “terraforming” Mars to make it more like Earth, but in the meantime, we’re “marsforming” Earth by making our atmosphere poisonous and annihilating our natural resources. We are also well on our way to making Earth one big desert, just like Mars. [Continue reading…]

Technology is implicated in an assault on empathy

Sherry Turkle writes: Studies of conversation both in the laboratory and in natural settings show that when two people are talking, the mere presence of a phone on a table between them or in the periphery of their vision changes both what they talk about and the degree of connection they feel. People keep the conversation on topics where they won’t mind being interrupted. They don’t feel as invested in each other. Even a silent phone disconnects us.

In 2010, a team at the University of Michigan led by the psychologist Sara Konrath put together the findings of 72 studies that were conducted over a 30-year period. They found a 40 percent decline in empathy among college students, with most of the decline taking place after 2000.

Across generations, technology is implicated in this assault on empathy. We’ve gotten used to being connected all the time, but we have found ways around conversation — at least from conversation that is open-ended and spontaneous, in which we play with ideas and allow ourselves to be fully present and vulnerable. But it is in this type of conversation — where we learn to make eye contact, to become aware of another person’s posture and tone, to comfort one another and respectfully challenge one another — that empathy and intimacy flourish. In these conversations, we learn who we are.

Of course, we can find empathic conversations today, but the trend line is clear. It’s not only that we turn away from talking face to face to chat online. It’s that we don’t allow these conversations to happen in the first place because we keep our phones in the landscape. [Continue reading…]

Why access to computers won’t automatically boost children’s grades

By Steve Higgins, Durham University

Filling classrooms to the brim with computers and tablets won’t necessarily help children get better grades. That’s the finding of a new report from the Organisation for Economic Co-operation and Development (OECD).

The report reviews the links between test results of 15-year-olds from 64 countries who took part in the OECD’s 2012 Programme for International Student Assessment (PISA) and how much the pupils used technology at home and school.

Pupils in 31 countries, not including the UK, also took part in extra online tests of digital reading, navigation and mathematics. The countries and cities that came top in these online tests were Singapore, South Korea, Hong Kong and Japan – which also perform well in paper-based tests. But pupils in these countries don’t necessarily spend a lot of time on computers in class.

The report also shows that in 2012, 96% of 15-year-old students in the 64 countries in the study reported that they have a computer at home, but only 72% reported that they used a desktop, laptop or tablet computer at school.

The OECD found that it was not the amount of digital technology used in schools that was linked with scores in the PISA tests, but what teachers ask pupils to do with computers or tablets that counts. There is also an increasing digital divide between school and home.

Continue reading

Why futurism has a cultural blindspot

Tom Vanderbilt writes: In early 1999, during the halftime of a University of Washington basketball game, a time capsule from 1927 was opened. Among the contents of this portal to the past were some yellowing newspapers, a Mercury dime, a student handbook, and a building permit. The crowd promptly erupted into boos. One student declared the items “dumb.”

Such disappointment in time capsules seems to be endemic, suggests William E. Jarvis in his book Time Capsules: A Cultural History. A headline from The Onion, he notes, sums it up: “Newly unearthed time capsule just full of useless old crap.” Time capsules, after all, exude a kind of pathos: They show us that the future was not quite as advanced as we thought it would be, nor did it come as quickly. The past, meanwhile, turns out not to be as radically distinct as we thought.

In his book Predicting the Future, Nicholas Rescher writes that “we incline to view the future through a telescope, as it were, thereby magnifying and bringing nearer what we can manage to see.” So too do we view the past through the other end of the telescope, making things look farther away than they actually were, or losing sight of some things altogether.

These observations apply neatly to technology. We don’t have the personal flying cars we predicted we would. Coal, notes the historian David Edgerton in his book The Shock of the Old, was a bigger source of power at the dawn of the 21st century than in sooty 1900; steam was more significant in 1900 than 1800.

But when it comes to culture we tend to believe not that the future will be very different than the present day, but that it will be roughly the same. Try to imagine yourself at some future date. Where do you imagine you will be living? What will you be wearing? What music will you love?

Chances are, that person resembles you now. As the psychologist George Loewenstein and colleagues have argued, in a phenomenon they termed “projection bias,” people “tend to exaggerate the degree to which their future tastes will resemble their current tastes.” [Continue reading…]
