Category Archives: Science/Technology

Why access to computers won’t automatically boost children’s grades

By Steve Higgins, Durham University

Filling classrooms to the brim with computers and tablets won’t necessarily help children get better grades. That’s the finding of a new report from the Organisation for Economic Co-operation and Development (OECD).

The report reviews the links between test results of 15-year-olds from 64 countries who took part in the OECD’s 2012 Programme for International Student Assessment (PISA) and how much the pupils used technology at home and school.

Pupils in 31 countries, not including the UK, also took part in extra online tests of digital reading, navigation and mathematics. The countries and cities that came top in these online tests were Singapore, South Korea, Hong Kong and Japan – who also perform well in paper-based tests. But pupils in these countries don’t necessarily spend a lot of time on computers in class.

The report also shows that in 2012, 96% of 15-year-old students in the 64 countries in the study reported that they had a computer at home, but only 72% reported that they used a desktop, laptop or tablet computer at school.

The OECD found that it was not the amount of digital technology used in schools that was linked with scores in the PISA tests, but what teachers ask pupils to do with computers or tablets that counts. There is also an increasing digital divide between school and home.

Continue reading

Why futurism has a cultural blindspot

Tom Vanderbilt writes: In early 1999, during the halftime of a University of Washington basketball game, a time capsule from 1927 was opened. Among the contents of this portal to the past were some yellowing newspapers, a Mercury dime, a student handbook, and a building permit. The crowd promptly erupted into boos. One student declared the items “dumb.”

Such disappointment in time capsules seems to be endemic, suggests William E. Jarvis in his book Time Capsules: A Cultural History. A headline from The Onion, he notes, sums it up: “Newly unearthed time capsule just full of useless old crap.” Time capsules, after all, exude a kind of pathos: They show us that the future was not quite as advanced as we thought it would be, nor did it come as quickly. The past, meanwhile, turns out not to be as radically distinct as we thought.

In his book Predicting the Future, Nicholas Rescher writes that “we incline to view the future through a telescope, as it were, thereby magnifying and bringing nearer what we can manage to see.” So too do we view the past through the other end of the telescope, making things look farther away than they actually were, or losing sight of some things altogether.

These observations apply neatly to technology. We don’t have the personal flying cars we predicted we would. Coal, notes the historian David Edgerton in his book The Shock of the Old, was a bigger source of power at the dawn of the 21st century than in sooty 1900; steam was more significant in 1900 than 1800.

But when it comes to culture we tend to believe not that the future will be very different than the present day, but that it will be roughly the same. Try to imagine yourself at some future date. Where do you imagine you will be living? What will you be wearing? What music will you love?

Chances are, that person resembles you now. As the psychologist George Loewenstein and colleagues have argued, in a phenomenon they termed “projection bias,” people “tend to exaggerate the degree to which their future tastes will resemble their current tastes.” [Continue reading…]

Why should we place our faith in science?

By Jonathan Keith, Monash University

Most of us would like to think scientific debate does not operate like the comments section of online news articles. These are frequently characterised by inflexibility, truculence and expostulation. Scientists are generally a little more civil, but sometimes not by much.

There is a more fundamental issue here than politeness, though. Science has a reputation as an arbiter of fact above and beyond just personal opinion or bias. The term “scientific method” suggests there exists an agreed upon procedure for processing evidence which, while not infallible, is at least impartial.

So when even the most respected scientists can arrive at different, deeply held convictions from the same evidence, it undermines the perceived impartiality of the scientific method. It demonstrates that science involves an element of subjective or personal judgement.

Yet personal judgements are not mere occasional intruders on science; they are a necessary part of almost every step of reasoning about evidence.

Continue reading

The climate story nobody talks about

Adam Frank writes: On Nov. 30, world leaders will gather in Paris for a pivotal United Nations conference on climate change.

Given its importance, I want to use the next couple months to explore some alternative perspectives on the unruly aggregate of topics lumped together as “climate change.”

There is an urgent demand for such alternative narratives and it rises, in part, from the ridiculous stalemate we find ourselves in today. But the endless faux “debate” about the state of climate science also obscures a deeper — and more profound — reality: We’ve become a species of enormous capacities with the power to change an entire planet. So, what exactly does this mean?

In service of answering this question and looking for perspectives on climate change beyond the usual focus on controversy, let’s begin by acknowledging a single fact that’s rarely discussed in the media: Climate science is a triumph of human civilization.

Landing on the moon. The development of relativity theory. The discovery of DNA. We rightfully hail these accomplishments as testaments to the creative power of the human imagination. We point to them as the highest achievements of our species, calling them milestones in our collective evolution.

But climate science is no different. It, too, belongs in that short list of epoch-making human efforts. [Continue reading…]

Over half of psychology studies fail reproducibility test

Nature reports: Don’t trust everything you read in the psychology literature. In fact, two thirds of it should probably be distrusted.

In the biggest project of its kind, Brian Nosek, a social psychologist and head of the Center for Open Science in Charlottesville, Virginia, and 269 co-authors repeated work reported in 98 original papers from three psychology journals, to see if they independently came up with the same results.

The studies they took on ranged from whether expressing insecurities perpetuates them to differences in how children and adults respond to fear stimuli, to effective ways to teach arithmetic. [Continue reading…]

Living and working under the control of invisible digital overlords

Frank Pasquale writes: In a recent podcast series called Instaserfs, a former Uber driver named Mansour gave a chilling description of the new, computer-mediated workplace. First, the company tried to persuade him to take a predatory loan to buy a new car. Apparently a number cruncher deemed him at high risk of defaulting. Second, Uber would never respond in person to him – it just sent text messages and emails. This style of supervision was a series of take-it-or-leave-it ultimatums – a digital boss coded in advance.

Then the company suddenly took a larger cut of revenues from him and other drivers. And finally, what seemed most outrageous to Mansour: his job could be terminated without notice if a few passengers gave him one-star reviews, since that could drag his average below 4.7. According to him, Uber has no real appeal recourse or other due process in place for a rating system that can instantly put a driver out of work – it simply crunches the numbers.
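
To make that arithmetic concrete, here is a toy sketch in Python. The ratings are invented for illustration; this is not Uber's actual data or deactivation policy, only a demonstration of how quickly a handful of one-star reviews can pull a strong running average under a 4.7 cutoff.

```python
# Toy illustration only: invented ratings, not Uber's data or policy.
def average_rating(ratings):
    return sum(ratings) / len(ratings)

mostly_good = [5] * 96 + [1] * 4     # 100 recent trips, four one-star reviews
slightly_worse = [5] * 92 + [1] * 8  # the same driver with eight one-star reviews

for trips in (mostly_good, slightly_worse):
    avg = average_rating(trips)
    print(f"average {avg:.2f} -> below 4.7 cutoff: {avg < 4.7}")

# average 4.84 -> below 4.7 cutoff: False
# average 4.68 -> below 4.7 cutoff: True
```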

Mansour’s story compresses long-standing trends in credit and employment – and it’s by no means unique. Online retailers live in fear of a ‘Google Death Penalty’ – a sudden, mysterious drop in search-engine rankings if they do something judged fraudulent by Google’s spam detection algorithms. Job applicants at Walmart in the US and other large companies take mysterious ‘personality tests’, which process their responses in undisclosed ways. And white-collar workers face CV-sorting software that may understate, or entirely ignore, their qualifications. One algorithmic CV analyser found all 29,000 people who applied for a ‘reasonably standard engineering position’ unqualified.

The infancy of the internet is over. As online spaces mature, Facebook, Google, Apple, Amazon, and other powerful corporations are setting the rules that govern competition among journalists, writers, coders, and e-commerce firms. Uber and Postmates and other platforms are adding a code layer to occupations like driving and service work. Cyberspace is no longer an escape from the ‘real world’. It is now a force governing it via algorithms: recipe-like sets of instructions to solve problems. From Google search to OkCupid matchmaking, software orders and weights hundreds of variables into clean, simple interfaces, taking us from query to solution. Complex mathematics governs such answers, but it is hidden from plain view, thanks either to secrecy imposed by law, or to complexity outsiders cannot unravel. [Continue reading…]

Landmark discoveries that were later debunked

Shannon Hall writes: It begins with the smallest anomaly. The first exoplanets were the slightest shifts in a star’s light. The Higgs boson was just a bump in the noise. And the Big Bang sprang from a few rapidly moving galaxies that should have been staying put. Great scientific discoveries are born from puny signals that prompt attention.

And now another tantalizing result is gathering steam, stirring the curiosity of physicists worldwide. It’s a bump in the data gathered by the Large Hadron Collider (LHC), the world’s most powerful particle accelerator. If the bump matures into a clearer peak during the LHC’s second run, it could indicate the existence of a new, unexpected particle that’s 2,000 times heavier than the proton. Ultimately, it could provoke a major update to our understanding of physics.

Or it could simply be a statistical fluke, doomed to disappear over time. But the bump currently has a significance level of three sigma, meaning that this little guy just might be here to stay. The rule of thumb in physics is that a one-sigma result could easily be due to random fluctuations, like the fair coin that flipped tails twice. A three-sigma result counts as an observation, worth discussing and publishing. But for physicists to proclaim a discovery, a finding that rewrites textbooks, a result has to be at the five-sigma level. At that point, the chance of the signal arising randomly is roughly one in 3.5 million.
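
For readers who want to check those thresholds, the sigma levels translate into tail probabilities of a standard normal distribution. A short, self-contained Python sketch (one-sided tails, as is conventional in particle physics) reproduces the numbers:

```python
# One-sided tail probability of a standard normal for a given sigma level:
# P(Z > z) = 0.5 * erfc(z / sqrt(2)), standard library only.
import math

for sigma in (1, 3, 5):
    p = 0.5 * math.erfc(sigma / math.sqrt(2))
    print(f"{sigma} sigma: p = {p:.2e} (about 1 in {1 / p:,.0f})")

# Prints roughly: 1 sigma ~ 1 in 6, 3 sigma ~ 1 in 741, 5 sigma ~ 1 in 3.5 million.
```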

There’s no knowing if the LHC researchers’ new finding is real until they gather more data. And even bigger would-be discoveries — those with five-sigma results and better — have led physicists astray before, raising hopes for new insights into the Universe before being disproved by other data. When pushing the very limits of what we can possibly measure, false positives are always a danger. Here are five examples where seemingly solid findings came undone. [Continue reading…]

Google’s search algorithm could steal the presidency

Wired: Imagine an election — a close one. You’re undecided. So you type the name of one of the candidates into your search engine of choice. (Actually, let’s not be coy here. In most of the world, one search engine dominates; in Europe and North America, it’s Google.) And Google coughs up, in fractions of a second, articles and facts about that candidate. Great! Now you are an informed voter, right? But a study published this week says that the order of those results, the ranking of positive or negative stories on the screen, can have an enormous influence on the way you vote. And if the election is close enough, the effect could be profound enough to change the outcome.

In other words: Google’s ranking algorithm for search results could accidentally steal the presidency. “We estimate, based on win margins in national elections around the world,” says Robert Epstein, a psychologist at the American Institute for Behavioral Research and Technology and one of the study’s authors, “that Google could determine the outcome of upwards of 25 percent of all national elections.” [Continue reading…]

Musk, Hawking, and Chomsky warn of impending robot wars fueled by artificial intelligence

The Verge reports: Leading artificial intelligence researchers have warned that an “AI arms race” could be disastrous for humanity, and are urging the UN to consider a ban on “offensive autonomous weapons.” An open letter published by the Future of Life Institute (FLI) and signed by high-profile figures including Stephen Hawking, Elon Musk, and Noam Chomsky, warns that weapons that automatically “select and engage targets without human intervention” could become the “Kalashnikovs of tomorrow,” fueling war, terrorism, and global instability.

“Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades,” states the letter, citing armed quadcopters (a technology that has already been deployed in a very crude fashion) as an example. The letter notes that although it’s possible that the use of autonomous weapons could reduce human casualties on the battlefield, this itself could be a mistake as it would “[lower] the threshold” for going to war. [Continue reading…]

No, the Earth is not heading for a ‘mini ice age’

Eric Holthaus writes: A new study and related press release from the Royal Astronomical Society have been making the rounds in recent days, claiming that a new statistical analysis of sunspot cycles shows “solar activity will fall by 60 per cent during the 2030s” to a level that last occurred during the so-called Little Ice Age, which ended 300 years ago.

Since climate change deniers have a particular fascination with sunspot cycles, this story has predictably been picked up by all manner of conservative news media, with a post in the Telegraph quickly gathering up tens of thousands of shares. The only problem is, it’s a wildly inaccurate reading of the research.

Sunspots have been observed on a regular basis for at least 400 years, and over that period, there’s a weak correlation between the number of sunspots and global temperature — most notably during a drastic downturn in the number of sunspots from about 1645 to 1715. Known as the Maunder minimum, this phenomenon happened about the same time as a decades-long European cold snap known as the Little Ice Age. That connection led to the theory that this variability remains the dominant factor in Earth’s climate. Though that idea is still widely circulated, it’s been disproved. In reality, sunspots fluctuate in an 11-year cycle, and the current cycle is the weakest in 100 years — yet 2014 was the planet’s hottest year in recorded history. [Continue reading…]

The risks that GMOs may pose to the global ecosystem

Mark Spitznagel and Nassim Nicholas Taleb, who both anticipated the failure of the financial system in 2007, see eerie parallels in the reasoning being used by those who believed in stability then and those who insist now that there are no significant risks involved in the promotion of genetically modified organisms (GMOs).

Spitznagel and Taleb write: First, there has been a tendency to label anyone who dislikes G.M.O.s as anti-science — and put them in the anti-antibiotics, antivaccine, even Luddite category. There is, of course, nothing scientific about the comparison. Nor is the scholastic invocation of a “consensus” a valid scientific argument.

Interestingly, there are similarities between arguments that are pro-G.M.O. and snake oil, the latter having relied on a cosmetic definition of science. The charge of “therapeutic nihilism” was leveled at people who contested snake oil medicine at the turn of the 20th century. (At that time, anything with the appearance of sophistication was considered “progress.”)

Second, we are told that a modified tomato is not different from a naturally occurring tomato. That is wrong: The statistical mechanism by which a tomato was built by nature is bottom-up, by tinkering in small steps (as with the restaurant business, distinct from contagion-prone banks). In nature, errors stay confined and, critically, isolated.

Third, the technological salvation argument we faced in finance is also present with G.M.O.s, which are intended to “save children by providing them with vitamin-enriched rice.” The argument’s flaw is obvious: In a complex system, we do not know the causal chain, and it is better to solve a problem by the simplest method, and one that is unlikely to cause a bigger problem.

Fourth, by leading to monoculture — which is the same in finance, where all risks became systemic — G.M.O.s threaten more than they can potentially help. Ireland’s population was decimated by the effect of monoculture during the potato famine. Just consider that the same can happen at a planetary scale.

Fifth, and what is most worrisome, is that the risks of G.M.O.s are more severe than those of finance. They can lead to complex chains of unpredictable changes in the ecosystem, while the methods of risk management with G.M.O.s — unlike finance, where some effort was made — are not even primitive.

The G.M.O. experiment, carried out in real time and with our entire food and ecological system as its laboratory, is perhaps the greatest case of human hubris ever. It creates yet another systemic, “too big to fail” enterprise — but one for which no bailouts will be possible when it fails. [Continue reading…]

IBM announces major breakthrough: the world’s first 7-nanometer chips

The New York Times reports: IBM said on Thursday that it had made working versions of ultradense computer chips, with roughly four times the capacity of today’s most powerful chips.

The announcement, made on behalf of an international consortium led by IBM, the giant computer company, is part of an effort to manufacture the most advanced computer chips in New York’s Hudson Valley, where IBM is investing $3 billion in a private-public partnership with New York State, GlobalFoundries, Samsung and equipment vendors.

The development lifts a bit of the cloud that has fallen over the semiconductor industry, which has struggled to maintain its legendary pace of doubling transistor density every two years.

Intel, which for decades has been the industry leader, has faced technical challenges in recent years. Moreover, technologists have begun to question whether the longstanding pace of chip improvement, known as Moore’s Law, would continue past the current 14-nanometer generation of chips.

Each generation of chip technology is defined by the minimum size of fundamental components that switch current at nanosecond intervals. Today the industry is making the commercial transition from what is generally described as 14-nanometer manufacturing to 10-nanometer manufacturing.

Each generation brings roughly a 50 percent reduction in the area required by a given amount of circuitry. IBM’s new chips, though still in a research phase, suggest that semiconductor technology will continue to shrink at least through 2018.
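
As a rough sketch of that arithmetic, the following few lines of Python assume only the article's figure of a 50 percent area reduction per generation and show how the 14-, 10- and 7-nanometer nodes line up with the "roughly four times the capacity" claim above:

```python
# Relative area and density per node, assuming ~50% area shrink each generation.
nodes = [14, 10, 7]      # nanometer generations named in the article
relative_area = 1.0      # area of a fixed block of circuitry at 14 nm

for node in nodes:
    density = 1 / relative_area
    print(f"{node} nm: relative area {relative_area:.2f}, density {density:.0f}x")
    relative_area *= 0.5  # next generation needs about half the area

# 14 nm: relative area 1.00, density 1x
# 10 nm: relative area 0.50, density 2x
# 7 nm: relative area 0.25, density 4x
```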

The company said on Thursday that it had working samples of chips with seven-nanometer transistors. It made the research advance by using silicon-germanium instead of pure silicon in key regions of the molecular-size switches.

The new material makes possible faster transistor switching and lower power requirements. The tiny size of these transistors suggests that further advances will require new materials and new manufacturing techniques.

As points of comparison to the size of the seven-nanometer transistors, a strand of DNA is about 2.5 nanometers in diameter and a red blood cell is roughly 7,500 nanometers in diameter. IBM said the new technology would make it possible to build microprocessors with more than 20 billion transistors. [Continue reading…]

On the value of not knowing everything

James McWilliams writes: In January 2010, while driving from Chicago to Minneapolis, Sam McNerney played an audiobook and had an epiphany. The book was Jonah Lehrer’s How We Decide, and the epiphany was that consciousness could reside in the brain. The quest for an empirical understanding of consciousness has long preoccupied neurobiologists. But McNerney was no neurobiologist. He was a twenty-year-old philosophy major at Hamilton College. The standard course work — ancient, modern, and contemporary philosophy — enthralled him. But after this drive, after he listened to Lehrer, something changed. “I had to rethink everything I knew about everything,” McNerney said.

Lehrer’s publisher later withdrew How We Decide for inaccuracies. But McNerney was mentally galvanized for good reason. He had stumbled upon what philosophers call the “Hard Problem” — the quest to understand the enigma of the gap between mind and body. Intellectually speaking, what McNerney experienced was like diving for a penny in a pool and coming up with a gold nugget.

The philosopher Thomas Nagel drew popular attention to the Hard Problem four decades ago in an influential essay titled “What Is It Like to Be a Bat?” Frustrated with the “recent wave of reductionist euphoria,” Nagel challenged the reductive conception of mind — the idea that consciousness resides as a physical reality in the brain — by highlighting the radical subjectivity of experience. His main premise was that “an organism has conscious mental states if and only if there is something that it is like to be that organism.”

If that idea seems elusive, consider it this way: A bat has consciousness only if there is something that it is like for that bat to be a bat. Sam has consciousness only if there is something it is like for Sam to be Sam. You have consciousness only if there is something that it is like for you to be you (and you know that there is). And here’s the key to all this: Whatever that “like” happens to be, according to Nagel, it necessarily defies empirical verification. You can’t put your finger on it. It resists physical accountability.

McNerney returned to Hamilton intellectually turbocharged. This was an idea worth pondering. “It took hold of me,” he said. “It chose me — I know you hear that a lot, but that’s how it felt.” He arranged to do research in cognitive science as an independent study project with Russell Marcus, a trusted professor. Marcus let him loose to write what McNerney calls “a seventy-page hodgepodge of psychological research and philosophy and everything in between.” Marcus remembered the project more charitably, as “a huge, ambitious, wide-ranging, smart, and engaging paper.” Once McNerney settled into his research, Marcus added, “it was like he had gone into a phone booth and come out as a super-student.”

When he graduated in 2011, McNerney was proud. “I pulled it off,” he said about earning a degree in philosophy. Not that he had any hard answers to any big problems, much less the Hard Problem. Not that he had a job. All he knew was that he “wanted to become the best writer and thinker I could be.”

So, as one does, he moved to New York City.

McNerney is the kind of young scholar adored by the humanities. He’s inquisitive, open-minded, thrilled by the world of ideas, and touched with a tinge of old-school transcendentalism. What Emerson said of Thoreau — “he declined to give up his large ambition of knowledge and action for any narrow craft or profession” — is certainly true of McNerney. [Continue reading…]

On not being there: The data-driven body at work and at play

Rebecca Lemov writes: The protagonist of William Gibson’s 2014 science-fiction novel The Peripheral, Flynne Fisher, works remotely in a way that lends a new and fuller sense to that phrase. The novel features a double future: One set of characters inhabits the near future, ten to fifteen years from the present, while another lives seventy years on, after a breakdown of the climate and multiple other systems that has apocalyptically altered human and technological conditions around the world.

In that “further future,” only 20 percent of the Earth’s human population has survived. Each of these fortunate few is well off and able to live a life transformed by healing nanobots, somaticized e-mail (which delivers messages and calls to the roof of the user’s mouth), quantum computing, and clean energy. For their amusement and profit, certain “hobbyists” in this future have the Borgesian option of cultivating an alternative path in history — it’s called “opening up a stub” — and mining it for information as well as labor.

Flynne, the remote worker, lives on one of those paths. A young woman from the American Southeast, possibly Appalachia or the Ozarks, she favors cutoff jeans and resides in a trailer, eking out a living as a for-hire sub playing video games for wealthy aficionados. Recruited by a mysterious entity that is beta-testing drones that are doing “security” in a murky skyscraper in an unnamed city, she thinks at first that she has been taken on to play a kind of video game in simulated reality. As it turns out, she has been employed to work in the future as an “information flow” — low-wage work, though the pay translates to a very high level of remuneration in the place and time in which she lives.

What is of particular interest is the fate of Flynne’s body. Before she goes to work she must tend to its basic needs (nutrition and elimination), because during her shift it will effectively be “vacant.” Lying on a bed with a special data-transmitting helmet attached to her head, she will be elsewhere, inhabiting an ambulatory robot carapace — a “peripheral” — built out of bio-flesh that can receive her consciousness.

Bodies in this data-driven economic backwater of a future world economy are abandoned for long stretches of time — disposable, cheapened, eerily vacant in the temporary absence of “someone at the helm.” Meanwhile, fleets of built bodies, grown from human DNA, await habitation.

Alex Rivera explores similar territory in his Mexican sci-fi film The Sleep Dealer (2008), set in a future world after a wall erected on the US–Mexican border has successfully blocked migrants from entering the United States. Digital networks allow people to connect to strangers all over the world, fostering fantasies of physical and emotional connection. At the same time, low-income would-be migrant workers in Tijuana and elsewhere can opt to do remote work by controlling robots building a skyscraper in a faraway city, locking their bodies into devices that transmit their labor to the site. In tank-like warehouses, lined up in rows of stalls, they “jack in” by connecting data-transmitting cables to nodes implanted in their arms and backs. Their bodies are in Mexico, but their work is in New York or San Francisco, and while they are plugged in and wearing their remote-viewing spectacles, their limbs move like the appendages of ghostly underwater creatures. Their life force drained by the taxing labor, these “sleep dealers” end up as human discards.

What is surprising about these sci-fi conceits, from “transitioning” in The Peripheral to “jacking in” in The Sleep Dealer, is how familiar they seem, or at least how closely they reflect certain aspects of contemporary reality. Almost daily, we encounter people who are there but not there, flickering in and out of what we think of as presence. A growing body of research explores the question of how users interact with their gadgets and media outlets, and how in turn these interactions transform social relationships. The defining feature of this heavily mediated reality is our presence “elsewhere,” a removal of at least part of our conscious awareness from wherever our bodies happen to be. [Continue reading…]

Why the modern world is bad for your brain

Daniel J Levitin writes: Our brains are busier than ever before. We’re assaulted with facts, pseudo facts, jibber-jabber, and rumour, all posing as information. Trying to figure out what you need to know and what you can ignore is exhausting. At the same time, we are all doing more. Thirty years ago, travel agents made our airline and rail reservations, salespeople helped us find what we were looking for in shops, and professional typists or secretaries helped busy people with their correspondence. Now we do most of those things ourselves. We are doing the jobs of 10 different people while still trying to keep up with our lives, our children and parents, our friends, our careers, our hobbies, and our favourite TV shows.

Our smartphones have become Swiss army knife–like appliances that include a dictionary, calculator, web browser, email, Game Boy, appointment calendar, voice recorder, guitar tuner, weather forecaster, GPS, texter, tweeter, Facebook updater, and flashlight. They’re more powerful and do more things than the most advanced computer at IBM corporate headquarters 30 years ago. And we use them all the time, part of a 21st-century mania for cramming everything we do into every single spare moment of downtime. We text while we’re walking across the street, catch up on email while standing in a queue – and while having lunch with friends, we surreptitiously check to see what our other friends are doing. At the kitchen counter, cosy and secure in our domicile, we write our shopping lists on smartphones while we are listening to that wonderfully informative podcast on urban beekeeping.

But there’s a fly in the ointment. Although we think we’re doing several things at once, multitasking, this is a powerful and diabolical illusion. Earl Miller, a neuroscientist at MIT and one of the world experts on divided attention, says that our brains are “not wired to multitask well… When people think they’re multitasking, they’re actually just switching from one task to another very rapidly. And every time they do, there’s a cognitive cost in doing so.” So we’re not actually keeping a lot of balls in the air like an expert juggler; we’re more like a bad amateur plate spinner, frantically switching from one task to another, ignoring the one that is not right in front of us but worried it will come crashing down any minute. Even though we think we’re getting a lot done, ironically, multitasking makes us demonstrably less efficient. [Continue reading…]

Even the tech savvy should turn off their phones and listen

Bruno Giussani writes: Nothing exists nowadays unless it is Facebooked, Tweeted or Instagrammed with emphasis on “insta”. So perhaps the event that I hosted on Tuesday at the Royal Institution, TEDGlobal London, didn’t exist. Because we ran a little experiment, banning the use of smartphones, tablets, laptops, cameras – any electronic device – during the conference.

At the end of the event (which over two sessions of 100 minutes each featured scientists, technologists, historians, a photographer, a slam poet, a singer, a racing car driver and a writer) I asked the 350 attendees whether we should apply the same rule next time. It’s a safe guess that at least two-thirds of them use Twitter or FB with some regularity, but pretty much every hand in the theatre shot up, with maybe two exceptions. I have heard nothing but positive feedback since. [Continue reading…]

Artificial neural networks on acid

Quartz reports: American sci-fi novelist Philip K. Dick once famously asked, Do Androids Dream of Electric Sheep? While he was on the right track, the answer appears to be, no, they don’t. They dream of dog-headed knights atop horses, of camel-birds and pig-snails, and of Dali-esque mutated landscapes.

Google’s image recognition software, which can detect, analyze, and even auto-caption images, uses artificial neural networks to simulate the human brain. In a process they’re calling “inceptionism,” Google engineers set out to see what these artificial networks “dream” of — what, if anything, do they see in a nondescript image of clouds, for instance? What does a fake brain that’s trained to detect images of dogs see when it’s shown a picture of a knight?

Google trains the software by feeding it millions of images, eventually teaching it to recognize specific objects within a picture. When it’s fed an image, it is asked to emphasize the object in the image that it recognizes. The network is made up of layers — the higher the layer, the more precise the interpretation. Eventually, in the final output layer, the network makes a “decision” as to what’s in the image.

But the networks aren’t restricted to only identifying images. Their training allows them to generate images as well. [Continue reading…]
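
For the curious, the generation step described above can be sketched in a few lines of Python. This is not Google's actual inceptionism code, just a minimal gradient-ascent illustration of the idea; the choice of model, layer, step size and file names are assumptions, and PyTorch with torchvision is assumed to be installed.

```python
# Minimal sketch of the "dreaming" idea: nudge an input image by gradient
# ascent so one intermediate layer's activations grow stronger, making the
# network "emphasize" whatever it thinks it sees. Model, layer, step size
# and file names are illustrative assumptions, not Google's configuration.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.googlenet(weights="DEFAULT").eval()

# Grab the activations of one mid-level layer with a forward hook.
activations = {}
model.inception4c.register_forward_hook(
    lambda module, inputs, output: activations.update(feat=output)
)

# ImageNet normalization is omitted to keep the sketch short.
to_tensor = T.Compose([T.Resize((224, 224)), T.ToTensor()])
img = to_tensor(Image.open("clouds.jpg")).unsqueeze(0).requires_grad_(True)

for _ in range(20):                      # a handful of gradient-ascent steps
    model(img)
    loss = activations["feat"].norm()    # amplify whatever the layer detects
    loss.backward()
    with torch.no_grad():
        img += 0.01 * img.grad / (img.grad.abs().mean() + 1e-8)
        img.clamp_(0, 1)
        img.grad.zero_()

T.ToPILImage()(img.detach().squeeze(0)).save("dream.jpg")
```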

How technology is damaging our brains

The New York Times reports: When one of the most important e-mail messages of his life landed in his in-box a few years ago, Kord Campbell overlooked it.

Not just for a day or two, but 12 days. He finally saw it while sifting through old messages: a big company wanted to buy his Internet start-up.

“I stood up from my desk and said, ‘Oh my God, oh my God, oh my God,’ ” Mr. Campbell said. “It’s kind of hard to miss an e-mail like that, but I did.”

The message had slipped by him amid an electronic flood: two computer screens alive with e-mail, instant messages, online chats, a Web browser and the computer code he was writing.

While he managed to salvage the $1.3 million deal after apologizing to his suitor, Mr. Campbell continues to struggle with the effects of the deluge of data. Even after he unplugs, he craves the stimulation he gets from his electronic gadgets. He forgets things like dinner plans, and he has trouble focusing on his family.

His wife, Brenda, complains, “It seems like he can no longer be fully in the moment.”

This is your brain on computers.

Scientists say juggling e-mail, phone calls and other incoming information can change how people think and behave. They say our ability to focus is being undermined by bursts of information.

These play to a primitive impulse to respond to immediate opportunities and threats. The stimulation provokes excitement — a dopamine squirt — that researchers say can be addictive. In its absence, people feel bored.

The resulting distractions can have deadly consequences, as when cellphone-wielding drivers and train engineers cause wrecks. And for millions of people like Mr. Campbell, these urges can inflict nicks and cuts on creativity and deep thought, interrupting work and family life.

While many people say multitasking makes them more productive, research shows otherwise. Heavy multitaskers actually have more trouble focusing and shutting out irrelevant information, scientists say, and they experience more stress.

And scientists are discovering that even after the multitasking ends, fractured thinking and lack of focus persist. In other words, this is also your brain off computers. [Continue reading…]
