A new technology for detecting neutrinos represents a ‘monumental’ advance for science

Scientific American reports: Neutrinos are famously antisocial. Of all the characters in the particle physics cast, they are the most reluctant to interact with other particles. Among the hundred trillion neutrinos that pass through you every second, only about one per week actually grazes a particle in your body.

That rarity has made life miserable for physicists, who resort to building huge underground detector tanks for a chance at catching the odd neutrino. But in a study published today in Science, researchers working at Oak Ridge National Laboratory (ORNL) detected never-before-seen neutrino interactions using a detector the size of a fire extinguisher. Their feat paves the way for new supernova research, dark matter searches and even nuclear nonproliferation monitoring.

Under previous approaches, a neutrino reveals itself by stumbling across a proton or neutron amidst the vast emptiness surrounding atomic nuclei, producing a flash of light or a single-atom chemical change. But neutrinos deign to communicate with other particles only via the “weak” force—the fundamental force that causes radioactive materials to decay. Because the weak force operates only at subatomic distances, the odds of a tiny neutrino bouncing off of an individual neutron or proton are minuscule. Physicists must compensate by offering thousands of tons of atoms for passing neutrinos to strike. [Continue reading…]

‘I think we like our phones more than we like actual people’

Jean M Twenge writes: One day last summer, around noon, I called Athena, a 13-year-old who lives in Houston, Texas. She answered her phone—she’s had an iPhone since she was 11—sounding as if she’d just woken up. We chatted about her favorite songs and TV shows, and I asked her what she likes to do with her friends. “We go to the mall,” she said. “Do your parents drop you off?,” I asked, recalling my own middle-school days, in the 1980s, when I’d enjoy a few parent-free hours shopping with my friends. “No—I go with my family,” she replied. “We’ll go with my mom and brothers and walk a little behind them. I just have to tell my mom where we’re going. I have to check in every hour or every 30 minutes.”

Those mall trips are infrequent—about once a month. More often, Athena and her friends spend time together on their phones, unchaperoned. Unlike the teens of my generation, who might have spent an evening tying up the family landline with gossip, they talk on Snapchat, the smartphone app that allows users to send pictures and videos that quickly disappear. They make sure to keep up their Snapstreaks, which show how many days in a row they have Snapchatted with each other. Sometimes they save screenshots of particularly ridiculous pictures of friends. “It’s good blackmail,” Athena said. (Because she’s a minor, I’m not using her real name.) She told me she’d spent most of the summer hanging out alone in her room with her phone. That’s just the way her generation is, she said. “We didn’t have a choice to know any life without iPads or iPhones. I think we like our phones more than we like actual people.”

I’ve been researching generational differences for 25 years, starting when I was a 22-year-old doctoral student in psychology. Typically, the characteristics that come to define a generation appear gradually, and along a continuum. Beliefs and behaviors that were already rising simply continue to do so. Millennials, for instance, are a highly individualistic generation, but individualism had been increasing since the Baby Boomers turned on, tuned in, and dropped out. I had grown accustomed to line graphs of trends that looked like modest hills and valleys. Then I began studying Athena’s generation.

Around 2012, I noticed abrupt shifts in teen behaviors and emotional states. The gentle slopes of the line graphs became steep mountains and sheer cliffs, and many of the distinctive characteristics of the Millennial generation began to disappear. In all my analyses of generational data—some reaching back to the 1930s—I had never seen anything like it.

At first I presumed these might be blips, but the trends persisted, across several years and a series of national surveys. The changes weren’t just in degree, but in kind. The biggest difference between the Millennials and their predecessors was in how they viewed the world; teens today differ from the Millennials not just in their views but in how they spend their time. The experiences they have every day are radically different from those of the generation that came of age just a few years before them.

What happened in 2012 to cause such dramatic shifts in behavior? It was after the Great Recession, which officially lasted from 2007 to 2009 and had a starker effect on Millennials trying to find a place in a sputtering economy. But it was exactly the moment when the proportion of Americans who owned a smartphone surpassed 50 percent.

The more I pored over yearly surveys of teen attitudes and behaviors, and the more I talked with young people like Athena, the clearer it became that theirs is a generation shaped by the smartphone and by the concomitant rise of social media. [Continue reading…]

The real threat of artificial intelligence

Kai-Fu Lee writes: What worries you about the coming world of artificial intelligence?

Too often the answer to this question resembles the plot of a sci-fi thriller. People worry that developments in A.I. will bring about the “singularity” — that point in history when A.I. surpasses human intelligence, leading to an unimaginable revolution in human affairs. Or they wonder whether instead of our controlling artificial intelligence, it will control us, turning us, in effect, into cyborgs.

These are interesting issues to contemplate, but they are not pressing. They concern situations that may not arise for hundreds of years, if ever. At the moment, there is no known path from our best A.I. tools (like the Google computer program that recently beat the world’s best player of the game of Go) to “general” A.I. — self-aware computer programs that can engage in common-sense reasoning, attain knowledge in multiple domains, feel, express and understand emotions and so on.

This doesn’t mean we have nothing to worry about. On the contrary, the A.I. products that now exist are improving faster than most people realize and promise to radically transform our world, not always for the better. They are only tools, not a competing form of intelligence. But they will reshape what work means and how wealth is created, leading to unprecedented economic inequalities and even altering the global balance of power.

It is imperative that we turn our attention to these imminent challenges.

What is artificial intelligence today? Roughly speaking, it’s technology that takes in huge amounts of information from a specific domain (say, loan repayment histories) and uses it to make a decision in a specific case (whether to give an individual a loan) in the service of a specified goal (maximizing profits for the lender). Think of a spreadsheet on steroids, trained on big data. These tools can outperform human beings at a given task.
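
Lee's description — domain-specific inputs, a single decision, a single goal — can be sketched as a toy scoring function. Everything below (feature names, weights, threshold) is invented for illustration and is not drawn from any real lender's model:

```python
import math

def loan_decision(past_repayment_rate, annual_income, outstanding_debt):
    """Toy single-domain decision rule of the kind described above.

    Inputs come from one narrow domain (credit history), the output is one
    decision (approve/deny), and the implicit goal is the lender's profit.
    The weights are made up for illustration.
    """
    score = (-1.0
             + 2.5 * past_repayment_rate       # strongest signal: repayment history
             + 0.00002 * annual_income
             - 0.00003 * outstanding_debt)
    probability_of_repayment = 1 / (1 + math.exp(-score))  # logistic squashing
    return "approve" if probability_of_repayment > 0.5 else "deny"
```

In practice the weights would be fit to thousands of historical repayment records rather than set by hand — which is exactly the "spreadsheet on steroids, trained on big data" that Lee describes.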

This kind of A.I. is spreading to thousands of domains (not just loans), and as it does, it will eliminate many jobs. Bank tellers, customer service representatives, telemarketers, stock and bond traders, even paralegals and radiologists will gradually be replaced by such software. Over time this technology will come to control semiautonomous and autonomous hardware like self-driving cars and robots, displacing factory workers, construction workers, drivers, delivery workers and many others.

Unlike the Industrial Revolution and the computer revolution, the A.I. revolution is not taking certain jobs (artisans, personal assistants who use paper and typewriters) and replacing them with other jobs (assembly-line workers, personal assistants conversant with computers). Instead, it is poised to bring about a wide-scale decimation of jobs — mostly lower-paying jobs, but some higher-paying ones, too. [Continue reading…]

China makes leap toward ‘unhackable’ quantum network

The Wall Street Journal reports: Chinese scientists have succeeded in sending specially linked pairs of light particles from space to Earth, an achievement experts in the field say gives China a leg up in using quantum technology to build an “unhackable” global communications network.

The result is an important breakthrough that establishes China as a pioneer in efforts to harness the enigmatic properties of matter and energy at the subatomic level, the experts said.

In an experiment described in the latest issue of Science, a team of Chinese researchers used light particles, or photons, sent from the country’s recently launched quantum-communications satellite to establish an instantaneous connection between two ground stations more than 1,200 kilometers (744 miles) apart.

Using the quantum properties of tiny particles to create a secure communications network is scientifically and technically demanding, according to security researchers, and China is years away from being able to build one.

If China ultimately succeeds, such a system could undermine U.S. advantages in penetrating computer networks.

The Pentagon, in an annual report on China’s military delivered to Congress last week, described the quantum satellite launch in August as a “notable advance in cryptography research.”

While the U.S. is also pursuing quantum communications, it has concentrated more attention and resources on research into quantum computing. European physicists have developed many of the theories and basic practices underlying quantum encryption, but their Chinese counterparts are better-funded with government resources.

Disclosures by former National Security Agency contractor Edward Snowden in 2013 about U.S. spying on Chinese networks rattled Beijing, and have pushed the country to bolster cybersecurity measures in a variety of ways.

“The Snowden revelations undoubtedly played a part in the drive towards quantum technologies, as it revealed the degree of sophisticated threat Chinese counterespionage and cyberdefenses were facing,” said John Costello, a fellow specializing in China and cybersecurity at the nonpartisan Washington-based think tank New America. [Continue reading…]

Technology doesn’t make us better people

Nicholas Carr writes: Welcome to the global village. It’s a nasty place.

On Easter Sunday, a man in Cleveland filmed himself murdering a random 74-year-old and posted the video on Facebook. The social network took the grisly clip down within two or three hours, but not before users shared it on other websites — where people around the world can still view it.

Surely incidents like this aren’t what Mark Zuckerberg had in mind. In 2012, as his company was preparing to go public, the Facebook founder wrote an earnest letter to would-be shareholders explaining that his company was more than just a business. It was pursuing a “social mission” to make the world a better place by encouraging self-expression and conversation. “People sharing more,” the young entrepreneur wrote, “creates a more open culture and leads to a better understanding of the lives and perspectives of others.”

Earlier this year, Zuckerberg penned another public letter, expressing even grander ambitions. Facebook, he announced, is expanding its mission from “connecting friends and family” to building “a global community that works for everyone.” The ultimate goal is to turn the already vast social network into a sort of supranational state “spanning cultures, nations and regions.”

But the murder in Cleveland, and any similar incidents that inevitably follow, reveal the hollowness of Silicon Valley’s promise that digital networks would bring us together in a more harmonious world.

Whether he knows it or not, Zuckerberg is part of a long tradition in Western thought. Ever since the building of the telegraph system in the 19th century, people have believed that advances in communication technology would promote social harmony. The more we learned about each other, the more we would recognize that we’re all one. In an 1899 article celebrating the laying of transatlantic Western Union cables, a New York Times columnist expressed the popular assumption well: “Nothing so fosters and promotes a mutual understanding and a community of sentiment and interests as cheap, speedy, and convenient communication.”

The great networks of the 20th century — radio, telephone, TV — reinforced this sunny notion. Spanning borders and erasing distances, they shrank the planet. Guglielmo Marconi declared in 1912 that his invention of radio would “make war impossible, because it will make war ridiculous.” AT&T’s top engineer, J.J. Carty, predicted in a 1923 interview that the telephone system would “join all the peoples of the earth in one brotherhood.” In his 1962 book “The Gutenberg Galaxy,” the media theorist Marshall McLuhan gave us the memorable term “global village” to describe the world’s “new electronic interdependence.” Most people took the phrase optimistically, as a prophecy of inevitable social progress. What, after all, could be nicer than a village?

If our assumption that communication brings people together were true, we should today be seeing a planetary outbreak of peace, love, and understanding. Thanks to the Internet and cellular networks, humanity is more connected than ever. Of the world’s 7 billion people, 6 billion have access to a mobile phone — a billion and a half more, the United Nations reports, than have access to a working toilet. Nearly 2 billion are on Facebook, more than a billion upload and download YouTube videos, and billions more converse through messaging apps like WhatsApp and WeChat. With smartphone in hand, everyone becomes a media hub, transmitting and receiving ceaselessly.

Yet we live in a fractious time, defined not by concord but by conflict. Xenophobia is on the rise. Political and social fissures are widening. From the White House down, public discourse is characterized by vitriol and insult. We probably shouldn’t be surprised. [Continue reading…]

Humans aren’t the only primates that can make sharp stone tools

The Guardian reports: Monkeys have been observed producing sharp stone flakes that closely resemble the earliest known tools made by our ancient relatives, proving that this ability is not uniquely human.

Previously, modifying stones to create razor-edged fragments was thought to be an activity confined to hominins, the family including early humans and their more primitive cousins. The latest observations rewrite this view, showing that monkeys unintentionally produce almost identical artefacts simply by smashing stones together.

The findings put archaeologists on alert that they can no longer assume that stone flakes they discover are linked to the deliberate crafting of tools by early humans as their brains became more sophisticated.

Tomos Proffitt, an archaeologist at the University of Oxford and the study’s lead author, said: “At a very fundamental level – if you’re looking at a very simple flake – if you had a capuchin flake and a human flake they would be the same. It raises really important questions about what level of cognitive complexity is required to produce a sophisticated cutting tool.”

Unlike early humans, the flakes produced by the capuchins were the unintentional byproduct of hammering stones – an activity that the monkeys pursued decisively, but the purpose of which was not clear. Originally scientists thought the behaviour was a flamboyant display of aggression in response to an intruder, but after more extensive observations the monkeys appeared to be seeking out the quartz dust produced by smashing the rocks, possibly because it has a nutritional benefit. [Continue reading…]

Forget software — now hackers are exploiting physics

Andy Greenberg reports: Practically every word we use to describe a computer is a metaphor. “File,” “window,” even “memory” all stand in for collections of ones and zeros that are themselves representations of an impossibly complex maze of wires, transistors and the electrons moving through them. But when hackers go beyond those abstractions of computer systems and attack their actual underlying physics, the metaphors break.

Over the last year and a half, security researchers have been doing exactly that: honing hacking techniques that break through the metaphor to the actual machine, exploiting the unexpected behavior not of operating systems or applications, but of computing hardware itself—in some cases targeting the actual electricity that comprises bits of data in computer memory. And at the Usenix security conference earlier this month, two teams of researchers presented attacks they developed that bring that new kind of hack closer to becoming a practical threat.

Both of those new attacks use a technique Google researchers first demonstrated last March called “Rowhammer.” The trick works by running a program on the target computer that repeatedly accesses a certain row of memory cells in its DRAM, “hammering” it until a rare glitch occurs: Electric charge leaks from the hammered row into an adjacent row. The leaked charge then causes a certain bit in that adjacent row of the computer’s memory to flip from one to zero or vice versa. A carefully targeted bit flip can give an attacker access to a privileged level of the computer’s operating system.
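
The mechanism is easier to see in a toy software model. The sketch below simulates — it does not perform — the disturbance effect: each activation of an aggressor row carries a small chance of flipping a bit in the neighboring victim row. The flip probability here is invented for illustration; real flip rates depend on the specific DRAM chip.

```python
import random

def hammer(memory, aggressor_row, victim_row, iterations,
           flip_prob=1e-6, seed=0):
    """Simulate Rowhammer-style disturbance errors (toy model, not an attack).

    `memory` is a list of rows, each row a list of 0/1 bits. Repeatedly
    'activating' the aggressor row occasionally leaks charge into the
    adjacent victim row, flipping one of its bits. Returns the flip count.
    """
    rng = random.Random(seed)
    flips = 0
    for _ in range(iterations):
        _ = memory[aggressor_row]                  # activate the aggressor row
        if rng.random() < flip_prob:               # rare charge leak
            bit = rng.randrange(len(memory[victim_row]))
            memory[victim_row][bit] ^= 1           # disturbance error: one bit flips
            flips += 1
    return flips
```

A real attack then arranges for the victim row to hold security-critical data (such as a page-table entry), so that a single flip corrupts exactly the bit that matters.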

It’s messy. And mind-bending. And it works. [Continue reading…]

Forget ideology, liberal democracy’s newest threats come from technology and bioscience

John Naughton writes: The BBC Reith Lectures in 1967 were given by Edmund Leach, a Cambridge social anthropologist. “Men have become like gods,” Leach began. “Isn’t it about time that we understood our divinity? Science offers us total mastery over our environment and over our destiny, yet instead of rejoicing we feel deeply afraid.”

That was nearly half a century ago, and yet Leach’s opening lines could easily apply to today. He was speaking before the internet had been built and long before the human genome had been decoded, and so his claim about men becoming “like gods” seems relatively modest compared with the capabilities that molecular biology and computing have subsequently bestowed upon us. Our science-based culture is the most powerful in history, and it is ceaselessly researching, exploring, developing and growing. But in recent times it seems to have also become plagued with existential angst as the implications of human ingenuity begin to be (dimly) glimpsed.

The title that Leach chose for his Reith Lecture – A Runaway World – captures our zeitgeist too. At any rate, we are also increasingly fretful about a world that seems to be running out of control, largely (but not solely) because of information technology and what the life sciences are making possible. But we seek consolation in the thought that “it was always thus”: people felt alarmed about steam in George Eliot’s time and got worked up about electricity, the telegraph and the telephone as they arrived on the scene. The reassuring implication is that we weathered those technological storms, and so we will weather this one too. Humankind will muddle through.

But in the last five years or so even that cautious, pragmatic optimism has begun to erode. There are several reasons for this loss of confidence. One is the sheer vertiginous pace of technological change. Another is that the new forces loose in our society – particularly information technology and the life sciences – are potentially more far-reaching in their implications than steam or electricity ever were. And, thirdly, we have begun to see startling advances in these fields that have forced us to recalibrate our expectations. [Continue reading…]

China launches quantum satellite for ‘hack-proof’ communications

The Guardian reports: China says it has launched the world’s first quantum satellite, a project Beijing hopes will enable it to build a coveted “hack-proof” communications system with potentially significant military and commercial applications.

Xinhua, Beijing’s official news service, said Micius, a 600kg satellite that is nicknamed after an ancient Chinese philosopher, “roared into the dark sky” over the Gobi desert at 1.40am local time on Tuesday, carried by a Long March-2D rocket.

“The satellite’s two-year mission will be to develop ‘hack-proof’ quantum communications, allowing users to send messages securely and at speeds faster than light,” Xinhua reported.

The Quantum Experiments at Space Scale, or Quess, satellite programme is part of an ambitious space programme that has accelerated since Xi Jinping became Communist party chief in late 2012.

“There’s been a race to produce a quantum satellite, and it is very likely that China is going to win that race,” Nicolas Gisin, a professor and quantum physicist at the University of Geneva, told the Wall Street Journal. “It shows again China’s ability to commit to large and ambitious projects and to realise them.”

The satellite will be tasked with sending secure messages between Beijing and Urumqi, the capital of Xinjiang, a sprawling region of deserts and snow-capped mountains in China’s extreme west.

Highly complex attempts to build such a “hack-proof” communications network are based on the scientific principle of entanglement. [Continue reading…]
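
The idea can be illustrated with a toy simulation of entanglement-based key exchange (a BBM92-style sketch, deliberately simplified — not the actual protocol flown on Micius). Each entangled pair yields identical measurement results when both parties happen to choose the same basis; mismatched rounds are discarded, and what remains is a shared secret key:

```python
import random

def sift_key(n_pairs, seed=42):
    """Toy entanglement-based key distribution (BBM92-style sketch).

    For each entangled photon pair, both parties pick a measurement basis
    at random. Same basis -> perfectly correlated outcomes (kept);
    different basis -> uncorrelated outcomes (discarded during 'sifting').
    """
    rng = random.Random(seed)
    alice_key, bob_key = [], []
    for _ in range(n_pairs):
        alice_basis = rng.choice("XZ")
        bob_basis = rng.choice("XZ")
        if alice_basis == bob_basis:
            outcome = rng.randint(0, 1)   # shared, random measurement result
            alice_key.append(outcome)
            bob_key.append(outcome)
    return alice_key, bob_key
```

The security comes from physics rather than computational hardness: an eavesdropper who intercepts a photon disturbs the entanglement, which the two parties can detect statistically before using the key (that error-checking step is omitted here for brevity).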

How a solo voyage around the world led to a vision for a sustainable global economy

The Ellen MacArthur Foundation works with business, government and academia to build a framework for an economy that is restorative and regenerative by design — a circular economy.

A society staring at machines

Jacob Weisberg writes: “As smoking gives us something to do with our hands when we aren’t using them, Time gives us something to do with our minds when we aren’t thinking,” Dwight Macdonald wrote in 1957. With smartphones, the issue never arises. Hands and mind are continuously occupied texting, e-mailing, liking, tweeting, watching YouTube videos, and playing Candy Crush.

Americans spend an average of five and a half hours a day with digital media, more than half of that time on mobile devices, according to the research firm eMarketer. Among some groups, the numbers range much higher. In one recent survey, female students at Baylor University reported using their cell phones an average of ten hours a day. Three quarters of eighteen-to-twenty-four-year-olds say that they reach for their phones immediately upon waking up in the morning. Once out of bed, we check our phones 221 times a day — an average of every 4.3 minutes — according to a UK study. This number actually may be too low, since people tend to underestimate their own mobile usage. In a 2015 Gallup survey, 61 percent of people said they checked their phones less frequently than others they knew.
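
As a quick sanity check on those figures: 221 checks spread over roughly 16 waking hours does work out to about one every 4.3 minutes, whereas over a full 24 hours it would be closer to one every 6.5 minutes — so the study's average appears to assume waking hours only (an inference, not something the study states here):

```python
checks_per_day = 221
waking_minutes = 16 * 60           # assuming ~16 waking hours per day
full_day_minutes = 24 * 60

interval_waking = waking_minutes / checks_per_day       # about 4.3 minutes
interval_full_day = full_day_minutes / checks_per_day   # about 6.5 minutes
```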

Our transformation into device people has happened with unprecedented suddenness. The first touchscreen-operated iPhones went on sale in June 2007, followed by the first Android-powered phones the following year. Smartphones went from 10 percent to 40 percent market penetration faster than any other consumer technology in history. In the United States, adoption hit 50 percent only three years ago. Yet today, not carrying a smartphone indicates eccentricity, social marginalization, or old age. [Continue reading…]

It perhaps also indicates being at less risk of stumbling off a cliff.

Learning from nature: Record-efficiency turbine farms are being inspired by sealife

Alex Riley writes: As they drove on featureless dirt roads on the first Tuesday of 2010, John Dabiri, professor of aeronautics and bioengineering at the California Institute of Technology, and his then-student Robert Whittlesey, were inspecting a remote area of land that they hoped to purchase to test new concepts in wind power. They named their site FLOWE for Field Laboratory for Optimized Wind Energy. Situated between gentle knolls covered in sere vegetation, the four-acre parcel in Antelope Valley, California, was once destined to become a mall, but those plans fell through. The land was cheap. And, more importantly, it was windy.

Estimated at 250 trillion watts, the amount of wind on Earth has the potential to provide more than 20 times our current global energy consumption. Yet, only four countries — Spain, Portugal, Ireland, and Denmark — generate more than 10 percent of their electricity this way. The United States, one of the largest, wealthiest, and windiest of countries, comes in at about 4 percent. There are reasons for that. Wind farm expansion brings with it huge engineering costs, unsightly countryside, loud noises, disruption to military radar, and death of wildlife. Recent estimates blamed turbines for killing 600,000 bats and up to 440,000 birds a year. On June 19, 2014, the American Bird Conservancy filed a lawsuit against the federal government asking it to curtail the impact of wind farms on the dwindling eagle populations. And while standalone horizontal-axis turbines harvest wind energy well, in a group they’re highly profligate. As their propeller-like blades spin, the turbines facing into the wind disrupt free-flowing air, creating a wake of slow-moving, infertile air behind them. [Continue reading…]

The growing risk of a war in space

Geoff Manaugh writes: In Ghost Fleet, a 2015 novel by security theorists Peter Singer and August Cole, the next world war begins in space.

Aboard an apparently civilian space station called the Tiangong, or “Heavenly Palace,” Chinese astronauts—taikonauts—maneuver a chemical oxygen iodine laser (COIL) into place. They aim their clandestine electromagnetic weapon at its first target, a U.S. Air Force communications satellite that helps to coordinate forces in the Pacific theater far below. The laser “fired a burst of energy that, if it were visible light instead of infrared, would have been a hundred thousand times brighter than the sun.” The beam melts through the external hull of the U.S. satellite and shuts down its sensitive inner circuitry.

From there, the taikonauts work their way through a long checklist of strategic U.S. space assets, disabling the nation’s military capabilities from above. It is a Pearl Harbor above the atmosphere, an invisible first strike.

“The emptiness of outer space might be the last place you’d expect militaries to vie over contested territory,” Lee Billings has written, “except that outer space isn’t so empty anymore.” It is not only science fiction, in other words, to suggest that the future of war could be offworld. The high ground of the global battlefield is no longer defined merely by a topographical advantage, but by strategic orbitals and potential weapons stationed in the skies above distant continents.

When China shot down one of its own weather satellites in January 2007, the event was, among other things, a clear demonstration to the United States that China could wage war beyond the Earth’s atmosphere. In the decade since, both China and the United States have continued to pursue space-based armaments and defensive systems. A November 2015 “Report to Congress,” for example, filed by the U.S.-China Economic and Security Review Commission, specifically singles out China’s “Counterspace Program” as a subject of needed study. China’s astral arsenal, the report explains, most likely includes “direct-ascent” missiles, directed-energy weapons, and also what are known as “co-orbital antisatellite systems.” [Continue reading…]

Artificial intelligence: ‘We’re like children playing with a bomb’

The Observer reports: You’ll find the Future of Humanity Institute down a medieval backstreet in the centre of Oxford. It is beside St Ebbe’s church, which has stood on this site since 1005, and above a Pure Gym, which opened in April. The institute, a research faculty of Oxford University, was established a decade ago to ask the very biggest questions on our behalf. Notably: what exactly are the “existential risks” that threaten the future of our species; how do we measure them; and what can we do to prevent them? Or to put it another way: in a world of multiple fears, what precisely should we be most terrified of?

When I arrive to meet the director of the institute, Professor Nick Bostrom, a bed is being delivered to the second-floor office. Existential risk is a round-the-clock kind of operation; it sleeps fitfully, if at all.

Bostrom, a 43-year-old Swedish-born philosopher, has lately acquired something of the status of prophet of doom among those currently doing most to shape our civilisation: the tech billionaires of Silicon Valley. His reputation rests primarily on his book Superintelligence: Paths, Dangers, Strategies, which was a surprise New York Times bestseller last year and now arrives in paperback, trailing must-read recommendations from Bill Gates and Tesla’s Elon Musk. (In the best kind of literary review, Musk also gave Bostrom’s institute £1m to continue to pursue its inquiries.)

The book is a lively, speculative examination of the singular threat that Bostrom believes – after years of calculation and argument – to be the one most likely to wipe us out. This threat is not climate change, nor pandemic, nor nuclear winter; it is the possibly imminent creation of a general machine intelligence greater than our own. [Continue reading…]

‘Gene drives’ that tinker with evolution are an unknown risk, researchers say

MIT Technology Review reports: With great power — in this case, a technology that can alter the rules of evolution — comes great responsibility. And since there are “considerable gaps in knowledge” about the possible consequences of releasing this technology, called a gene drive, into natural environments, it is not yet responsible to do so. That’s the major conclusion of a report published today by the National Academies of Sciences, Engineering, and Medicine.

Gene drives hold immense promise for controlling or eradicating vector-borne diseases like Zika virus and malaria, or in managing agricultural pests or invasive species. But the 200-page report, written by a committee of 16 experts, highlights how ill-equipped we are to assess the environmental and ecological risks of using gene drives. And it provides a glimpse at the challenges they will create for policymakers.

The technology is inspired by natural phenomena through which particular “selfish” genes are passed to offspring at a higher rate than is normally allowed by nature in sexually reproducing organisms. There are multiple ways to make gene drives in the lab, but scientists are now using the gene-editing tool known as CRISPR to very rapidly and effectively do the trick. Evidence in mosquitoes, fruit flies, and yeast suggests that this could be used to spread a gene through nearly 100 percent of a population.

The possible ecological effects, intended or not, are far from clear, though. How long will gene drives persist in the environment? What is the chance that an engineered organism could pass the gene drive to an unintended recipient? How might these things affect the whole ecosystem? How much does all this vary depending on the particular organism and ecosystem?

Research on the molecular biology of gene drives has outpaced ecological research on how genes move through populations and between species, the report says, making it impossible to adequately answer these and other thorny questions. Substantially more laboratory research and confined field testing is needed to better grasp the risks. [Continue reading…]

Jim Thomas writes: If there is a prize for the fastest-emerging tech controversy of the century, the ‘gene drive’ may have just won it. In under eighteen months the sci-fi concept of a ‘mutagenic chain reaction’ that can drive a genetic trait through an entire species (and maybe eradicate that species too) has gone from theory to published proof of principle to massively shared TED talk (apparently an important step these days) to the subject of a high-profile US National Academy of Sciences study – complete with committees, hearings, public inputs and a glossy 216-page report release. Previous technology controversies have taken anywhere from a decade to over a century to reach that level of policy attention. So why were gene drives put on the turbo track to science academy report status? One word: leverage.

What a gene drive does is simple: it ensures that a chosen genetic trait will reliably be passed on to the next generation and every generation thereafter. This overcomes normal Mendelian genetics where a trait may be diluted or lost through the generations. The effect is that the engineered trait is driven through an entire population, re-engineering not just single organisms but enforcing the change in every descendant – re-shaping entire species and ecosystems at will.
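The spreading effect described above is easy to sketch numerically. Below is a minimal, deterministic toy model — not from the article, with illustrative parameter values — comparing ordinary Mendelian inheritance, where an allele's frequency stays put on average, against a homing drive that converts the wild-type allele in heterozygotes:

```python
def step(p, homing=0.0):
    """Advance allele frequency p by one generation of random mating.

    Heterozygotes occur at frequency 2p(1-p); with a gene drive,
    `homing` is the chance the drive copies itself over the wild-type
    allele in each heterozygote, adding p(1-p)*homing to the frequency.
    """
    return p + p * (1 - p) * homing

p_mendel = p_drive = 0.01  # start with the trait in 1% of alleles
for generation in range(12):
    p_mendel = step(p_mendel)              # plain Mendelian: no change
    p_drive = step(p_drive, homing=0.95)   # drive: spreads toward fixation

print(f"after 12 generations: mendel={p_mendel:.2f}, drive={p_drive:.3f}")
```

With these illustrative numbers the driven allele climbs from 1 percent to over 99 percent of the population in about a dozen generations, while the ordinary allele stays where it started — the "nearly 100 percent" spread reported in mosquitoes, fruit flies, and yeast.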

It’s a perfect case of a very high-leverage technology. Archimedes famously said “Give me a lever long enough and a fulcrum on which to place it, and I shall move the world.” Gene drive developers are in effect saying “Give me a gene drive and an organism to put it in and I can wipe out species, alter ecosystems and cause large-scale modifications.” Gene drive pioneer Kevin Esvelt calls gene drives “an experiment where if you screw up, it affects the whole world”. [Continue reading…]


Has the quantum era begun?

IDG News Service reports: Quantum computing’s full potential may still be years away, but there are plenty of benefits to be realized right now.

So argues Vern Brownell, president and CEO of D-Wave Systems, whose namesake quantum system is already in its second generation.

Launched 17 years ago by a team with roots at Canada’s University of British Columbia, D-Wave introduced what it called “the world’s first commercially available quantum computer” back in 2010. Since then the company has doubled the number of qubits, or quantum bits, in its machines roughly every year. Today, its D-Wave 2X system boasts more than 1,000.
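The doubling cadence is easy to check. Assuming a 128-qubit starting point for the first commercial machine (a figure used here purely for illustration, not stated in the article), yearly doubling crosses the 1,000-qubit mark in just a few steps:

```python
qubits, doublings = 128, 0  # assumed size of the first commercial system
while qubits <= 1000:
    qubits *= 2     # one doubling per year, per the reported cadence
    doublings += 1
print(f"{doublings} doublings -> {qubits} qubits")  # 3 doublings -> 1024 qubits
```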

The company doesn’t disclose its full customer list, but Google, NASA and Lockheed Martin are all on it, D-Wave says. In a recent experiment, Google reported that D-Wave’s technology outperformed a conventional machine by 100 million times. [Continue reading…]


The range of the mind’s eye is restricted by the skill of the hand


Jonathan Waldman writes: Sometime in 1882, a skinny, dark-haired, 11-year-old boy named Harry Brearley entered a steelworks for the first time. A shy kid — he was scared of the dark, and a picky eater — he was also curious, and the industrial revolution in Sheffield, England, offered much in the way of amusements. He enjoyed wandering around town — he later called himself a Sheffield Street Arab — watching road builders, bricklayers, painters, coal deliverers, butchers, and grinders. He was drawn especially to workshops; if he couldn’t see in a shop window, he would knock on the door and offer to run an errand for the privilege of watching whatever work was going on inside. Factories were even more appealing, and he had learned to gain access by delivering, or pretending to deliver, lunch or dinner to an employee. Once inside, he must have reveled, for not until the day’s end did he emerge, all grimy and gray but for his blue eyes. Inside the steelworks, the action compelled him so much that he spent hours sitting inconspicuously on great piles of coal, breathing through his mouth, watching brawny men shoveling fuel into furnaces, hammering white-hot ingots of iron.

There was one operation in particular that young Harry liked: a toughness test performed by the blacksmith. After melting and pouring a molten mixture from a crucible, the blacksmith would cast a bar or two of that alloy, and after it cooled, he would cut notches in the ends of those bars. Then he’d put the bars in a vise, and hammer away at them.

The effort required to break the metal bars, as interpreted through the blacksmith’s muscles, could vary by an order of magnitude, but the result of the test was expressed qualitatively. The metal was pronounced on the spot either rotten or darned good stuff. The latter was simply called D.G.S. The aim of the men at that steelworks, and every other, was to produce D.G.S., and Harry took that to heart.

In this way, young Harry became familiar with steelmaking long before he formally taught himself as much as there was to know about the practice. It was the beginning of a life devoted to steel, without the distractions of hobbies, vacations, or church. It was the origin of a career in which Brearley wrote eight books on metals, five of which contain the word steel in the title; in which he could argue about steelmaking — but not politics — all night; and in which the love and devotion he bestowed upon inanimate metals exceeded that which he bestowed upon his parents or wife or son. Steel was Harry’s true love. It would lead, eventually, to the discovery of stainless steel.

Harry Brearley was born on Feb. 18, 1871, and grew up poor, in a small, cramped house on Marcus Street, in Ramsden’s Yard, on a hill in Sheffield. The city was the world capital of steelmaking; by 1850 Sheffield steelmakers produced half of all the steel in Europe, and 90 percent of the steel in England. By 1860, no fewer than 178 edge tool and saw makers were registered in Sheffield. In the first half of the 19th century, as Sheffield rose to prominence, the population of the city grew fivefold, and its filth grew proportionally. A saying at the time, that “where there’s muck there’s money,” legitimized the grime, reek, and dust of industrial Sheffield, but Harry recognized later that it was a misfortune to be from there, for nobody had much ambition. [Continue reading…]
