The New York Times reports: Microsoft has lost customers, including the government of Brazil.
IBM is spending more than a billion dollars to build data centers overseas to reassure foreign customers that their information is safe from prying eyes in the United States government.
And tech companies abroad, from Europe to South America, say they are gaining customers that are shunning United States providers, suspicious because of the revelations by Edward J. Snowden that tied these providers to the National Security Agency’s vast surveillance program.
Even as Washington grapples with the diplomatic and political fallout of Mr. Snowden’s leaks, the more urgent issue, companies and analysts say, is economic. Technology executives, including Mark Zuckerberg of Facebook, raised the issue when they went to the White House on Friday for a meeting with President Obama.
It is impossible to see now the full economic ramifications of the spying disclosures — in part because most companies are locked in multiyear contracts — but the pieces are beginning to add up as businesses question the trustworthiness of American technology products. [Continue reading...]
Philip Ball writes: For centuries, scientists studied light to comprehend the visible world. Why are things colored? What is a rainbow? How do our eyes work? And what is light itself? These questions have preoccupied scientists and philosophers since the time of Aristotle, engaging minds from Roger Bacon to Isaac Newton, Michael Faraday, Thomas Young, and James Clerk Maxwell.
But in the late 19th century all that changed, and it was largely Maxwell’s doing. This was the period in which the whole focus of physics — then still emerging as a distinct scientific discipline — shifted from the visible to the invisible. Light itself was instrumental to that change. Not only were the components of light invisible “fields,” but light was revealed as merely a small slice of a rainbow extending far into the unseen.
Physics has never looked back. Today its theories and concepts are concerned largely with invisible entities: not only unseen force fields and insensible rays but particles too small to see even with the most advanced microscopes. We now know that our everyday perception grants us access to only a tiny fraction of reality. Telescopes responding to radio waves, infrared radiation, and X-rays have vastly expanded our view of the universe, while electron microscopes, X-ray beams, and other fine probes of nature’s granularity have unveiled the microworld hidden beyond our visual acuity. Theories at the speculative forefront of physics flesh out this unseen universe with parallel worlds and with mysterious entities named for their very invisibility: dark matter and dark energy.
This move beyond the visible has become a fundamental part of science’s narrative. But it’s a more complicated shift than we often appreciate. Making sense of what is unseen — of what lies “beyond the light” — has a much longer history in human experience. Before science had the means to explore that realm, we had to make do with stories that became enshrined in myth and folklore. Those stories aren’t banished as science advances; they are simply reinvented. Scientists working at the forefront of the invisible will always be confronted with gaps in knowledge, understanding, and experimental capability. In the face of those limits, they draw unconsciously on the imagery of the old stories. This is a necessary part of science, and these stories can sometimes suggest genuinely productive scientific ideas. But the danger is that we will start to believe them at face value, mistaking them for theories.
A backward glance at the history of the invisible shows how the narratives and tropes of myth and folklore can stimulate science, while showing that the truth will probably turn out to be far stranger and more unexpected than these old stories can accommodate. [Continue reading...]
William J Broad writes: American science, long a source of national power and pride, is increasingly becoming a private enterprise.
In Washington, budget cuts have left the nation’s research complex reeling. Labs are closing. Scientists are being laid off. Projects are being put on the shelf, especially in the risky, freewheeling realm of basic research. Yet from Silicon Valley to Wall Street, science philanthropy is hot, as many of the richest Americans seek to reinvent themselves as patrons of social progress through science research.
The result is a new calculus of influence and priorities that the scientific community views with a mix of gratitude and trepidation.
“For better or worse,” said Steven A. Edwards, a policy analyst at the American Association for the Advancement of Science, “the practice of science in the 21st century is becoming shaped less by national priorities or by peer-review groups and more by the particular preferences of individuals with huge amounts of money.”
They have mounted a private war on disease, with new protocols that break down walls between academia and industry to turn basic discoveries into effective treatments. They have rekindled traditions of scientific exploration by financing hunts for dinosaur bones and giant sea creatures. They are even beginning to challenge Washington in the costly game of big science, with innovative ships, undersea craft and giant telescopes — as well as the first private mission to deep space.
The new philanthropists represent the breadth of American business, people like Michael R. Bloomberg, the former New York mayor (and founder of the media company that bears his name), James Simons (hedge funds) and David H. Koch (oil and chemicals), among hundreds of wealthy donors. Especially prominent, though, are some of the boldface names of the tech world, among them Bill Gates (Microsoft), Eric E. Schmidt (Google) and Lawrence J. Ellison (Oracle).
This is philanthropy in the age of the new economy — financed with its outsize riches, practiced according to its individualistic, entrepreneurial creed. The donors are impatient with the deliberate, and often politicized, pace of public science, they say, and willing to take risks that government cannot or simply will not consider.
Yet that personal setting of priorities is precisely what troubles some in the science establishment. Many of the patrons, they say, are ignoring basic research — the kind that investigates the riddles of nature and has produced centuries of breakthroughs, even whole industries — for a jumble of popular, feel-good fields like environmental studies and space exploration.
As the power of philanthropic science has grown, so has the pitch, and the edge, of the debate. Nature, a family of leading science journals, has published a number of wary editorials, one warning that while “we applaud and fully support the injection of more private money into science,” the financing could also “skew research” toward fields more trendy than central.
“Physics isn’t sexy,” William H. Press, a White House science adviser, said in an interview. “But everybody looks at the sky.”
Fundamentally at stake, the critics say, is the social contract that cultivates science for the common good. They worry that the philanthropic billions tend to enrich elite universities at the expense of poor ones, while undermining political support for federally sponsored research and its efforts to foster a greater diversity of opportunity — geographic, economic, racial — among the nation’s scientific investigators.
Historically, disease research has been particularly prone to unequal attention along racial and economic lines. A look at major initiatives suggests that the philanthropists’ war on disease risks widening that gap, as a number of the campaigns, driven by personal adversity, target illnesses that predominantly afflict white people — like cystic fibrosis, melanoma and ovarian cancer. [Continue reading...]
Ars Technica reports: The Universe is incredibly regular. The variation of the cosmos’ temperature across the entire sky is tiny: a few millionths of a degree, no matter which direction you look. Yet the same light from the very early cosmos that reveals the Universe’s evenness also tells astronomers a great deal about the conditions that gave rise to irregularities like stars, galaxies, and (incidentally) us.
That light is the cosmic microwave background, and it provides some of the best knowledge we have about the structure, content, and history of the Universe. But it also contains a few mysteries: on very large scales, the cosmos seems to have a certain lopsidedness. That slight asymmetry is reflected in temperature fluctuations much larger than any galaxy, aligned on the sky in a pattern facetiously dubbed “the axis of evil.”
The lopsidedness is real, but cosmologists are divided over whether it reveals anything meaningful about the fundamental laws of physics. The fluctuations are sufficiently small that they could arise from random chance. We have just one observable Universe, but nobody sensible believes we can see all of it. With a sufficiently large cosmos beyond the reach of our telescopes, the rest of the Universe may balance the oddity that we can see, making it a minor, local variation.
However, if the asymmetry can’t be explained away so simply, it could indicate that some new physical mechanisms were at work in the early history of the Universe. As Amanda Yoho, a graduate student in cosmology at Case Western Reserve University, told Ars, “I think the alignments, in conjunction with all of the other large angle anomalies, must point to something we don’t know, whether that be new fundamental physics, unknown astrophysical or cosmological sources, or something else.” [Continue reading...]
Following Facebook’s $19 billion acquisition of WhatsApp, Reuven Cohen writes: In November 2013, a survey of smartphone owners found that WhatsApp was the leading social messaging app in countries including Spain, Switzerland, Germany and Japan. Yet at 450 million users and growing, there is a strong likelihood that Facebook and WhatsApp share the majority of the same user base. So what’s driving the massive valuation? One answer might be users’ attention. Unlike many other mobile apps, WhatsApp is a service its users actually use on a daily or even hourly basis.
“Attention,” write Thomas Mandel and Gerard Van der Leun in their 1996 book Rules of the Net, “is the hard currency of cyberspace.” This has never been truer.
WhatsApp’s value may have less to do with disrupting the telecom world than with a looming battle for Internet users’ rapidly decreasing attention spans. A study back in 2011 uncovered the reality for most mobile apps: most people never use an app more than once. According to the study, 26% of the time customers never give an app a second try. With an ever-increasing number of apps competing for users’ attention, the only metric that really matters is whether or not people actually use them. Your attention may very well be the fundamental value behind Facebook’s purchase.
In a 1997 Wired article, Michael H. Goldhaber describes the shift towards the so-called Attention Economy: “Attention has its own behavior, its own dynamics, its own consequences. An economy built on it will be different than the familiar material-based one.”
His thesis is that as the Internet becomes an increasingly strong presence in the overall economy and our daily lives, the flow of attention will not only anticipate the flow of money, but also eventually replace it altogether. Fast-forward 17 years and his thesis has never been more true.
As we become ever more bombarded with information, the value of this information decreases. Just look at the improvements made to Facebook’s news feed over the years. In an attempt to make its news feed more useful, the company has implemented advanced algorithms that attempt to tailor the flow of information to your specific interests. The better Facebook gets at keeping your attention, the more valuable you become. Yes, you are the product. [Continue reading...]
To the extent that corporations are in the business of corralling, controlling, and effectively claiming ownership of people’s attention, the only way of finding freedom in such a world will derive from each individual’s effort to cultivate their own powers of autonomous attention.
You might be thinking that a minute sample was taken among a population of the severely uninformed, but the evidence suggests otherwise. The National Science Foundation conducts a poll each year to measure scientific literacy. This year, 2,200 people were asked ten questions about the physical and biological sciences, and about one in four did not know that the Earth revolves around the Sun, a fact established by Nicolaus Copernicus’s heliocentric model, published in 1543.
Now before jumping to the conclusion that this provides yet more evidence that Americans are strikingly ignorant, it’s worth noting that among citizens of the European Union polled in 2005, 29% believed the Sun revolves around the Earth.
But hold on — here’s perhaps the most revealing element in popular awareness about basic science on both sides of the Atlantic: both population groups demonstrated a better understanding of plate tectonics than of the structure of the solar system.
83% of Americans and 87% of Europeans understand that “the continents on which we live have been moving for millions of years and will continue to move in the future.”
Does this mean that continental drift is an easier concept to grasp than the structure of the solar system?
I don’t think so. Neither do I think that a quarter of Americans believe in Ptolemaic astronomy. It seems more likely that a significant number of people who are not speaking their native language find the question — Does the Earth go around the Sun, or does the Sun go around the Earth? — grammatically challenging.
In other words, this poll may reveal less about what people believe than it reveals about how well they understand what they are being asked.
Union of Concerned Scientists: Back in 2012, after the Competitive Enterprise Institute and the National Review each published pieces that likened climate scientist Michael E. Mann to a child molester and called his work a fraud, Mann fought back with a lawsuit, charging them with libel. Now, in a preliminary ruling, a Superior Court Judge has sided with Mann, paving the way for the case to move forward and potentially setting an important precedent about the limits of disinformation.
The ruling, in essence, reinforces the wise adage attributed to the former New York senator Daniel Patrick Moynihan that, while everyone is entitled to his or her own opinion, we are not each entitled to our own facts. But first, some background.
Michael Mann, a world-renowned climate scientist at Penn State University, is perhaps best known as the author of the so-called “Hockey Stick” graph. Some 15 years ago, in 1999, Mann and two colleagues published data they had compiled from tree rings, coral growth bands and ice cores, as well as more recent temperature measurements, to chart 1,000 years’ worth of climate data. The resulting graph of their findings showed relatively stable global temperatures followed by a steep warming trend beginning in the 1900s. One of Mann’s colleagues gave it the nickname because the graph looks something like a hockey stick lying on its side with the upturned blade representing the sharp, comparatively recent temperature increase. It quickly became one of the most famous, easy-to-grasp representations of the reality of global warming.
The United Nations’ Intergovernmental Panel on Climate Change featured Mann’s work, among similar studies, in their pathbreaking 2001 report, concluding that temperature increases in the 20th century were likely to have been “the largest of any century during the past 1,000 years.” But while Mann’s peer-reviewed research pointed clearly to a human role in global warming, it also made Mann a lightning rod for attacks from those, including many in the fossil fuel industry, who sought to deny the reality of global warming. [Continue reading...]
Douglas Hofstadter — Research on artificial intelligence is sidestepping the core question: how do people think?
Douglas Hofstadter is a cognitive scientist at Indiana University and the Pulitzer Prize-winning author of Gödel, Escher, Bach: An Eternal Golden Braid.
Popular Mechanics: You’ve said in the past that IBM’s Jeopardy-playing computer, Watson, isn’t deserving of the term artificial intelligence. Why?
Douglas Hofstadter: Well, artificial intelligence is a slippery term. It could refer to just getting machines to do things that seem intelligent on the surface, such as playing chess well or translating from one language to another on a superficial level — things that are impressive if you don’t look at the details. In that sense, we’ve already created what some people call artificial intelligence. But if you mean a machine that has real intelligence, that is thinking — that’s inaccurate. Watson is basically a text search algorithm connected to a database just like Google search. It doesn’t understand what it’s reading. In fact, read is the wrong word. It’s not reading anything because it’s not comprehending anything. Watson is finding text without having a clue as to what the text means. In that sense, there’s no intelligence there. It’s clever, it’s impressive, but it’s absolutely vacuous.
Do you think we’ll start seeing diminishing returns from a Watson-like approach to AI?
I can’t really predict that. But what I can say is that I’ve monitored Google Translate — which uses a similar approach — for many years. Google Translate is developing and it’s making progress because the developers are inventing new, clever ways of milking the quickness of computers and the vastness of its database. But it’s not making progress at all in the sense of understanding your text, and you can still see it falling flat on its face a lot of the time. And I know it’ll never produce polished [translated] text, because real translating involves understanding what is being said and then reproducing the ideas that you just heard in a different language. Translation has to do with ideas, it doesn’t have to do with words, and Google Translate is about words triggering other words.
So why are AI researchers so focused on building programs and computers that don’t do anything like thinking?
They’re not studying the mind and they’re not trying to find out the principles of intelligence, so research may not be the right word for what drives people in the field that today is called artificial intelligence. They’re doing product development.
I might say though, that 30 to 40 years ago, when the field was really young, artificial intelligence wasn’t about making money, and the people in the field weren’t driven by developing products. It was about understanding how the mind works and trying to get computers to do things that the mind can do. The mind is very fluid and flexible, so how do you get a rigid machine to do very fluid things? That’s a beautiful paradox and very exciting, philosophically. [Continue reading...]
“Know thyself” has been a maxim throughout the ages, rooted in the belief that wisdom and wise living demand we acquire self-knowledge.
As Shakespeare wrote:
This above all: to thine own self be true,
And it must follow, as the night the day,
Thou canst not then be false to any man.
New research on human feelings, however, seems to have the absurd implication that if you really want to know your inner being, you should probably carry around a mirror and pay close attention to your facial expressions. The researchers clearly believe that monitoring muscle contractions is a more reliable way of knowing what someone is feeling than using any kind of subjective measure. Reduced to this muscular view, it turns out — according to the research — that we only have four basic feelings.
Likewise, devotees of the Quantified Self seem to believe that it’s not really possible to know what it means to be alive unless one can be hooked up to and study the output from one or several digital devices.
In each of these cases we are witnessing a trend driven by technological development through which the self is externalized.
Thoreau warned that we have “become the tools of our tools,” but the danger lying beyond that is that we become our tools; that our sense of who we are becomes so pervasively mediated by devices that without these devices we conclude we are nothing.
Josh Cohen writes: With January over, the spirit of self-improvement in which you began the year can start to evaporate. Except now your feeble excuses are under assault from a glut of “self-tracking” devices and apps. Your weakness for saturated fats and alcohol, your troubled sleep and mood swings, your tendencies to procrastination, indecision and disorganisation — all your quirks and flaws can now be monitored and remedied with the help of mobile technology.
Technology offers solutions not only to familiar problems of diet, exercise and sleep, but to anxieties you weren’t even aware of. If you can’t resolve a moral dilemma, there’s an app that will solicit your friends’ advice. If you’re concerned about your toddler’s language development, there’s a small device that will measure the number and range of words she’s using against those of her young peers.
Quantified Self (QS) is a growing global movement selling a new form of wisdom, encapsulated in the slogan “self-knowledge through numbers”. Rooted in the American tech scene, it encourages people to monitor all aspects of their physical, emotional, cognitive, social, domestic and working lives. The wearable cameras that enable you to broadcast your life minute by minute; the nano-sensors that can be installed in any region of the body to track vital functions from blood pressure to cholesterol levels; the voice recorders that pick up the sound of your sleeping self or your baby’s babble: together, these devices can provide you with the means to regain control over your fugitive life.
This vision has traction at a time when our daily lives, as the Snowden leaks have revealed, are being lived in the shadow of state agencies, private corporations and terrorist networks — overwhelming yet invisible forces that leave us feeling powerless to maintain boundaries around our private selves. In a world where our personal data appears vulnerable to intrusion and exploitation, a movement that effectively encourages you to become your own spy is bound to resonate. Surveillance technologies will put us back in the centre of the lives from which they’d displaced us. Our authoritative command of our physiological and behavioural “numbers” can assure us that after all, no one knows us better than we do. [Continue reading...]
Paul Willis writes: Science is not a democracy. A consensus of evidence may be interesting, but technically it may not be significant. The thoughts of a majority of scientists don’t amount to a hill of beans. It’s all about the evidence. The science is never settled.
These are refrains that I and other science communicators have been using over and over again when we turn to analysing debates and discussions based on scientific principles. I think we get torn between remaining true to the philosophical principles by which science is conducted and trying to make those principles familiar to an audience that probably does not understand them.
So let me introduce a concept that is all-too-often overlooked in science discussions, that can actually shed some light deep into the mechanisms of science and explain the anatomy of a scientific debate. It’s the phonically beautiful term ‘consilience’.
Consilience means using several different lines of inquiry that converge on the same or similar conclusions. The more independent investigations you have that reach the same result, the more confidence you can have that the conclusion is correct. Moreover, if one independent investigation produces a result that is at odds with the consilience of several other investigations, that is an indication that the error probably lies in the methods of the outlier investigation, not in the conclusions of the consilience.
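The intuition that many weak, independent lines of evidence add up to near-certainty can be made quantitative with a toy Bayesian sketch. This is my own illustration, not from Willis's article, and both the 3:1 likelihood ratio and the assumption of full independence between the lines of inquiry are illustrative choices:

```python
# Toy model of consilience: combine independent lines of evidence
# by multiplying their likelihood ratios onto the prior odds.
def combined_probability(prior_odds, likelihood_ratio, n_lines):
    """Posterior probability of a conclusion after n independent lines
    of evidence, each favouring it by the given likelihood ratio."""
    posterior_odds = prior_odds * likelihood_ratio ** n_lines
    return posterior_odds / (1 + posterior_odds)

# One weak line of evidence barely moves us past a coin flip...
print(combined_probability(1.0, 3.0, 1))   # 0.75
# ...but ten such independent lines are all but conclusive.
print(round(combined_probability(1.0, 3.0, 10), 6))  # 0.999983
```

The multiplication is the whole point: no single factor of 3 is convincing on its own, yet ten of them yield odds of roughly 59,000 to 1, which is why a lone contrary result is more plausibly a methodological error than a refutation.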
Let’s take an example to unpack this concept, the one where I first came across the term and a beautiful case of consilience at work. Charles Darwin’s On the Origin of Species is a masterpiece of consilience. Each chapter is a separate line of investigation and, within each chapter, there are numerous examples, investigations and experiments that all join together to reach the same conclusion: that life changes through time and that life has evolved on Earth. Take apart On the Origin of Species case by case and no single piece of evidence that Darwin mustered conclusively demonstrates that evolution is true. But add those cases back together and the consilience is clear: evidence from artificial breeding, palaeontology, comparative morphology and a host of other independent lines of investigation combine to confirm the same inescapable conclusion.
That was 1859. Since then yet more investigations have been added to the consilience for evolution. What’s more, these investigations within the biological and geological sciences have been joined with others from physics and chemistry as well as completely new areas of science such as genetics, radiometric dating and molecular biology. Each independent line of investigation builds the consilience that the world and the universe are extremely old and that life has evolved through unfathomable durations of time here on our home planet.
So, when a new line of investigation comes along claiming evidence and conclusions contrary to evolution, how can that be accommodated within the consilience? How does it relate to so many independent strands conjoined by a similar conclusion at odds with the newcomer? Can one piece of evidence overthrow such a huge body of work?
Such is the thinking of those pesky creationists who regularly come up with “Ah-Ha!” and “Gotcha!” factoids that apparently overturn, not just evolution, but the whole consilience of science. [Continue reading...]
The Guardian reports: In 2004, the European Space Agency launched the Rosetta probe on an audacious mission to chase down a comet and place a robot on its surface. For nearly three years Rosetta had been hurtling through space in a state of hibernation. On Monday, it awoke.
The radio signal from Rosetta came from 800m kilometres away, a distance made hardly more conceivable by its proximity to Jupiter. The signal appeared on a computer screen as a tremulous green spike, but it meant the world – perhaps the solar system – to the scientists and engineers gathered at European Space Operations Centre in Darmstadt.
In a time when every spacecraft worth its salt has a Twitter account, the inevitable message followed from @Esa_Rosetta. It was brief and joyful: “Hello, world!”.
Speaking to the assembled crowd at Darmstadt, Matt Taylor, project scientist on the Rosetta mission, said: “Now it’s up to us to do the work we’ve promised to do.”
Just 10 minutes before, he’d been facing an uncertain future. If the spacecraft had not woken up, there would be no science to do and the role of project scientist would have been redundant.
The comet hunter had been woken by an internal alarm clock at 10am UK time, but only after several hours of warming up its instruments and orientating towards Earth could it send a message home.
In the event, the missive was late. Taylor had been hiding his nerves well, even joking about the wait on Twitter, but when the clock passed 19:00 CET, making the signal at least 15 minutes late, the mood changed. ESA scientists and engineers started rocking on their heels, clutching their arms around themselves, and stopping the banter that had helped pass the time. Taylor himself sat down, and seemed to withdraw.
Then the flood of relief when the blip on the graph appeared. “I told you it would work,” said Taylor with a grin.
The successful rousing of the distant probe marks a crucial milestone in a mission that is more spectacular and ambitious than any the European Space Agency has previously conceived. The €1bn, car-sized spacecraft will now close in on a comet, orbit around it, and send down a lander, called Philae, the first time such a feat has been attempted.
The comet, 67P/Churyumov-Gerasimenko, is 4km wide, or roughly the size of Mont Blanc. That is big enough to study, but too measly to have a gravitational field strong enough to hold the lander in place. Instead, the box of sensors on legs will latch on to the comet by firing an explosive harpoon the moment it lands, and twisting ice screws into its surface.
To my mind, the long-standing belief that comets may have had a role in the origin of life on Earth seems even more speculative than Jeremy England’s idea that it could have arisen as a result of physical processes originating on this planet’s surface.
As much as anything, what seems so extraordinary about a project such as Rosetta is its accomplishment as a feat of navigation (assuming that it does in fact make its planned rendezvous with the comet). It’s like aiming an arrow at a speck of dust on a moving target in the dark.
SecurityWatch: At a recent RSA Security Conference, Nico Sell was on stage announcing that her company — Wickr — was making drastic changes to ensure its users’ security. She said that the company would switch from RSA encryption to elliptic curve encryption, and that the service wouldn’t have a backdoor for anyone.
As she left the stage, before she’d even had a chance to take her microphone off, a man approached her and introduced himself as an agent with the Federal Bureau of Investigation. He then proceeded to “casually” ask if she’d be willing to install a backdoor into Wickr that would allow the FBI to retrieve information.
This encounter, and the agent’s casual demeanor, is apparently business as usual as intelligence and law enforcement agencies seek to gain greater access into protected communication systems. Since her encounter with the agent at RSA, Sell says it’s a story she’s heard again and again. “It sounds like that’s how they do it now,” she told SecurityWatch. “Always casual, testing, because most people would say yes.” [Continue reading...]
Tom Chatfield writes: Massive, inconceivable numbers are commonplace in conversations about computers. The exabyte, a one followed by 18 zeroes worth of bytes; the petaflop, one quadrillion calculations performed in a single second. Beneath the surface of our lives churns an ocean of information, from whose depths answers and optimisations ascend like munificent kraken.
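To put those two units side by side, here is a back-of-the-envelope sketch (my own arithmetic, simply restating the definitions Chatfield gives):

```python
exabyte = 10 ** 18   # bytes: a one followed by 18 zeroes
petaflop = 10 ** 15  # calculations performed in a single second

# Even a machine sustaining a full petaflop would need 1,000 seconds,
# roughly 17 minutes, to perform one operation per byte of an exabyte.
seconds = exabyte / petaflop
print(seconds)           # 1000.0
print(seconds / 60)      # minutes
```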
This is the much-hyped realm of “big data”: unprecedented quantities of information generated at unprecedented speed, in unprecedented variety.
From particle physics to predictive search and aggregated social media sentiments, we reap its benefits across a broadening gamut of fields. We agonise about over-sharing while the numbers themselves tick upwards. Mostly, though, we fail to address a handful of questions more fundamental even than privacy. What are machines good at; what are they less good at; and when are their answers worse than useless?
Consider cats. As commentators like the American psychologist Gary Marcus have noted, it’s extremely difficult to teach a computer to recognise cats. And that’s not for want of trying. Back in the summer of 2012, Google fed 10 million feline-featuring images (there’s no shortage online) into a massively powerful custom-built system. The hope was that the alchemy of big data would do for images what it has already done for machine translation: that an algorithm could learn from a sufficient number of examples to approximate accurate solutions to the question “what is that?”
Sadly, cats proved trickier than words. Although the system did develop a rough measure of “cattiness”, it struggled with variations in size, positioning, setting and complexity. Once expanded to encompass 20,000 potential categories of object, the identification process managed just 15.8% accuracy: a huge improvement on previous efforts, but hardly a new digital dawn. [Continue reading...]
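The scaling problem the excerpt describes, accuracy collapsing as the number of candidate categories grows, can be illustrated with a deliberately crude sketch. This is my own toy model on synthetic data, nothing like Google's actual system: a nearest-centroid classifier that must pick the right category for a noisy feature vector.

```python
import random

random.seed(0)

def accuracy(n_categories, dim=10, noise=1.5, trials=500):
    """Fraction of noisy samples assigned to their true category by a
    nearest-centroid rule, with randomly placed category centroids."""
    centroids = [[random.gauss(0, 1) for _ in range(dim)]
                 for _ in range(n_categories)]
    correct = 0
    for _ in range(trials):
        true = random.randrange(n_categories)
        # A sample is its category's centroid plus Gaussian noise.
        sample = [c + random.gauss(0, noise) for c in centroids[true]]
        guess = min(range(n_categories),
                    key=lambda k: sum((s - c) ** 2
                                      for s, c in zip(sample, centroids[k])))
        correct += (guess == true)
    return correct / trials

# Identical noise, identical features: only the category count changes.
print(f"2 categories:   {accuracy(2):.2f}")    # high accuracy
print(f"200 categories: {accuracy(200):.2f}")  # far lower accuracy
```

The mechanism is the same one that took Google's system down to 15.8% over 20,000 categories: as categories multiply, their representations crowd together, and the same amount of noise pushes ever more samples across a decision boundary.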
Michael Pollan writes: In 1973, a book claiming that plants were sentient beings that feel emotions, prefer classical music to rock and roll, and can respond to the unspoken thoughts of humans hundreds of miles away landed on the New York Times best-seller list for nonfiction. “The Secret Life of Plants,” by Peter Tompkins and Christopher Bird, presented a beguiling mashup of legitimate plant science, quack experiments, and mystical nature worship that captured the public imagination at a time when New Age thinking was seeping into the mainstream. The most memorable passages described the experiments of a former C.I.A. polygraph expert named Cleve Backster, who, in 1966, on a whim, hooked up a galvanometer to the leaf of a dracaena, a houseplant that he kept in his office. To his astonishment, Backster found that simply by imagining the dracaena being set on fire he could make it rouse the needle of the polygraph machine, registering a surge of electrical activity suggesting that the plant felt stress. “Could the plant have been reading his mind?” the authors ask. “Backster felt like running into the street and shouting to the world, ‘Plants can think!’ ”
Backster and his collaborators went on to hook up polygraph machines to dozens of plants, including lettuces, onions, oranges, and bananas. Backster claimed that plants reacted to the thoughts (good or ill) of humans in close proximity and, in the case of humans familiar to them, over a great distance. In one experiment designed to test plant memory, Backster found that a plant that had witnessed the murder (by stomping) of another plant could pick out the killer from a lineup of six suspects, registering a surge of electrical activity when the murderer was brought before it. Backster’s plants also displayed a strong aversion to interspecies violence. Some had a stressful response when an egg was cracked in their presence, or when live shrimp were dropped into boiling water, an experiment that Backster wrote up for the International Journal of Parapsychology, in 1968.
In the ensuing years, several legitimate plant scientists tried to reproduce the “Backster effect” without success. Much of the science in “The Secret Life of Plants” has been discredited. But the book had made its mark on the culture. Americans began talking to their plants and playing Mozart for them, and no doubt many still do. This might seem harmless enough; there will probably always be a strain of romanticism running through our thinking about plants. (Luther Burbank and George Washington Carver both reputedly talked to, and listened to, the plants they did such brilliant work with.) But in the view of many plant scientists “The Secret Life of Plants” has done lasting damage to their field. According to Daniel Chamovitz, an Israeli biologist who is the author of the recent book “What a Plant Knows,” Tompkins and Bird “stymied important research on plant behavior as scientists became wary of any studies that hinted at parallels between animal senses and plant senses.” Others contend that “The Secret Life of Plants” led to “self-censorship” among researchers seeking to explore the “possible homologies between neurobiology and phytobiology”; that is, the possibility that plants are much more intelligent and much more like us than most people think—capable of cognition, communication, information processing, computation, learning, and memory.
The quotation about self-censorship appeared in a controversial 2006 article in Trends in Plant Science proposing a new field of inquiry that the authors, perhaps somewhat recklessly, elected to call “plant neurobiology.” The six authors—among them Eric D. Brenner, an American plant molecular biologist; Stefano Mancuso, an Italian plant physiologist; František Baluška, a Slovak cell biologist; and Elizabeth Van Volkenburgh, an American plant biologist—argued that the sophisticated behaviors observed in plants cannot at present be completely explained by familiar genetic and biochemical mechanisms. Plants are able to sense and optimally respond to so many environmental variables—light, water, gravity, temperature, soil structure, nutrients, toxins, microbes, herbivores, chemical signals from other plants—that there may exist some brainlike information-processing system to integrate the data and coördinate a plant’s behavioral response. The authors pointed out that electrical and chemical signalling systems have been identified in plants which are homologous to those found in the nervous systems of animals. They also noted that neurotransmitters such as serotonin, dopamine, and glutamate have been found in plants, though their role remains unclear.
Hence the need for plant neurobiology, a new field “aimed at understanding how plants perceive their circumstances and respond to environmental input in an integrated fashion.” The article argued that plants exhibit intelligence, defined by the authors as “an intrinsic ability to process information from both abiotic and biotic stimuli that allows optimal decisions about future activities in a given environment.” Shortly before the article’s publication, the Society for Plant Neurobiology held its first meeting, in Florence, in 2005. A new scientific journal, with the less tendentious title Plant Signaling & Behavior, appeared the following year.
Depending on whom you talk to in the plant sciences today, the field of plant neurobiology represents either a radical new paradigm in our understanding of life or a slide back down into the murky scientific waters last stirred up by “The Secret Life of Plants.” Its proponents believe that we must stop regarding plants as passive objects—the mute, immobile furniture of our world—and begin to treat them as protagonists in their own dramas, highly skilled in the ways of contending in nature. They would challenge contemporary biology’s reductive focus on cells and genes and return our attention to the organism and its behavior in the environment. It is only human arrogance, and the fact that the lives of plants unfold in what amounts to a much slower dimension of time, that keep us from appreciating their intelligence and consequent success. Plants dominate every terrestrial environment, composing ninety-nine per cent of the biomass on earth. By comparison, humans and all the other animals are, in the words of one plant neurobiologist, “just traces.” [Continue reading...]