Category Archives: Science

Are humans reaching the limits of our ability to probe the laws of nature?


Natalie Wolchover writes: Physicists typically think they “need philosophers and historians of science like birds need ornithologists,” the Nobel laureate David Gross told a roomful of philosophers, historians and physicists last week in Munich, Germany, paraphrasing Richard Feynman.

But desperate times call for desperate measures.

Fundamental physics faces a problem, Gross explained — one dire enough to call for outsiders’ perspectives. “I’m not sure that we don’t need each other at this point in time,” he said.

It was the opening session of a three-day workshop, held in a Romanesque-style lecture hall at Ludwig Maximilian University (LMU Munich) one year after George Ellis and Joe Silk, two white-haired physicists now sitting in the front row, called for such a conference in an incendiary opinion piece in Nature. One hundred attendees had descended on a land with a celebrated tradition in both physics and the philosophy of science to wage what Ellis and Silk declared a “battle for the heart and soul of physics.”

The crisis, as Ellis and Silk tell it, is the wildly speculative nature of modern physics theories, which they say reflects a dangerous departure from the scientific method. Many of today’s theorists — chief among them the proponents of string theory and the multiverse hypothesis — appear convinced of their ideas on the grounds that they are beautiful or logically compelling, despite the impossibility of testing them. Ellis and Silk accused these theorists of “moving the goalposts” of science and blurring the line between physics and pseudoscience. “The imprimatur of science should be awarded only to a theory that is testable,” Ellis and Silk wrote, thereby disqualifying most of the leading theories of the past 40 years. “Only then can we defend science from attack.”

They were reacting, in part, to the controversial ideas of Richard Dawid, an Austrian philosopher whose 2013 book String Theory and the Scientific Method identified three kinds of “non-empirical” evidence that Dawid says can help build trust in scientific theories absent empirical data. Dawid, a researcher at LMU Munich, answered Ellis and Silk’s battle cry and assembled far-flung scholars anchoring all sides of the argument for the high-profile event last week.

Gross, a supporter of string theory who won the 2004 Nobel Prize in physics for his work on the strong nuclear force (the force that binds quarks into protons and neutrons), kicked off the workshop by asserting that the problem lies not with physicists but with a “fact of nature” — one that we have been approaching inevitably for four centuries.

The dogged pursuit of a fundamental theory governing all forces of nature requires physicists to inspect the universe more and more closely — to examine, for instance, the atoms within matter, the protons and neutrons within those atoms, and the quarks within those protons and neutrons. But this zooming in demands ever more energy, and the difficulty and cost of building new machines increase exponentially relative to the energy requirement, Gross said. “It hasn’t been a problem so much for the last 400 years, where we’ve gone from centimeters to millionths of a millionth of a millionth of a centimeter” — the current resolving power of the Large Hadron Collider (LHC) in Switzerland, he said. “We’ve gone very far, but this energy-squared is killing us.”

As we approach the practical limits of our ability to probe nature’s underlying principles, the minds of theorists have wandered far beyond the tiniest observable distances and highest possible energies. Strong clues indicate that the truly fundamental constituents of the universe lie at a distance scale 10 million billion times smaller than the resolving power of the LHC. This is the domain of nature that string theory, a candidate “theory of everything,” attempts to describe. But it’s a domain that no one has the faintest idea how to access. [Continue reading…]
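The arithmetic behind this scaling is worth making concrete. Here is a back-of-the-envelope sketch (my own illustration, not from the article) using the standard quantum relation that resolving a distance d requires a collision energy of roughly E ≈ ħc/d; the length scales plugged in are order-of-magnitude assumptions.

```python
# Rough uncertainty-principle estimate: probing a distance d costs energy
# E ~ hbar*c / d, so every factor of ten in resolution costs roughly a
# factor of ten in collision energy.

HBAR_C_MEV_FM = 197.3  # hbar*c in MeV * femtometres (1 fm = 1e-15 m)

def energy_to_resolve_tev(distance_m: float) -> float:
    """Approximate collision energy (TeV) needed to probe a length scale."""
    distance_fm = distance_m / 1e-15
    return (HBAR_C_MEV_FM / distance_fm) / 1e6  # MeV -> TeV

lhc_scale_m = 1e-19       # roughly the LHC's resolving power (assumed)
planck_scale_m = 1.6e-35  # the Planck length, where strings are thought to live

print(f"LHC scale:    ~{energy_to_resolve_tev(lhc_scale_m):.0f} TeV")     # ~2 TeV
print(f"Planck scale: ~{energy_to_resolve_tev(planck_scale_m):.1e} TeV")  # ~1e16 TeV
```

The second number is why no conceivable accelerator can reach the string domain: it sits roughly sixteen orders of magnitude beyond the LHC's reach, matching the "10 million billion times smaller" figure above.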


A scientific approach designed to precisely calibrate the metrics needed for quantifying bullshit

Science News reports: Dutch social psychologist Diederik Stapel was known for his meteoric rise, until he was known for his fall. His research on social interactions, which spanned topics from infidelity to selfishness to discrimination, frequently appeared in top-tier journals. But then in 2011, three junior researchers raised concerns that Stapel was fabricating data. Stapel’s institution, Tilburg University, suspended him and launched a formal investigation. A commission ultimately determined that of his more than 125 research papers, at least 55 were based on fraudulent data. Stapel now has 57 retractions to his name.

The case provided an unusual opportunity for exploring the language of deception: one set of Stapel’s papers discussed faked data, while another set was based on legitimate results. Linguists David Markowitz and Jeffrey Hancock ran an analysis of the articles in each set that listed Stapel as first author. The researchers discovered particular tells in the language that allowed them to peg the fraudulent work with roughly 70 percent accuracy. While Stapel was careful to concoct data that appeared to be reasonable, he oversold his false goods, using, for example, more science-related terms and more amplifying terms, like extreme and exceptionally, in the now-retracted papers.

Markowitz and Hancock, now at Stanford, are still probing the language of lies, and they recently ran a similar analysis on a larger sample of papers with fudged data.

The bottom line: Fraudulent papers were full of jargon, harder to read, and bloated with references. This parsing-of-language approach, which the team describes in the Journal of Language and Social Psychology, might be used to flag papers that deserve extra scrutiny. But tricks for detecting counterfeit data are unlikely to thwart the murkier problem of questionable research practices or the general lack of clarity in the scientific literature.
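To give a sense of what such a parsing-of-language approach involves, here is a minimal sketch of my own (not Markowitz and Hancock's actual method; the word list, the long-word proxy for jargon, and the citation pattern are all illustrative assumptions):

```python
# Crude surface features of the kind a fraud-flagging analysis might count:
# amplifier rate, jargon density, sentence length (a readability proxy),
# and reference count. Everything here is hypothetical.

import re

AMPLIFIERS = {"extreme", "extremely", "exceptionally", "remarkably", "vastly"}

def text_features(text: str) -> dict:
    words = re.findall(r"[a-zA-Z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        "amplifier_rate": sum(w in AMPLIFIERS for w in words) / max(len(words), 1),
        "jargon_rate": sum(len(w) >= 9 for w in words) / max(len(words), 1),
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "reference_count": len(re.findall(r"\(\d{4}\)", text)),  # e.g. "(2009)"
    }
```

A real analysis would feed features like these into a statistical model trained on known-fraudulent and known-legitimate papers; the roughly 70 percent accuracy quoted above suggests a useful but far from decisive signal.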

“This is an important contribution to the discussion of quality control in research,” Nick Steneck, a science historian at the University of Michigan and an expert in research integrity practices, told me. “But there’s a whole lot of other reasons why clarity and readability of scientific writing matters, including making things understandable to the public.” [Continue reading…]


Naturalists are becoming an endangered species

By David Norman, University of Cambridge

The phrase “Natural History” is linked in most people’s minds today with the institutions and programmes that use it: the various Natural History Museums, or the television programmes narrated so evocatively by renowned naturalist Sir David Attenborough.

Times have changed, though, and used in its traditional sense the phrase now has an almost archaic ring to it, perhaps recalling the Victorian obsession with collecting butterflies or beetles, rocks or fossils, or stuffed birds and animals, or perhaps the 18th-century best-seller, Gilbert White’s The Natural History of Selborne.

Once natural history was part of what was equally archaically called natural philosophy, encompassing the enquiry into all aspects of the natural world that we inhabit, from the tiniest creature to the largest, to molecules and materials, to planets and stars in outer space. These days, we call it science. Natural history specifically strives to study and understand organisms within their environment, which would these days equate to the disciplines of ecology or conservation.

In a recent article in the journal BioScience, a group of 17 scientists decry what they see as a shift away from this traditional learning (once a typical part of biology degrees) that taught students about organisms: where they live, what they eat, how they behave, their variety, and their relationships to the ecosystems in which they live.

Enticed partly by the promise of a course-specific career, and perhaps partly by poorly taught courses that can emphasise rote learning, students are drawn into more exciting fields such as biotechnology or evolutionary developmental biology (“evo-devo”), where understanding an organism is less important than understanding the function of a particular organ or limb.

Continue reading


The human mind as the preeminent scientific instrument

Walter Isaacson writes: This month marks the 100th anniversary of the General Theory of Relativity, the most beautiful theory in the history of science, and in its honor we should take a moment to celebrate the visualized “thought experiments” that were the navigation lights guiding Albert Einstein to his brilliant creation. Einstein relished what he called Gedankenexperimente, ideas that he twirled around in his head rather than in a lab. That’s what teachers call daydreaming, but if you’re Einstein you get to call them Gedankenexperimente.

As these thought experiments remind us, creativity is based on imagination. If we hope to inspire kids to love science, we need to do more than drill them in math and memorized formulas. We should stimulate their minds’ eyes as well. Even let them daydream.

Einstein’s first great thought experiment came when he was about 16. He had run away from his school in Germany, which he hated because it emphasized rote learning rather than visual imagination, and enrolled in a Swiss village school based on the educational philosophy of Johann Heinrich Pestalozzi, who believed in encouraging students to visualize concepts. While there, Einstein tried to picture what it would be like to travel so fast that you caught up with a light beam. If he rode alongside it, he later wrote, “I should observe such a beam of light as an electromagnetic field at rest.” In other words, the wave would seem stationary. But this was not possible according to Maxwell’s equations, which describe the motion and oscillation of electromagnetic fields.
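The conflict Einstein sensed can be stated compactly (a standard textbook observation, not a quotation from Isaacson's essay): in vacuum, Maxwell's equations combine into a wave equation whose solutions always propagate at one fixed speed.

```latex
% In vacuum, Maxwell's equations reduce to the wave equation
\nabla^{2}\mathbf{E} \;=\; \mu_{0}\varepsilon_{0}\,
    \frac{\partial^{2}\mathbf{E}}{\partial t^{2}},
\qquad
% whose solutions travel at the fixed speed
c \;=\; \frac{1}{\sqrt{\mu_{0}\varepsilon_{0}}}
  \;\approx\; 3\times10^{8}\ \mathrm{m/s}.
```

Nothing in the equations singles out a frame in which the wave sits still, which is why the frozen light beam of the thought experiment cannot exist.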

The conflict between his thought experiment and Maxwell’s equations caused Einstein “psychic tension,” he later recalled, and he wandered around nervously, his palms sweating. Some of us can recall what made our palms sweaty as teenagers, and those thoughts didn’t involve Maxwell’s equations. But that’s because we were probably performing less elevated thought experiments. [Continue reading…]


It’s completely ridiculous to think that humans could live on Mars

Danielle and Astro Teller write: Our 12-year-old daughter who, like us, is a big fan of The Martian by Andy Weir, said, “I can’t stand that people think we’re all going to live on Mars after we destroy our own planet. Even after we’ve made the Earth too hot and polluted for humans, it still won’t be as bad as Mars. At least there’s plenty of water here, and the atmosphere won’t make your head explode.”

What makes The Martian so wonderful is that the protagonist survives in a brutally hostile environment, against all odds, by exploiting science in clever and creative ways. To nerds like us, that’s better than Christmas morning or a hot fudge sundae. (One of us is nerdier than the other — I’m not naming any names, but his job title is “Captain of Moonshots.”) The idea of using our ingenuity to explore other planets is thrilling. Our daughter has a good point about escaping man-made disaster on Earth by colonizing Mars, though. It doesn’t make a lot of sense.

Mars has almost no surface water; a toxic atmosphere that is too thin for humans to survive without pressure suits; deadly solar radiation; temperatures lower than Antarctica’s; and few to none of the natural resources that have been critical to human success on Earth. Smart people have proposed solutions for those pesky environmental issues, some of which are seriously sci-fi, like melting the polar ice caps with nuclear bombs. But those aren’t even the real problems.

The real problems have to do with human nature and economics. First, we live on a planet that is perfect for us, and we seem to be unable to prevent ourselves from making it less and less habitable. We’re like a bunch of teenagers destroying our parents’ mansion in one long, crazy party, figuring that our backup plan is to run into the forest and build our own house. We’ll worry about how to get food and a good sound system later. Proponents of Mars colonization talk about “terraforming” Mars to make it more like Earth, but in the meantime, we’re “marsforming” Earth by making our atmosphere poisonous and annihilating our natural resources. We are also well on our way to making Earth one big desert, just like Mars. [Continue reading…]


The theory of parallel universes is not just maths – it is science that can be tested

By Eugene Lim, King’s College London

The existence of parallel universes may seem like something cooked up by science fiction writers, with little relevance to modern theoretical physics. But the idea that we live in a “multiverse” made up of an infinite number of parallel universes has long been considered a scientific possibility – although it is still a matter of vigorous debate among physicists. The race is now on to find a way to test the theory, including searching the sky for signs of collisions with other universes.

It is important to keep in mind that the multiverse view is not actually a theory, it is rather a consequence of our current understanding of theoretical physics. This distinction is crucial. We have not waved our hands and said: “Let there be a multiverse”. Instead the idea that the universe is perhaps one of infinitely many is derived from current theories like quantum mechanics and string theory.

Continue reading


Science is a dynamic, ongoing reconfiguration of knowledge and must be free to change

David P Barash writes: Coming from a scientist, this sounds smug, but here it is: science is one of humanity’s most noble and successful endeavours, and our best way to learn how the world works. We know more than ever about our own bodies, the biosphere, the planet and even the cosmos. We take pictures of Pluto, unravel quantum mechanics, synthesise complex chemicals and can peer into (as well as manipulate) the workings of DNA, not to mention our brains and, increasingly, even our diseases.

Sometimes science’s very success causes trouble, it’s true. Nuclear weapons – perhaps the most immediate threat to life on Earth – were a triumph for science. Then there are the paradoxical downsides of modern medicine, notably overpopulation, plus the environmental destruction that science has unwittingly promoted. But these are not the cause of the crisis faced by science today: a crisis of legitimacy, centred on rampant public distrust and disavowal.

A survey by the Pew Research Center in Washington, DC, conducted with the American Association for the Advancement of Science, reported that in 2015 a mere 33 per cent of the American public accepted evolution. A standard line from – mostly Republican – politicians when asked about climate change is ‘I’m not a scientist’… as though that absolved them from looking at the facts. Vaccines have been among medical science’s most notable achievements (essentially eradicating smallpox and nearly eliminating polio, among other infectious scourges) but the anti-vaccination movement has stalled comparable progress against measles and pertussis.

How can this be? Why must we scientists struggle to defend and promote our greatest achievements? There are many possible factors at work. In some cases, science conflicts with religious belief, particularly among fundamentalists – every year I find it necessary to give my undergraduate students a ‘talk’ in which I am frank that evolutionary science is likely to challenge any literalist religious beliefs they might have. In the political sphere, there is a conflict between scientific facts and short-term economic prospects (climate‑change deniers tend to be not merely scientifically illiterate, but funded by CO2-emitting corporations). Anti-vaxxers are propelled by the lingering effect of a single discredited research report that continues to resonate with people predisposed to ‘alternative medicine’ and stubborn opposition to establishment wisdom. [Continue reading…]


New study indicates Earth’s inner core was formed 1-1.5 billion years ago

Phys.org reports: There have been many estimates for when the Earth’s inner core was formed, but scientists from the University of Liverpool have used new data which indicates that the Earth’s inner core was formed 1-1.5 billion years ago as it “froze” from the surrounding molten iron outer core.

The inner core is Earth’s deepest layer. It is a ball of solid iron, just larger than Pluto, surrounded by a liquid outer core. The inner core is a relatively recent addition to our planet, and establishing when it was formed is a topic of vigorous scientific debate, with estimates ranging from 0.5 billion to 2 billion years ago.

In a new study published in Nature, researchers from the University’s School of Environmental Sciences analysed magnetic records from ancient igneous rocks and found that there was a sharp increase in the strength of the Earth’s magnetic field between 1 and 1.5 billion years ago.

This increased magnetic field is a likely indication of the first occurrence of solid iron at Earth’s centre and the point in Earth’s history at which the solid inner core first started to “freeze” out from the cooling molten outer core.

Liverpool palaeomagnetism expert and the study’s lead author, Dr Andy Biggin, said: “This finding could change our understanding of the Earth’s interior and its history.” [Continue reading…]


There is no known physics theory that is true at every scale — there may never be

Lawrence M Krauss writes: Whenever you say anything about your daily life, a scale is implied. Try it out. “I’m too busy” only works for an assumed time scale: today, for example, or this week. Not this century or this nanosecond. “Taxes are onerous” only makes sense for a certain income range. And so on.

Surely the same restriction doesn’t hold true in science, you might say. After all, for centuries after the introduction of the scientific method, conventional wisdom held that there were theories that were absolutely true for all scales, even if we could never be empirically certain of this in advance. Newton’s universal law of gravity, for example, was, after all, universal! It applied to falling apples and falling planets alike, and accounted for every significant observation made under the sun, and over it as well.

With the advent of relativity, and general relativity in particular, it became clear that Newton’s law of gravity was merely an approximation of a more fundamental theory. But the more fundamental theory, general relativity, was so mathematically beautiful that it seemed reasonable to assume that it codified perfectly and completely the behavior of space and time in the presence of mass and energy.

The advent of quantum mechanics changed everything. When quantum mechanics is combined with relativity, it turns out, rather unexpectedly in fact, that the detailed nature of the physical laws that govern matter and energy actually depend on the physical scale at which you measure them. This led to perhaps the biggest unsung scientific revolution in the 20th century: We know of no theory that both makes contact with the empirical world, and is absolutely and always true. [Continue reading…]


Why should we place our faith in science?

By Jonathan Keith, Monash University

Most of us would like to think scientific debate does not operate like the comments section of online news articles. These are frequently characterised by inflexibility, truculence and expostulation. Scientists are generally a little more civil, but sometimes not by much!

There is a more fundamental issue here than politeness, though. Science has a reputation as an arbiter of fact above and beyond just personal opinion or bias. The term “scientific method” suggests there exists an agreed upon procedure for processing evidence which, while not infallible, is at least impartial.

So when even the most respected scientists can arrive at different, deeply held convictions when presented with the same evidence, it undermines the perceived impartiality of the scientific method. It demonstrates that science involves an element of subjective or personal judgement.

Yet personal judgements are not mere occasional intruders on science; they are a necessary part of almost every step of reasoning about evidence.

Continue reading


The climate story nobody talks about

Adam Frank writes: On Nov. 30, world leaders will gather in Paris for a pivotal United Nations conference on climate change.

Given its importance, I want to use the next couple of months to explore some alternative perspectives on the unruly aggregate of topics lumped together as “climate change.”

There is an urgent demand for such alternative narratives and it rises, in part, from the ridiculous stalemate we find ourselves in today. But the endless faux “debate” about the state of climate science also obscures a deeper — and more profound — reality: We’ve become a species of enormous capacities with the power to change an entire planet. So, what exactly does this mean?

In service of answering this question and looking for perspectives on climate change beyond the usual focus on controversy, let’s begin by acknowledging a single fact that’s rarely discussed in the media: Climate science is a triumph of human civilization.

Landing on the moon. The development of relativity theory. The discovery of DNA. We rightfully hail these accomplishments as testaments to the creative power of the human imagination. We point to them as the highest achievements of our species, calling them milestones in our collective evolution.

But climate science is no different. It, too, belongs in that short list of epoch-making human efforts. [Continue reading…]


Over half of psychology studies fail reproducibility test

Nature reports: Don’t trust everything you read in the psychology literature. In fact, two thirds of it should probably be distrusted.

In the biggest project of its kind, Brian Nosek, a social psychologist and head of the Center for Open Science in Charlottesville, Virginia, and 269 co-authors repeated work reported in 98 original papers from three psychology journals, to see if they independently came up with the same results.

The studies they took on ranged from whether expressing insecurities perpetuates them to differences in how children and adults respond to fear stimuli, to effective ways to teach arithmetic. [Continue reading…]


Landmark discoveries that were later debunked

Shannon Hall writes: It begins with the smallest anomaly. The first exoplanets were the slightest shifts in a star’s light. The Higgs boson was just a bump in the noise. And the Big Bang sprang from a few rapidly moving galaxies that should have been staying put. Great scientific discoveries are born from puny signals that prompt attention.

And now, another tantalizing result is gathering steam, stirring the curiosity of physicists worldwide. It’s a bump in the data gathered by the Large Hadron Collider (LHC), the world’s most powerful particle accelerator. If the bump matures into a clearer peak during the LHC’s second run, it could indicate the existence of a new, unexpected particle that’s 2,000 times heavier than the proton. Ultimately, it could provoke a major update to our understanding of physics.

Or it could simply be a statistical fluke, doomed to disappear over time. But the bump currently has a significance level of three sigma, meaning that this little guy just might be here to stay. The rule of thumb in physics is that a one-sigma result could easily be due to random fluctuations, like the fair coin that flipped tails twice. A three-sigma result counts as an observation, worth discussing and publishing. But for physicists to proclaim a discovery, a finding that rewrites textbooks, a result has to be at the five-sigma level. At that point, the chance of the signal arising randomly is about one in 3.5 million.
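To see what those thresholds mean numerically, here is a quick sketch (the standard one-sided tail probabilities of a normal distribution, computed with SciPy; my own addition, not part of the article):

```python
# Tail probability of a fluctuation at least n sigma above the mean,
# under a standard normal distribution (one-sided convention).

from scipy.stats import norm

for sigma in (1, 3, 5):
    p = norm.sf(sigma)  # survival function: P(Z >= sigma)
    print(f"{sigma} sigma: p = {p:.2e}  (about 1 in {1 / p:,.0f})")

# Output, approximately:
# 1 sigma: p = 1.59e-01  (about 1 in 6)
# 3 sigma: p = 1.35e-03  (about 1 in 741)
# 5 sigma: p = 2.87e-07  (about 1 in 3,488,556)
```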

There’s no knowing if the LHC researchers’ new finding is real until they gather more data. And even bigger would-be discoveries — those with five-sigma results and better — have led physicists astray before, raising hopes for new insights into the Universe before being disproved by other data. When pushing the very limits of what we can possibly measure, false positives are always a danger. Here are five examples where seemingly solid findings came undone. [Continue reading…]


No, the Earth is not heading for a ‘mini ice age’

Eric Holthaus writes: A new study and related press release from the Royal Astronomical Society is making the rounds in recent days, claiming that a new statistical analysis of sunspot cycles shows “solar activity will fall by 60 per cent during the 2030s” to a level that last occurred during the so-called Little Ice Age, which ended 300 years ago.

Since climate change deniers have a particular fascination with sunspot cycles, this story has predictably been picked up by all manner of conservative news media, with a post in the Telegraph quickly gathering up tens of thousands of shares. The only problem is, it’s a wildly inaccurate reading of the research.

Sunspots have been observed on a regular basis for at least 400 years, and over that period, there’s a weak correlation between the number of sunspots and global temperature — most notably during a drastic downturn in the number of sunspots from about 1645 to 1715. Known as the Maunder minimum, this phenomenon happened about the same time as a decades-long European cold snap known as the Little Ice Age. That connection led to the theory that this variability remains the dominant factor in Earth’s climate. Though that idea is still widely circulated, it’s been disproved. In reality, sunspots fluctuate in an 11-year cycle, and the current cycle is the weakest in 100 years — yet 2014 was the planet’s hottest year in recorded history. [Continue reading…]


The risks that GMOs may pose to the global ecosystem

Mark Spitznagel and Nassim Nicholas Taleb, who both anticipated the failure of the financial system in 2007, see eerie parallels in the reasoning being used by those who believed in stability then and those who insist now that there are no significant risks involved in the promotion of genetically modified organisms (GMOs).

Spitznagel and Taleb write: First, there has been a tendency to label anyone who dislikes G.M.O.s as anti-science — and put them in the anti-antibiotics, antivaccine, even Luddite category. There is, of course, nothing scientific about the comparison. Nor is the scholastic invocation of a “consensus” a valid scientific argument.

Interestingly, there are similarities between arguments that are pro-G.M.O. and snake oil, the latter having relied on a cosmetic definition of science. The charge of “therapeutic nihilism” was leveled at people who contested snake oil medicine at the turn of the 20th century. (At that time, anything with the appearance of sophistication was considered “progress.”)

Second, we are told that a modified tomato is not different from a naturally occurring tomato. That is wrong: The statistical mechanism by which a tomato was built by nature is bottom-up, by tinkering in small steps (as with the restaurant business, distinct from contagion-prone banks). In nature, errors stay confined and, critically, isolated.

Third, the technological salvation argument we faced in finance is also present with G.M.O.s, which are intended to “save children by providing them with vitamin-enriched rice.” The argument’s flaw is obvious: In a complex system, we do not know the causal chain, and it is better to solve a problem by the simplest method, and one that is unlikely to cause a bigger problem.

Fourth, by leading to monoculture — which is the same in finance, where all risks became systemic — G.M.O.s threaten more than they can potentially help. Ireland’s population was decimated by the effect of monoculture during the potato famine. Just consider that the same can happen at a planetary scale.

Fifth, and what is most worrisome, is that the risks of G.M.O.s are more severe than those of finance. They can lead to complex chains of unpredictable changes in the ecosystem, while the methods of risk management with G.M.O.s — unlike finance, where some effort was made — are not even primitive.

The G.M.O. experiment, carried out in real time and with our entire food and ecological system as its laboratory, is perhaps the greatest case of human hubris ever. It creates yet another systemic, “too big to fail” enterprise — but one for which no bailouts will be possible when it fails. [Continue reading…]
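One way to picture the bottom-up and monoculture arguments above (the second and fourth points) is with a toy simulation. This is my own hypothetical illustration, not Spitznagel and Taleb's model; the farm count and failure probability are invented numbers:

```python
# Many independent small bets fail in isolation; one design copied
# everywhere fails everywhere at once, with the same average loss.

import random

N_FARMS, P_FAILURE = 10_000, 0.01  # hypothetical numbers, for illustration

def bottom_up_failures() -> int:
    """Each farm plants its own variety; crop failures stay confined."""
    return sum(random.random() < P_FAILURE for _ in range(N_FARMS))

def monoculture_failures() -> int:
    """Every farm plants the same variety; one draw decides all of them."""
    return N_FARMS if random.random() < P_FAILURE else 0

random.seed(0)
print("bottom-up:  ", bottom_up_failures())    # ~100 farms lost, every year
print("monoculture:", monoculture_failures())  # usually 0, occasionally all 10,000
```

Both regimes lose about 1 percent of farms on average, but only the monoculture can lose everything at once, which is the systemic risk the authors are pointing at.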


On the value of not knowing everything

James McWilliams writes: In January 2010, while driving from Chicago to Minneapolis, Sam McNerney played an audiobook and had an epiphany. The book was Jonah Lehrer’s How We Decide, and the epiphany was that consciousness could reside in the brain. The quest for an empirical understanding of consciousness has long preoccupied neurobiologists. But McNerney was no neurobiologist. He was a twenty-year-old philosophy major at Hamilton College. The standard course work — ancient, modern, and contemporary philosophy — enthralled him. But after this drive, after he listened to Lehrer, something changed. “I had to rethink everything I knew about everything,” McNerney said.

Lehrer’s publisher later withdrew How We Decide for inaccuracies. But McNerney was mentally galvanized for good reason. He had stumbled upon what philosophers call the “Hard Problem” — the quest to understand the enigma of the gap between mind and body. Intellectually speaking, what McNerney experienced was like diving for a penny in a pool and coming up with a gold nugget.

The philosopher Thomas Nagel drew popular attention to the Hard Problem four decades ago in an influential essay titled “What Is It Like to Be a Bat?” Frustrated with the “recent wave of reductionist euphoria,” Nagel challenged the reductive conception of mind — the idea that consciousness resides as a physical reality in the brain — by highlighting the radical subjectivity of experience. His main premise was that “an organism has conscious mental states if and only if there is something that it is like to be that organism.”

If that idea seems elusive, consider it this way: A bat has consciousness only if there is something that it is like for that bat to be a bat. Sam has consciousness only if there is something it is like for Sam to be Sam. You have consciousness only if there is something that it is like for you to be you (and you know that there is). And here’s the key to all this: Whatever that “like” happens to be, according to Nagel, it necessarily defies empirical verification. You can’t put your finger on it. It resists physical accountability.

McNerney returned to Hamilton intellectually turbocharged. This was an idea worth pondering. “It took hold of me,” he said. “It chose me — I know you hear that a lot, but that’s how it felt.” He arranged to do research in cognitive science as an independent study project with Russell Marcus, a trusted professor. Marcus let him loose to write what McNerney calls “a seventy-page hodgepodge of psychological research and philosophy and everything in between.” Marcus remembered the project more charitably, as “a huge, ambitious, wide-ranging, smart, and engaging paper.” Once McNerney settled into his research, Marcus added, “it was like he had gone into a phone booth and come out as a super-student.”

When he graduated in 2011, McNerney was proud. “I pulled it off,” he said about earning a degree in philosophy. Not that he had any hard answers to any big problems, much less the Hard Problem. Not that he had a job. All he knew was that he “wanted to become the best writer and thinker I could be.”

So, as one does, he moved to New York City.

McNerney is the kind of young scholar adored by the humanities. He’s inquisitive, open-minded, thrilled by the world of ideas, and touched with a tinge of old-school transcendentalism. What Emerson said of Thoreau — “he declined to give up his large ambition of knowledge and action for any narrow craft or profession” — is certainly true of McNerney. [Continue reading…]


The complexity of science

Leonard Mlodinow writes: The other week I was working in my garage office when my 14-year-old daughter, Olivia, came in to tell me about Charles Darwin. Did I know that he discovered the theory of evolution after studying finches on the Galápagos Islands? I was steeped in what felt like the 37th draft of my new book, which is on the development of scientific ideas, and she was proud to contribute this tidbit of history that she had just learned in class.

Sadly, like many stories of scientific discovery, that commonly recounted tale, repeated in her biology textbook, is not true.

The popular history of science is full of such falsehoods. In the case of evolution, Darwin was a much better geologist than ornithologist, at least in his early years. And while he did notice differences among the birds (and tortoises) on the different islands, he didn’t think them important enough to make a careful analysis. His ideas on evolution did not come from the mythical Galápagos epiphany, but evolved through many years of hard work, long after he had returned from the voyage. (To get an idea of the effort involved in developing his theory, consider this: One byproduct of his research was a 684-page monograph on barnacles.)

The myth of the finches obscures the qualities that were really responsible for Darwin’s success: the grit to formulate his theory and gather evidence for it; the creativity to seek signs of evolution in existing animals, rather than, as others did, in the fossil record; and the open-mindedness to drop his belief in creationism when the evidence against it piled up.

The mythical stories we tell about our heroes are always more romantic and often more palatable than the truth. But in science, at least, they are destructive, in that they promote false conceptions of the evolution of scientific thought. [Continue reading…]


Chain reactions spreading ideas through science and culture

David Krakauer writes: On Dec. 2, 1942, just over three years into World War II, President Roosevelt was sent the following enigmatic cable: “The Italian navigator has landed in the new world.” The accomplishments of Christopher Columbus had long since ceased to be newsworthy. The progress of the Italian physicist, Enrico Fermi, navigator across the territories of Lilliputian matter — the abode of the microcosm of the atom — was another thing entirely. Fermi’s New World, discovered beneath a Midwestern football field in Chicago, was the province of newly synthesized radioactive elements. And Fermi’s landing marked the earliest sustained and controlled nuclear chain reaction required for the construction of an atomic bomb.

This physical chain reaction was one link in a series of scientific and cultural chain reactions initiated by the Hungarian physicist Leó Szilárd. The first was in 1933, when Szilárd proposed the idea of a neutron chain reaction. Another was in 1939, when Szilárd and Einstein sent the now famous “Szilárd-Einstein” letter to Franklin D. Roosevelt informing him of the destructive potential of atomic chain reactions: “This new phenomenon would also lead to the construction of bombs, and it is conceivable — though much less certain — that extremely powerful bombs of a new type may thus be constructed.”

This scientific information in turn generated political and policy chain reactions: Roosevelt created the Advisory Committee on Uranium which led in yearly increments to the National Defense Research Committee, the Office of Scientific Research and Development, and finally, the Manhattan Project.

Life itself is a chain reaction. Consider a cell that divides into two cells and then four and then eight great-granddaughter cells. Infectious diseases are chain reactions. Consider a contagious virus that infects one host that infects two or more susceptible hosts, in turn infecting further hosts. News is a chain reaction. Consider a report spread from one individual to another, who in turn spreads the message to their friends and then on to the friends of friends.

These numerous connections that fasten together events are like expertly arranged dominoes of matter, life, and culture. As the modernist designer Charles Eames would have it, “Eventually everything connects — people, ideas, objects. The quality of the connections is the key to quality per se.”

Dominoes, atoms, life, infection, and news — all yield domino effects that require a sensitive combination of distances between pieces, physics of contact, and timing. When any one of these ingredients is off-kilter, the propagating cascade is likely to come to a halt. Premature termination is exactly what we might want to happen to a deadly infection, but it is the last thing that we want to impede an idea. [Continue reading…]
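The sensitivity Krakauer describes is the textbook behavior of a branching process: if each event triggers R successors on average, chains with R below one fizzle out, while chains with R above one can run away. Here is a minimal simulation of my own (not from the article; the offspring rule is an assumption chosen for simplicity):

```python
# Branching-process sketch: each event triggers two successors with
# probability r/2, so the mean number of successors per event is r.
# Subcritical chains (r < 1) die out; supercritical ones can explode.

import random

def cascade_size(r: float, cap: int = 100_000) -> int:
    """Total number of events in one chain reaction, capped for safety."""
    active, total = 1, 1
    while active and total < cap:
        successors = sum(2 for _ in range(active) if random.random() < r / 2)
        active = successors
        total += successors
    return total

random.seed(1)
for r in (0.8, 1.2):
    sizes = sorted(cascade_size(r) for _ in range(1_000))
    print(f"r = {r}: median size {sizes[500]}, largest {sizes[-1]}")
```

With r = 0.8 nearly every cascade halts after a handful of events; with r = 1.2 a substantial fraction run away to the cap. That runaway regime is what Fermi had to control, what epidemiologists try to push below one, and what anyone spreading an idea hopes to exceed.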
