Katherine W. Phillips writes: The first thing to acknowledge about diversity is that it can be difficult. In the U.S., where the dialogue of inclusion is relatively advanced, even the mention of the word “diversity” can lead to anxiety and conflict. Supreme Court justices disagree on the virtues of diversity and the means for achieving it. Corporations spend billions of dollars to attract and manage diversity both internally and externally, yet they still face discrimination lawsuits, and the leadership ranks of the business world remain predominantly white and male.
It is reasonable to ask what good diversity does us. Diversity of expertise confers benefits that are obvious — you would not think of building a new car without engineers, designers and quality-control experts — but what about social diversity? What good comes from diversity of race, ethnicity, gender and sexual orientation? Research has shown that social diversity in a group can cause discomfort, rougher interactions, a lack of trust, greater perceived interpersonal conflict, lower communication, less cohesion, more concern about disrespect, and other problems. So what is the upside?
The fact is that if you want to build teams or organizations capable of innovating, you need diversity. Diversity enhances creativity. It encourages the search for novel information and perspectives, leading to better decision making and problem solving. Diversity can improve the bottom line of companies and lead to unfettered discoveries and breakthrough innovations. Even simply being exposed to diversity can change the way you think. This is not just wishful thinking: it is the conclusion I draw from decades of research from organizational scientists, psychologists, sociologists, economists and demographers.
The universe is an astonishingly secretive place. Mysterious substances known as dark matter and dark energy account for some 95% of it. Despite huge effort to find out what they are, we simply don’t know.
We know dark matter exists because of the gravitational pull of galaxy clusters – the matter we can see in a cluster just isn’t enough to hold it together by gravity. So there must be some extra material there, made up of unknown particles that simply aren’t visible to us. Several candidate particles have already been proposed.
Scientists are trying to work out what these unknown particles are by looking at how they affect the ordinary matter we see around us. But so far this has proven difficult, which tells us that dark matter interacts with normal matter only weakly, at best. Now my colleague Benjamin Varcoe and I have come up with a new way to probe dark matter that may just prove successful: by using atoms that have been stretched to 4,000 times their usual size.
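The “stretched” atoms in question are highly excited Rydberg atoms, whose orbital radius grows roughly as the square of the principal quantum number n (r ≈ n²·a₀, with a₀ the Bohr radius). A quick back-of-the-envelope check, my own illustration rather than anything from the article, finds the smallest n that yields the 4,000-fold size increase mentioned above:

```python
# Rydberg atom size scaling: orbital radius ~ n^2 * a0 (Bohr radius).
# Find the smallest principal quantum number n whose orbit is at
# least 4,000 times larger than the ground state (n = 1).
A0 = 5.29177e-11  # Bohr radius in metres

def rydberg_radius(n):
    """Approximate orbital radius of a hydrogen-like Rydberg state."""
    return n ** 2 * A0

target = 4000 * rydberg_radius(1)
n = 1
while rydberg_radius(n) < target:
    n += 1

print(n)                       # smallest n giving a >= 4,000x increase
print(rydberg_radius(n) / A0)  # actual scale factor at that n
```

Since 63² = 3,969 falls just short, the loop lands on n = 64, an excitation level routinely reached in Rydberg-atom experiments.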
Henry Cowles writes: There is a theory in psychology called the theory theory. It’s a theory about theories. While this might sound obvious, the theory theory leads to counterintuitive conclusions. A quarter-century ago, psychologists began to point out important links between the development of scientific theories and how everyday thinking, including children’s thinking, works. According to theory theorists, a child learns by constructing a theory of the world and testing it against experience. In this sense, children are little scientists – they hypothesise on the basis of observations, test their hypotheses experimentally, and then revise their views in light of the evidence they gather.
According to Alison Gopnik, a theory theorist at the University of California, Berkeley, the analogy works both ways. It’s not just that ‘children are little scientists’, she wrote in her paper ‘The Scientist as Child’ (1996), ‘but that scientists are big children.’ Depending on where you look, you can see the scientific method in a child, or spot the inner child in a scientist. Either way, the theory theory makes it easy to see connections between elementary learning and scientific theorising.
This should be pretty surprising. After all, scientists go through a lot of training in order to think the way they do. Their results are exact; their methods exacting. Most of us share the sense that scientific thinking is difficult, even for scientists. This perceived difficulty has bolstered (at least until recently) the collective respect for scientific expertise on which the support of cutting-edge research depends. It’s also what gives the theory theory its powerful punch. If science is so hard, how can children – and, some theory theorists argue, even infants – think like scientists in any meaningful sense? Indeed, in the age of what Erik M. Conway and Naomi Oreskes call “the merchants of doubt” (not to say in the age of Trump), isn’t it dangerous to suggest that science is a matter of child’s play?
To gain purchase on this question, let’s take a step back. Claims that children are scientists rest on a certain idea about what science is. For theory theorists – and for many of the rest of us – science is about producing theories. How we do that is often represented as a short list of steps, such as ‘observe’, ‘hypothesise’, and ‘test’, steps that have been emblazoned on posters and recited in debates for the past century. But where did this idea that science is a set of steps – a method – come from? As it turns out, we don’t need to go back to Isaac Newton or the Scientific Revolution to find the history of ‘the scientific method’ in this sense. The image of science that most of us hold, even most scientists, comes from a surprising place: modern child psychology. The scientific method as we know it today comes from psychological studies of children only a century ago. [Continue reading…]
Nathaniel Scharping writes: Ever since our ancestors cut rough paths through the wilderness, humanity has been laying down trails. From footpaths to highways, a global network of roads binds communities and facilitates the exchange of goods and ideas. But there is a flip side to this creeping tangle of pathways: The roads that bring us closer also serve to divide ecosystems into smaller parcels, turning vast expanses into a jigsaw of human mobility.
In a study published in Science, an international team of researchers attempted to quantify the extent to which roads have sliced up the globe. They used data from OpenStreetMap, a crowd-sourced mapping project, to chart how much land is covered by roads. For the purposes of their project, they defined a roadway as everything within a kilometer of the physical road itself (studies have shown measurable impacts on the environment extending out at least that far).
They estimated that roughly 20 percent of land is occupied by roads, not including Greenland and Antarctica. Although that leaves 80 percent as open space, this land is far from whole. Transected by highways and streets, the road-free areas are cut up into some 600,000 individual parcels. Half of these are less than a square mile, while only 7 percent span more than 60 square miles. The true impact of roads seems to be the gradual tessellation of once-cohesive landscapes. [Continue reading…]
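A 1-kilometre “road-effect zone” has an outsized footprint. As a toy illustration (my own, not the researchers’ method), sampling a 10 km × 10 km cell crossed by a single straight road shows the buffer alone claiming a fifth of the cell:

```python
# Toy illustration of a 1 km road-effect zone: take a 10 km x 10 km
# cell crossed by one straight road running along y = 5 km, sample it
# on a fine grid, and count sample points within 1 km of the road.
CELL_KM = 10.0
BUFFER_KM = 1.0
STEPS = 1000

inside = 0
for i in range(STEPS):
    y = (i + 0.5) * CELL_KM / STEPS        # midpoint sample y-coordinates
    if abs(y - CELL_KM / 2) <= BUFFER_KM:  # within the road-effect zone?
        inside += 1

print(inside / STEPS)  # fraction of the cell inside the buffer
```

One road is enough to place 20 percent of this hypothetical cell inside the zone; real landscapes, of course, are crossed by many.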
Ed Yong writes: In 1995, if you had told Toby Spribille that he’d eventually overthrow a scientific idea that’s been the stuff of textbooks for 150 years, he would have laughed at you. Back then, his life seemed constrained to a very different path. He was raised in a Montana trailer park, and home-schooled by what he now describes as a “fundamentalist cult.” At a young age, he fell in love with science, but had no way of feeding that love. He longed to break away from his roots and get a proper education.
At 19, he got a job at a local forestry service. Within a few years, he had earned enough to leave home. His meager savings and non-existent grades meant that no American university would take him, so Spribille looked to Europe.
Thanks to his family background, he could speak German, and he had heard that many universities there charged no tuition fees. His missing qualifications were still a problem, but one that the University of Göttingen decided to overlook. “They said that under exceptional circumstances, they could enroll a few people every year without transcripts,” says Spribille. “That was the bottleneck of my life.”
Throughout his undergraduate and postgraduate work, Spribille became an expert on the organisms that had grabbed his attention during his time in the Montana forests — lichens.
You’ve seen lichens before, but unlike Spribille, you may have ignored them. They grow on logs, cling to bark, smother stones. At first glance, they look messy and undeserving of attention. On closer inspection, they are astonishingly beautiful. They can look like flecks of peeling paint, or coralline branches, or dustings of powder, or lettuce-like fronds, or wriggling worms, or cups that a pixie might drink from. They’re also extremely tough. They grow in the most inhospitable parts of the planet, where no plant or animal can survive.
Lichens have an important place in biology. In the 1860s, scientists thought that they were plants. But in 1868, a Swiss botanist named Simon Schwendener revealed that they’re composite organisms, consisting of fungi that live in partnership with microscopic algae. This “dual hypothesis” was met with indignation: it went against the impetus to put living things in clear and discrete buckets. The backlash only collapsed when Schwendener and others, with good microscopes and careful hands, managed to tease the two partners apart.
Schwendener wrongly thought that the fungus had “enslaved” the alga, but others showed that the two cooperate. The alga uses sunlight to make nutrients for the fungus, while the fungus provides minerals, water, and shelter. This kind of mutually beneficial relationship was unheard of, and required a new word. Two Germans, Albert Frank and Anton de Bary, provided the perfect one — symbiosis, from the Greek for ‘together’ and ‘living’. [Continue reading…]
David Grinspoon writes: As a planetary astrobiologist, I am focused on the major transitions in planetary evolution and the evolving relationship between planets and life. The scientific community is converging on the idea that we have entered a new epoch of Earth history, one in which the net activity of humans has become an agent of global change as powerful as the great forces of nature that shape continents and propel the evolution of species. This concept has garnered a lot of attention, and justly so. Thinking about the new epoch – often called the Anthropocene, or the age of humanity – challenges us to look at ourselves in the mirror of deep time, measured not in centuries or even in millennia, but over millions and billions of years. And yet much of the recent discussion and debate over the Anthropocene still does not come to terms with its full meaning and importance.
Various markers have been proposed for the starting date of the Anthropocene, such as the rise in CO2, isotopes from nuclear tests, the ‘Columbian exchange’ of species between hemispheres when Europeans colonised the Americas, or more ancient human modifications of the landscape or climate. The question in play here is: when did our world gain a quality that is uniquely human? Many species have had a major influence on the globe, but they don’t each get their own planetary transition in the geologic timescale. When did humans begin changing things in a way that no other species has ever changed Earth before? Making massive changes in landscapes is not unique to us. Beavers do plenty of that, for example, when they build dams, alter streams, cut down forests and create new meadows. Even changing global climate and initiating mass extinction is not a human first. Photosynthetic bacteria did that some 2.5 billion years ago.
What distinguishes humans from other world-changing organisms must be related to our great cleverness and adaptability; the power that comes from communicating, planning and working in social groups; transmitting knowledge from one generation to the next; and applying these skills toward altering our surroundings and expanding our habitable domains. However, people have been engaged in these activities for tens of thousands of years, and have produced many different environmental modifications proposed as markers of the Anthropocene’s beginning. Therefore, those definitions strike me as incomplete. Until now, the people causing the disturbances had no way of recognising or even conceiving of a global change. Yes, humans have been altering our planet for millennia, but there is something going on now that was not happening when we started doing all that world-changing. [Continue reading…]
Steve Paulson writes: One gets the sense that Freeman Dyson has seen everything. It’s not just that at 92 he’s had a front row seat on scientific breakthroughs for the past century, or that he’s been friends and colleagues with many of the giants of 20th-century physics, from Hans Bethe and Wolfgang Pauli to Robert Oppenheimer and Richard Feynman. Dyson is one of the great sages of the science world. If you want to get a sense of where science has come from and where it might be headed, Dyson is your man.
Dyson grew up in England with a gift for numbers and calculating. During World War II, he worked with the British Royal Air Force to pinpoint bombing targets in Germany. After the war, he moved to the United States where he got to know many of the physicists who’d built the atomic bomb. Like a lot of scientists from that era, excitement over the bomb helped launch his career in physics, and later he dreamed of building a fleet of spaceships that would travel around the solar system, powered by nuclear bombs. Perhaps it’s no accident that Dyson became an outspoken critic of nuclear weapons during the Cold War.
For more than six decades, Princeton’s Institute for Advanced Study has been his intellectual home. Dyson has described himself as a fox rather than a hedgehog. He says scientists who jump from one project to the next have more fun. Though no longer an active scientist, he continues to track developments in science and technology. Dyson seems to be happy living in a universe filled with unanswered questions, and he likes the fact that physics has so far failed to unify the classical world of stars and the quantum world of atoms.
When I approached Dyson about an interview on the idea of the heroic in science, he responded, “I prefer telling stories to talking philosophy.” In the end, I got both stories and big ideas. Dyson isn’t shy about making sweeping pronouncements—whether on the archaic requirements of the Ph.D. system or the pitfalls of Big Science—but his manner is understated and his dry sense of humor is always just below the surface. [Continue reading…]
David Grinspoon writes: Can a planet be alive? Lynn Margulis, a giant of late 20th-century biology, who had an incandescent intellect that veered toward the unorthodox, thought so. She and chemist James Lovelock together theorized that life must be a planet-altering phenomenon and the distinction between the “living” and “nonliving” parts of Earth is not as clear-cut as we think. Many members of the scientific community derided their theory, called the Gaia hypothesis, as pseudoscience, and questioned their scientific integrity. But now Margulis and Lovelock may have their revenge. Recent scientific discoveries are giving us reason to take this hypothesis more seriously. At its core is an insight about the relationship between planets and life that has changed our understanding of both, and is shaping how we look for life on other worlds.
Studying Earth’s global biosphere together, Margulis and Lovelock realized that it has some of the properties of a life form. It seems to display “homeostasis,” or self‐regulation. Many of Earth’s life‐sustaining qualities exhibit remarkable stability. The temperature range of the climate; the oxygen content of the atmosphere; the pH, chemistry, and salinity of the ocean—all these are biologically mediated. All have, for hundreds of millions of years, stayed within a range where life can thrive. Lovelock and Margulis surmised that the totality of life is interacting with its environments in ways that regulate these global qualities. They recognized that Earth is, in a sense, a living organism. Lovelock named this creature Gaia.
Margulis and Lovelock showed that the Darwinian picture of biological evolution is incomplete. Darwin identified the mechanism by which life adapts due to changes in the environment, and thus allowed us to see that all life on Earth is a continuum, a proliferation, a genetic diaspora from a common root. In the Darwinian view, Earth was essentially a stage with a series of changing backdrops to which life had to adjust. Yet, what or who was changing the sets? Margulis and Lovelock proposed that the drama of life does not unfold on the stage of a dead Earth, but that, rather, the stage itself is animated, part of a larger living entity, Gaia, composed of the biosphere together with the “nonliving” components that shape, respond to, and cycle through the biota of Earth. Yes, life adapts to environmental change, shaping itself through natural selection. Yet life also pushes back and changes the environment, alters the planet. This is now as obvious as the air you are breathing, which has been oxygenated by life. So evolution is not a series of adaptations to inanimate events, but a system of feedbacks, an exchange. Life has not simply molded itself to the shifting contours of a dynamic Earth. Rather, life and Earth have shaped each other as they’ve co-evolved. When you start looking at the planet in this way, then you see coral reefs, limestone cliffs, deltas, bogs, and islands of bat guano as parts of this larger animated entity. You realize that the entire skin of Earth, and its depths as well, are indeed alive. [Continue reading…]
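The “homeostasis” at the heart of the Gaia hypothesis is a negative feedback loop, an idea Lovelock later dramatized in his Daisyworld model. The sketch below is a bare-bones homeostat of my own, not the actual Daisyworld model, and every number in it is invented for illustration: a biotic response proportional to the temperature error holds the climate near a habitable set point despite a steady external push.

```python
# Minimal negative-feedback sketch of Gaian "homeostasis" (inspired by
# Watson & Lovelock's Daisyworld; all parameters invented). Life reacts
# to temperature by adjusting the planet (e.g. its albedo), pulling the
# climate back toward a habitable set point as external forcing persists.
SET_POINT = 15.0   # "preferred" global temperature, deg C
GAIN = 0.5         # strength of the biotic feedback

def step(temp, forcing):
    """One time step: external forcing pushes temp up; biota push back."""
    biotic_response = GAIN * (SET_POINT - temp)  # cooling when too warm
    return temp + forcing + biotic_response

temp = 15.0
for year in range(200):
    temp = step(temp, forcing=0.02)  # steady warming push each step

print(round(temp, 2))  # settles at SET_POINT + forcing / GAIN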
Charles Leadbeater writes: Heidegger detested René Descartes’s dictum ‘I think, therefore I am’, which located the search for identity in our brains. There, it was secured by a rational process of thought, detached from a physical world that presented itself to the knowing subject as a puzzle to be solved. Descartes’s ideas launched a great inward turn in philosophy, with the subject at the centre of the drama confronting the objective world about which it tries to gain knowledge.
Had Heidegger ever come up with a saying to sum up his philosophy it would have been: ‘I dwell, therefore I am.’ For him, identity is bound up with being in the world, which in turn means having a place in it. We don’t live in the abstract space favoured by philosophers, but in a particular place, with specific features and history. We arrive already entangled with the world, not detached from it. Our identity is not secured just in our heads but through our bodies too, how we feel and how we are moved, literally and emotionally.
Instead of presenting it as a puzzle to be solved, Heidegger’s world is one we should immerse ourselves in and care for: it is part of the larger ‘being’ where we all belong. As [Jeff] Malpas puts it, Heidegger argues that we should release ourselves to the world, to find our part in its larger ebb and flow, rather than seek to detach ourselves from it in order to dominate it. [Continue reading…]
The Washington Post reports: In Ethiopia, she is known as “Dinkinesh” — Amharic for “you are marvelous.” It’s an apt name for one of the most complete ancient hominid skeletons ever found, an assemblage of fossilized bones that has given scientists unprecedented insight into the history of humanity.
You probably know her as Lucy.
Discovered in 1974, wedged into a gully in Ethiopia’s Awash Valley, the delicate, diminutive skeleton is both uncannily familiar and alluringly strange. In some ways, the 3.2-million-year-old Australopithecus was a lot like us; her hips, feet and long legs were clearly made for walking. But she also had long arms and dexterous curved fingers, much like modern apes that still swing from the trees.
So, for decades scientists have wondered: Who exactly was Lucy? Was she lumbering and land-bound, like us modern humans? Or did she retain some of the ancient climbing abilities that made her ancestors — and our own — champions of the treetops?
A new study suggests she was a little of both: Though her lower limbs were adapted for bipedalism, she had exceptionally strong arm bones that allowed her to haul herself up branches, researchers reported Wednesday in the journal PLoS One. [Continue reading…]
Live Science reports: After analyzing the crater left by the cosmic impact that ended the age of dinosaurs, scientists say in a new study that the object that smacked into the planet may have punched nearly all the way through Earth’s crust.
The finding could shed light on how impacts can reshape the faces of planets and how such collisions can generate new habitats for life, the researchers said.
Asteroids and comets occasionally pelt Earth’s surface. Still, for the most part, changes to the planet’s surface result largely from erosion due to rain and wind, “as well as plate tectonics, which generates mountains and ocean trenches,” said study co-author Sean Gulick, a marine geophysicist at the University of Texas at Austin.
In contrast, on the solar system’s other rocky planets, erosion and plate tectonics typically have little, if any, influence on the planetary surfaces. “The key driver of surface changes on those planets is constantly getting hit by stuff from space,” Gulick told Live Science. [Continue reading…]
Nathan Collins writes: With climate change and deforestation threatening biodiversity around the world, it’s fair to wonder just how rapidly threatened species have been declining, and when exactly those declines began. The answer is bleak: Among threatened vertebrates, rapid losses began in the late 19th century, and numbers have since declined by about 25 percent per decade, according to a new study.
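A loss of 25 percent per decade compounds brutally over a century. The back-of-the-envelope arithmetic below is my own, not a figure from the study, but it shows the scale of what such a sustained decline implies:

```python
# Back-of-the-envelope compounding (my own arithmetic, not a figure
# from the study): if threatened-vertebrate numbers fall about 25
# percent per decade starting in the late 19th century, and that loss
# compounds, only a few percent of the original populations remain.
DECLINE_PER_DECADE = 0.25
decades = 12  # roughly the 1890s through the 2010s

remaining = (1 - DECLINE_PER_DECADE) ** decades
print(round(remaining, 3))  # fraction of the original population left
```

Twelve decades of compounding leaves only about 3 percent of the starting population, which is why the researchers describe the trend as bleak.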
“Although preservation of biodiversity is vital to a sustainable human society, rapid population decline (RPD) continues to be widespread” across plant and animal populations, Haipeng Li and a team of Chinese and American biologists write in Proceedings of the National Academy of Sciences.
Understanding the severity and origins of these population losses could help conservationists protect endangered species and possibly help promote public awareness of the threat, the researchers argue. But there’s a problem: Good data on plant and animal population sizes only goes back about four decades, and populations surely declined prior to that.
Fortunately, modern biologists have a way to circumvent that: DNA. [Continue reading…]