Elizabeth Kolbert writes: In 1975, researchers at Stanford invited a group of undergraduates to take part in a study about suicide. They were presented with pairs of suicide notes. In each pair, one note had been composed by a random individual, the other by a person who had subsequently taken his own life. The students were then asked to distinguish between the genuine notes and the fake ones.
Some students discovered that they had a genius for the task. Out of twenty-five pairs of notes, they correctly identified the real one twenty-four times. Others discovered that they were hopeless. They identified the real note in only ten instances.
As is often the case with psychological studies, the whole setup was a put-on. Though half the notes were indeed genuine — they’d been obtained from the Los Angeles County coroner’s office — the scores were fictitious. The students who’d been told they were almost always right were, on average, no more discerning than those who had been told they were mostly wrong.
In the second phase of the study, the deception was revealed. The students were told that the real point of the experiment was to gauge their responses to thinking they were right or wrong. (This, it turned out, was also a deception.) Finally, the students were asked to estimate how many suicide notes they had actually categorized correctly, and how many they thought an average student would get right. At this point, something curious happened. The students in the high-score group said that they thought they had, in fact, done quite well — significantly better than the average student — even though, as they’d just been told, they had zero grounds for believing this. Conversely, those who’d been assigned to the low-score group said that they thought they had done significantly worse than the average student — a conclusion that was equally unfounded.
“Once formed,” the researchers observed dryly, “impressions are remarkably perseverant.” [Continue reading…]
New Scientist reports: It’s a hole in one! Bumblebees have learned to push a ball into a hole to get a reward, stretching what was thought possible for small-brained creatures.
Plenty of previous studies have shown that bees are no bumbling fools, but these have generally involved activities that are somewhat similar to their natural foraging behaviour.
For example, bees were able to learn to pull a string to reach an artificial flower containing sugar solution. Bees sometimes have to pull parts of flowers to access nectar, so this isn’t too alien to them.
So while these tasks might seem complex, they don’t really show a deeper level of learning, says Olli Loukola at Queen Mary University of London, an author of that study.
Loukola and his team decided the next challenge was whether bees could learn to move an object that was not attached to the reward.
They built a circular platform with a small hole in the centre filled with sugar solution, into which bees had to move a ball to get a reward. A researcher showed them how to do this by using a plastic bee on a stick to push the ball.
The researchers then took three groups of other bees and trained them in different ways. One group observed a previously trained bee solving the task; another was shown the ball moving into the hole, pulled by a hidden magnet; and a third group was given no demonstration, but was shown the ball already in the hole containing the reward.
The bees then did the task themselves. Those that had watched other bees do it were most successful and took less time than those in the other groups to solve the task. Bees given the magnetic demonstration were also more successful than those not given one. [Continue reading…]
Science News reports: Chimps with little social status influence their comrades’ behavior to a surprising extent, a new study suggests.
In groups of captive chimps, a method for snagging food from a box spread among many individuals who saw a low-ranking female peer demonstrate the technique, say primatologist Stuart Watson of the University of St. Andrews in Fife, Scotland, and colleagues. But in other groups where an alpha male introduced the same box-opening technique, relatively few chimps copied the behavior, the researchers report online February 7 in the American Journal of Primatology.
“I suspect that even wild chimpanzees are motivated to copy obviously rewarding behaviors of low-ranking individuals, but the limited spread of rewarding behaviors demonstrated by alpha males was quite surprising,” Watson says. Previous research has found that chimps in captivity more often copy rewarding behaviors of dominant versus lower-ranking group mates. The researchers don’t understand why in this case the high-ranking individuals weren’t copied as much. [Continue reading…]
Carlo Rovelli writes: According to tradition, in the year 450 BCE, a man embarked on a 400-mile sea voyage from Miletus in Anatolia to Abdera in Thrace, fleeing a prosperous Greek city that was suddenly caught up in political turmoil. It was to be a crucial journey for the history of knowledge. The traveller’s name was Leucippus; little is known about his life, but his intellectual spirit proved indelible. He wrote the book The Great Cosmology, in which he advanced new ideas about the transient and permanent aspects of the world. On his arrival in Abdera, Leucippus founded a scientific and philosophical school, to which he soon affiliated a young disciple, Democritus, who cast a long shadow over the thought of all subsequent times.
Together, these two thinkers built the majestic cathedral of ancient atomism. Leucippus was the teacher. Democritus, the great pupil who wrote dozens of works on every field of knowledge, was deeply venerated in antiquity, when those works were still known. ‘The most subtle of the Ancients,’ Seneca called him. ‘Who is there whom we can compare with him for the greatness, not merely of his genius, but also of his spirit?’ asks Cicero.
What Leucippus and Democritus had understood was that the world can be comprehended using reason. They had become convinced that the variety of natural phenomena must be attributable to something simple, and had tried to understand what this something might be. They had conceived of a kind of elementary substance from which everything was made. Anaximenes of Miletus had imagined this substance could compress and rarefy, thus transforming from one to another of the elements from which the world is constituted. It was a first germ of physics, rough and elementary, but in the right direction. An idea was needed, a great idea, a grand vision, to grasp the hidden order of the world. Leucippus and Democritus came up with this idea.
The idea of Democritus’s system is extremely simple: the entire universe is made up of a boundless space in which innumerable atoms run. Space is without limits; it has neither an above nor a below; it is without a centre or a boundary. Atoms have no qualities at all, apart from their shape. They have no weight, no colour, no taste. ‘Sweetness is opinion, bitterness is opinion; heat, cold and colour are opinion: in reality only atoms, and vacuum,’ said Democritus. Atoms are indivisible; they are the elementary grains of reality, which cannot be further subdivided, and everything is made of them. They move freely in space, colliding with one another; they hook on to and push and pull one another. Similar atoms attract one another and join.
This is the weave of the world. This is reality. Everything else is nothing but a by-product – random and accidental – of this movement, and this combining of atoms. The infinite variety of the substances of which the world is made derives solely from this combining of atoms. [Continue reading…]
Moheb Costandi writes: How do humans and other animals find their way from A to B? This apparently simple question has no easy answer. But after decades of extensive research, a picture of how the brain encodes space and enables us to navigate through it is beginning to emerge. Earlier, neuroscientists had found that the mammalian brain contains at least three different cell types, which cooperate to encode neural representations of an animal’s location and movements.
But that picture has just grown far more complex. New research now points to the existence of two more types of brain cells involved in spatial navigation — and suggests previously unrecognized neural mechanisms underlying the way mammals make their way about the world.
Earlier work, performed in freely moving rodents, revealed that neurons called place cells fire when an animal is in a specific location. Another type — grid cells — activate periodically as an animal moves around. Finally, head direction cells fire when a mouse or rat moves in a particular direction. Together, these cells, which are located in and around a deep brain structure called the hippocampus, appear to encode an animal’s current location within its environment by tracking the distance and direction of its movements.
This process is fine for simply moving around, but it does not explain exactly how a traveler gets to a specific destination. The question of how the brain encodes the endpoint of a journey has remained unanswered. To investigate this, Ayelet Sarel of the Weizmann Institute of Science in Israel and her colleagues trained three Egyptian fruit bats to fly in complicated paths and then land at a specific location where they could eat and rest. The researchers recorded the activity of a total of 309 hippocampal neurons with a wireless electrode array. About a third of these neurons exhibited the characteristics of place cells, each of them firing only when the bat was in a specific area of the large flight room. But the researchers also identified 58 cells that fired only when the bats were flying directly toward the landing site. [Continue reading…]
Katherine W. Phillips writes: The first thing to acknowledge about diversity is that it can be difficult. In the U.S., where the dialogue of inclusion is relatively advanced, even the mention of the word “diversity” can lead to anxiety and conflict. Supreme Court justices disagree on the virtues of diversity and the means for achieving it. Corporations spend billions of dollars to attract and manage diversity both internally and externally, yet they still face discrimination lawsuits, and the leadership ranks of the business world remain predominantly white and male.
It is reasonable to ask what good diversity does us. Diversity of expertise confers benefits that are obvious — you would not think of building a new car without engineers, designers and quality-control experts — but what about social diversity? What good comes from diversity of race, ethnicity, gender and sexual orientation? Research has shown that social diversity in a group can cause discomfort, rougher interactions, a lack of trust, greater perceived interpersonal conflict, poorer communication, less cohesion, more concern about disrespect, and other problems. So what is the upside?
The fact is that if you want to build teams or organizations capable of innovating, you need diversity. Diversity enhances creativity. It encourages the search for novel information and perspectives, leading to better decision making and problem solving. Diversity can improve the bottom line of companies and lead to unfettered discoveries and breakthrough innovations. Even simply being exposed to diversity can change the way you think. This is not just wishful thinking: it is the conclusion I draw from decades of research from organizational scientists, psychologists, sociologists, economists and demographers.
The universe is an astonishingly secretive place. Mysterious substances known as dark matter and dark energy account for some 95% of it. Despite huge effort to find out what they are, we simply don’t know.
We know dark matter exists because of the gravitational pull of galaxy clusters – the matter we can see in a cluster just isn’t enough to hold it together by gravity. So there must be some extra material there, made up of unknown particles that simply aren’t visible to us. Several candidate particles have already been proposed.
Scientists are trying to work out what these unknown particles are by looking at how they affect the ordinary matter we see around us. But so far these efforts have come up empty-handed, which tells us that dark matter interacts with normal matter only weakly, at best. Now my colleague Benjamin Varcoe and I have come up with a new way to probe dark matter that may just prove successful: by using atoms that have been stretched to 4,000 times their usual size.
Henry Cowles writes: There is a theory in psychology called the theory theory. It’s a theory about theories. While this might sound obvious, the theory theory leads to counterintuitive conclusions. A quarter-century ago, psychologists began to point out important links between the development of scientific theories and how everyday thinking, including children’s thinking, works. According to theory theorists, a child learns by constructing a theory of the world and testing it against experience. In this sense, children are little scientists – they hypothesise on the basis of observations, test their hypotheses experimentally, and then revise their views in light of the evidence they gather.
According to Alison Gopnik, a theory theorist at the University of California, Berkeley, the analogy works both ways. It’s not just that ‘children are little scientists’, she wrote in her paper ‘The Scientist as Child’ (1996), ‘but that scientists are big children.’ Depending on where you look, you can see the scientific method in a child, or spot the inner child in a scientist. Either way, the theory theory makes it easy to see connections between elementary learning and scientific theorising.
This should be pretty surprising. After all, scientists go through a lot of training in order to think the way they do. Their results are exact; their methods exacting. Most of us share the sense that scientific thinking is difficult, even for scientists. This perceived difficulty has bolstered (at least until recently) the collective respect for scientific expertise on which the support of cutting-edge research depends. It’s also what gives the theory theory its powerful punch. If science is so hard, how can children – and, some theory theorists argue, even infants – think like scientists in any meaningful sense? Indeed, in the age of what Erik M. Conway and Naomi Oreskes call “the merchants of doubt” (not to say in the age of Trump), isn’t it dangerous to suggest that science is a matter of child’s play?
To gain purchase on this question, let’s take a step back. Claims that children are scientists rest on a certain idea about what science is. For theory theorists – and for many of the rest of us – science is about producing theories. How we do that is often represented as a short list of steps, such as ‘observe’, ‘hypothesise’, and ‘test’, steps that have been emblazoned on posters and recited in debates for the past century. But where did this idea that science is a set of steps – a method – come from? As it turns out, we don’t need to go back to Isaac Newton or the Scientific Revolution to find the history of ‘the scientific method’ in this sense. The image of science that most of us hold, even most scientists, comes from a surprising place: modern child psychology. The scientific method as we know it today comes from psychological studies of children only a century ago. [Continue reading…]
Nathaniel Scharping writes: Ever since our ancestors cut rough paths through the wilderness, humanity has been laying down trails. From footpaths to highways, a global network of roads binds communities and facilitates the exchange of goods and ideas. But there is a flip side to this creeping tangle of pathways: the roads that bring us closer together also serve to divide ecosystems into smaller parcels, turning vast expanses into a jigsaw of human mobility.
In a study published in Science, an international team of researchers attempted to quantify the extent to which roads have sliced up the globe. They used data from OpenStreetMap, a crowd-sourced mapping project, to chart how much land is covered by roads. For the purposes of their project, they defined a roadway as everything within a kilometer of the physical road itself (studies have shown measurable impacts on the environment extending out at least that far).
They estimated that roughly 20 percent of land is occupied by roads, not including Greenland and Antarctica. Although that leaves 80 percent as open space, this land is far from whole. Transected by highways and streets, the road-free areas are cut up into some 600,000 individual parcels. Half of these are less than a square mile, while only 7 percent span more than 60 square miles. The true impact of roads seems to be the gradual tessellation of once-cohesive landscapes. [Continue reading…]
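The parcel statistics above amount to a simple summary computation over the areas of road-free patches. As a purely illustrative Python sketch — using invented toy numbers, not the study’s actual data or code — it might look like this:

```python
# Illustrative sketch only: given a list of road-free parcel areas (sq. miles)
# and the total land area, report the kind of fragmentation statistics the
# Science study describes — the road-free share of land, the parcel count, and
# the fractions of parcels under 1 sq. mile and over 60 sq. miles.
# All numbers here are invented for demonstration.

def fragmentation_summary(parcel_areas_sq_mi, total_land_sq_mi):
    road_free = sum(parcel_areas_sq_mi)
    n = len(parcel_areas_sq_mi)
    return {
        "road_free_share": road_free / total_land_sq_mi,
        "parcel_count": n,
        "under_1_sq_mi": sum(a < 1 for a in parcel_areas_sq_mi) / n,
        "over_60_sq_mi": sum(a > 60 for a in parcel_areas_sq_mi) / n,
    }

# Toy example: six road-free parcels on 100 sq. miles of land.
stats = fragmentation_summary([0.5, 0.8, 2, 5, 70, 1.7], 100.0)
```

The real analysis, of course, works on OpenStreetMap geometries with a 1-kilometer buffer around each road; this sketch only shows how the headline percentages summarize a parcel-size distribution.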
Ed Yong writes: In 1995, if you had told Toby Spribille that he’d eventually overthrow a scientific idea that’s been the stuff of textbooks for 150 years, he would have laughed at you. Back then, his life seemed constrained to a very different path. He was raised in a Montana trailer park, and home-schooled by what he now describes as a “fundamentalist cult.” At a young age, he fell in love with science, but had no way of feeding that love. He longed to break away from his roots and get a proper education.
At 19, he got a job at a local forestry service. Within a few years, he had earned enough to leave home. His meager savings and non-existent grades meant that no American university would take him, so Spribille looked to Europe.
Thanks to his family background, he could speak German, and he had heard that many universities there charged no tuition fees. His missing qualifications were still a problem, but one that the University of Göttingen decided to overlook. “They said that under exceptional circumstances, they could enroll a few people every year without transcripts,” says Spribille. “That was the bottleneck of my life.”
Throughout his undergraduate and postgraduate work, Spribille became an expert on the organisms that had grabbed his attention during his time in the Montana forests — lichens.
You’ve seen lichens before, but unlike Spribille, you may have ignored them. They grow on logs, cling to bark, smother stones. At first glance, they look messy and undeserving of attention. On closer inspection, they are astonishingly beautiful. They can look like flecks of peeling paint, or coralline branches, or dustings of powder, or lettuce-like fronds, or wriggling worms, or cups that a pixie might drink from. They’re also extremely tough. They grow in the most inhospitable parts of the planet, where no plant or animal can survive.
Lichens have an important place in biology. In the 1860s, scientists thought that they were plants. But in 1868, a Swiss botanist named Simon Schwendener revealed that they’re composite organisms, consisting of fungi that live in partnership with microscopic algae. This “dual hypothesis” was met with indignation: it went against the impulse to put living things in clear and discrete buckets. The backlash only collapsed when Schwendener and others, with good microscopes and careful hands, managed to tease the two partners apart.
Schwendener wrongly thought that the fungus had “enslaved” the alga, but others showed that the two cooperate. The alga uses sunlight to make nutrients for the fungus, while the fungus provides minerals, water, and shelter. This kind of mutually beneficial relationship was unheard of, and required a new word. Two Germans, Albert Frank and Anton de Bary, provided the perfect one — symbiosis, from the Greek for ‘together’ and ‘living’. [Continue reading…]
David Grinspoon writes: As a planetary astrobiologist, I am focused on the major transitions in planetary evolution and the evolving relationship between planets and life. The scientific community is converging on the idea that we have entered a new epoch of Earth history, one in which the net activity of humans has become an agent of global change as powerful as the great forces of nature that shape continents and propel the evolution of species. This concept has garnered a lot of attention, and justly so. Thinking about the new epoch – often called the Anthropocene, or the age of humanity – challenges us to look at ourselves in the mirror of deep time, measured not in centuries or even in millennia, but over millions and billions of years. And yet much of the recent discussion and debate over the Anthropocene still does not come to terms with its full meaning and importance.
Various markers have been proposed for the starting date of the Anthropocene, such as the rise in CO2, isotopes from nuclear tests, the ‘Columbian exchange’ of species between hemispheres when Europeans colonised the Americas, or more ancient human modifications of the landscape or climate. The question in play here is: when did our world gain a quality that is uniquely human? Many species have had a major influence on the globe, but they don’t each get their own planetary transition in the geologic timescale. When did humans begin changing things in a way that no other species has ever changed Earth before? Making massive changes in landscapes is not unique to us. Beavers do plenty of that, for example, when they build dams, alter streams, cut down forests and create new meadows. Even changing global climate and initiating mass extinction is not a human first. Photosynthetic bacteria did that some 2.5 billion years ago.
What distinguishes humans from other world-changing organisms must be related to our great cleverness and adaptability; the power that comes from communicating, planning and working in social groups; transmitting knowledge from one generation to the next; and applying these skills toward altering our surroundings and expanding our habitable domains. However, people have been engaged in these activities for tens of thousands of years, and have produced many different environmental modifications proposed as markers of the Anthropocene’s beginning. Therefore, those definitions strike me as incomplete. Until now, the people causing the disturbances had no way of recognising or even conceiving of a global change. Yes, humans have been altering our planet for millennia, but there is something going on now that was not happening when we started doing all that world-changing. [Continue reading…]
Steve Paulson writes: One gets the sense that Freeman Dyson has seen everything. It’s not just that at 92 he’s had a front row seat on scientific breakthroughs for the past century, or that he’s been friends and colleagues with many of the giants of 20th-century physics, from Hans Bethe and Wolfgang Pauli to Robert Oppenheimer and Richard Feynman. Dyson is one of the great sages of the science world. If you want to get a sense of where science has come from and where it might be headed, Dyson is your man.
Dyson grew up in England with a gift for numbers and calculating. During World War II, he worked with the British Royal Air Force to pinpoint bombing targets in Germany. After the war, he moved to the United States where he got to know many of the physicists who’d built the atomic bomb. Like a lot of scientists from that era, excitement over the bomb helped launch his career in physics, and later he dreamed of building a fleet of spaceships that would travel around the solar system, powered by nuclear bombs. Perhaps it’s no accident that Dyson became an outspoken critic of nuclear weapons during the Cold War.
For more than six decades, Princeton’s Institute for Advanced Study has been his intellectual home. Dyson has described himself as a fox rather than a hedgehog. He says scientists who jump from one project to the next have more fun. Though no longer an active scientist, he continues to track developments in science and technology. Dyson seems to be happy living in a universe filled with unanswered questions, and he likes the fact that physics has so far failed to unify the classical world of stars and the quantum world of atoms.
When I approached Dyson about an interview on the idea of the heroic in science, he responded, “I prefer telling stories to talking philosophy.” In the end, I got both stories and big ideas. Dyson isn’t shy about making sweeping pronouncements—whether on the archaic requirements of the Ph.D. system or the pitfalls of Big Science—but his manner is understated and his dry sense of humor is always just below the surface. [Continue reading…]
David Grinspoon writes: Can a planet be alive? Lynn Margulis, a giant of late 20th-century biology, who had an incandescent intellect that veered toward the unorthodox, thought so. She and chemist James Lovelock together theorized that life must be a planet-altering phenomenon and the distinction between the “living” and “nonliving” parts of Earth is not as clear-cut as we think. Many members of the scientific community derided their theory, called the Gaia hypothesis, as pseudoscience, and questioned their scientific integrity. But now Margulis and Lovelock may have their revenge. Recent scientific discoveries are giving us reason to take this hypothesis more seriously. At its core is an insight about the relationship between planets and life that has changed our understanding of both, and is shaping how we look for life on other worlds.
Studying Earth’s global biosphere together, Margulis and Lovelock realized that it has some of the properties of a life form. It seems to display “homeostasis,” or self-regulation. Many of Earth’s life-sustaining qualities exhibit remarkable stability. The temperature range of the climate; the oxygen content of the atmosphere; the pH, chemistry, and salinity of the ocean—all these are biologically mediated. All have, for hundreds of millions of years, stayed within a range where life can thrive. Lovelock and Margulis surmised that the totality of life is interacting with its environments in ways that regulate these global qualities. They recognized that Earth is, in a sense, a living organism. Lovelock named this creature Gaia.
Margulis and Lovelock showed that the Darwinian picture of biological evolution is incomplete. Darwin identified the mechanism by which life adapts due to changes in the environment, and thus allowed us to see that all life on Earth is a continuum, a proliferation, a genetic diaspora from a common root. In the Darwinian view, Earth was essentially a stage with a series of changing backdrops to which life had to adjust. Yet, what or who was changing the sets? Margulis and Lovelock proposed that the drama of life does not unfold on the stage of a dead Earth, but that, rather, the stage itself is animated, part of a larger living entity, Gaia, composed of the biosphere together with the “nonliving” components that shape, respond to, and cycle through the biota of Earth. Yes, life adapts to environmental change, shaping itself through natural selection. Yet life also pushes back and changes the environment, alters the planet. This is now as obvious as the air you are breathing, which has been oxygenated by life. So evolution is not a series of adaptations to inanimate events, but a system of feedbacks, an exchange. Life has not simply molded itself to the shifting contours of a dynamic Earth. Rather, life and Earth have shaped each other as they’ve co-evolved. When you start looking at the planet in this way, then you see coral reefs, limestone cliffs, deltas, bogs, and islands of bat guano as parts of this larger animated entity. You realize that the entire skin of Earth, and its depths as well, are indeed alive. [Continue reading…]
Charles Leadbeater writes: Heidegger detested René Descartes’s dictum ‘I think, therefore I am’ which located the search for identity in our brains. There, it was secured by a rational process of thought, detached from a physical world that presented itself to the knowing subject as a puzzle to be solved. Descartes’s ideas launched a great inward turn in philosophy with the subject at the centre of the drama confronting the objective world about which he tries to gain knowledge.
Had Heidegger ever come up with a saying to sum up his philosophy it would have been: ‘I dwell, therefore I am.’ For him, identity is bound up with being in the world, which in turn means having a place in it. We don’t live in the abstract space favoured by philosophers, but in a particular place, with specific features and history. We arrive already entangled with the world, not detached from it. Our identity is not secured just in our heads but through our bodies too, how we feel and how we are moved, literally and emotionally.
Instead of presenting it as a puzzle to be solved, Heidegger’s world is one we should immerse ourselves in and care for: it is part of the larger ‘being’ where we all belong. As [Jeff] Malpas puts it, Heidegger argues that we should release ourselves to the world, to find our part in its larger ebb and flow, rather than seek to detach ourselves from it in order to dominate it. [Continue reading…]
The Washington Post reports: In Ethiopia, she is known as “Dinkinesh” — Amharic for “you are marvelous.” It’s an apt name for one of the most complete ancient hominid skeletons ever found, an assemblage of fossilized bones that has given scientists unprecedented insight into the history of humanity.
You probably know her as Lucy.
Discovered in 1974, wedged into a gully in Ethiopia’s Awash Valley, the delicate, diminutive skeleton is both uncannily familiar and alluringly strange. In some ways, the 3.2-million-year-old Australopithecus was a lot like us; her hips, feet and long legs were clearly made for walking. But she also had long arms and dexterous curved fingers, much like modern apes that still swing from the trees.
So, for decades scientists have wondered: Who exactly was Lucy? Was she lumbering and land-bound, like us modern humans? Or did she retain some of the ancient climbing abilities that made her ancestors — and our own — champions of the treetops?
A new study suggests she was a little of both: Though her lower limbs were adapted for bipedalism, she had exceptionally strong arm bones that allowed her to haul herself up branches, researchers reported Wednesday in the journal PLoS One. [Continue reading…]
Live Science reports: After analyzing the crater left by the cosmic impact that ended the age of dinosaurs, scientists say in a new study that the object that smacked into the planet may have punched nearly all the way through Earth’s crust.
The finding could shed light on how impacts can reshape the faces of planets and how such collisions can generate new habitats for life, the researchers said.
Asteroids and comets occasionally pelt Earth’s surface. Still, for the most part, changes to the planet’s surface result largely from erosion due to rain and wind, “as well as plate tectonics, which generates mountains and ocean trenches,” said study co-author Sean Gulick, a marine geophysicist at the University of Texas at Austin.
In contrast, on the solar system’s other rocky planets, erosion and plate tectonics typically have little, if any, influence on the planetary surfaces. “The key driver of surface changes on those planets is constantly getting hit by stuff from space,” Gulick told Live Science. [Continue reading…]