Phys.org reports: There have been many estimates for when the Earth’s inner core was formed, but scientists from the University of Liverpool have used new data indicating that the Earth’s inner core was formed 1–1.5 billion years ago as it “froze” from the surrounding molten iron outer core.
The inner core is Earth’s deepest layer. It is a ball of solid iron, slightly larger than Pluto, surrounded by a liquid outer core. The inner core is a relatively recent addition to our planet, and establishing when it was formed is a topic of vigorous scientific debate, with estimates ranging from 0.5 billion to 2 billion years ago.
In a new study published in Nature, researchers from the University’s School of Environmental Sciences analysed magnetic records from ancient igneous rocks and found that there was a sharp increase in the strength of the Earth’s magnetic field between 1 and 1.5 billion years ago.
This increased magnetic field is a likely indication of the first occurrence of solid iron at Earth’s centre and the point in Earth’s history at which the solid inner core first started to “freeze” out from the cooling molten outer core.
Liverpool palaeomagnetism expert and the study’s lead author, Dr Andy Biggin, said: “This finding could change our understanding of the Earth’s interior and its history.” [Continue reading…]
Lawrence M Krauss writes: Whenever you say anything about your daily life, a scale is implied. Try it out. “I’m too busy” only works for an assumed time scale: today, for example, or this week. Not this century or this nanosecond. “Taxes are onerous” only makes sense for a certain income range. And so on.
Surely the same restriction doesn’t hold true in science, you might say. After all, for centuries after the introduction of the scientific method, conventional wisdom held that there were theories that were absolutely true for all scales, even if we could never be empirically certain of this in advance. Newton’s universal law of gravity, for example, was, after all, universal! It applied to falling apples and falling planets alike, and accounted for every significant observation made under the sun, and over it as well.
With the advent of relativity, and general relativity in particular, it became clear that Newton’s law of gravity was merely an approximation of a more fundamental theory. But the more fundamental theory, general relativity, was so mathematically beautiful that it seemed reasonable to assume that it codified perfectly and completely the behavior of space and time in the presence of mass and energy.
The advent of quantum mechanics changed everything. When quantum mechanics is combined with relativity, it turns out, rather unexpectedly in fact, that the detailed nature of the physical laws that govern matter and energy actually depends on the physical scale at which you measure them. This led to perhaps the biggest unsung scientific revolution of the 20th century: We know of no theory that both makes contact with the empirical world and is absolutely and always true. [Continue reading…]
Eric D. Green, James D. Watson & Francis S. Collins write: Twenty-five years ago, the newly created US National Center for Human Genome Research (now the National Human Genome Research Institute; NHGRI), which the three of us have each directed, joined forces with US and international partners to launch the Human Genome Project (HGP). What happened next represents one of the most historically significant scientific endeavours: a 13-year quest to sequence all three billion base pairs of the human genome.
Even just a few years ago, discussions surrounding the HGP focused mainly on what insights the project had brought or would bring to our understanding of human disease. Only now is it clear that, as well as dramatically accelerating biomedical research, the HGP initiated a new way of doing science.
As biology’s first large-scale project, the HGP paved the way for numerous consortium-based research ventures. The NHGRI alone has been involved in launching more than 25 such projects since 2000. These have presented new challenges to biomedical research — demanding, for instance, that diverse groups from different countries and disciplines come together to share and analyse vast data sets. [Continue reading…]
The Independent reports: The most comprehensive study of the human genome to date has found that a sizeable minority of people are walking around with some of their genes missing, without any apparent ill effects.
A project to sequence and analyse the entire genetic code of more than 2,500 people drawn from 26 different ethnic populations from around the world has revealed that some genes do not seem to be as essential for health and life as previously believed.
The finding is just one to have emerged from the 1,000 Genomes Project, set up in 2008 to study genetic variation in at least 1,000 people in order to understand the variety of DNA types within the human population, the researchers said. [Continue reading…]
Sherry Turkle writes: Studies of conversation both in the laboratory and in natural settings show that when two people are talking, the mere presence of a phone on a table between them or in the periphery of their vision changes both what they talk about and the degree of connection they feel. People keep the conversation on topics where they won’t mind being interrupted. They don’t feel as invested in each other. Even a silent phone disconnects us.
In 2010, a team at the University of Michigan led by the psychologist Sara Konrath put together the findings of 72 studies that were conducted over a 30-year period. They found a 40 percent decline in empathy among college students, with most of the decline taking place after 2000.
Across generations, technology is implicated in this assault on empathy. We’ve gotten used to being connected all the time, but we have found ways around conversation — at least around conversation that is open-ended and spontaneous, in which we play with ideas and allow ourselves to be fully present and vulnerable. But it is in this type of conversation — where we learn to make eye contact, to become aware of another person’s posture and tone, to comfort one another and respectfully challenge one another — that empathy and intimacy flourish. In these conversations, we learn who we are.
Of course, we can find empathic conversations today, but the trend line is clear. It’s not only that we turn away from talking face to face to chat online. It’s that we don’t allow these conversations to happen in the first place because we keep our phones in the landscape. [Continue reading…]
Yuval Noah Harari writes: This is the basic lesson of evolutionary psychology: a need shaped thousands of generations ago continues to be felt subjectively even if it is no longer necessary for survival and reproduction in the present. Tragically, the agricultural revolution gave humans the power to ensure the survival and reproduction of domesticated animals while ignoring their subjective needs. In consequence, domesticated animals are collectively the most successful animals in the world, and at the same time they are individually the most miserable animals that have ever existed.
The situation has only worsened over the last few centuries, during which time traditional agriculture gave way to industrial farming. In traditional societies such as ancient Egypt, the Roman empire or medieval China, humans had a very partial understanding of biochemistry, genetics, zoology and epidemiology. Consequently, their manipulative powers were limited. In medieval villages, chickens ran free between the houses, pecked seeds and worms from the garbage heap, and built nests in the barn. If an ambitious peasant tried to lock 1,000 chickens inside a crowded coop, a deadly bird-flu epidemic would probably have resulted, wiping out all the chickens, as well as many villagers. No priest, shaman or witch doctor could have prevented it. But once modern science had deciphered the secrets of birds, viruses and antibiotics, humans could begin to subject animals to extreme living conditions. With the help of vaccinations, medications, hormones, pesticides, central air-conditioning systems and automatic feeders, it is now possible to cram tens of thousands of chickens into tiny coops, and produce meat and eggs with unprecedented efficiency.
The fate of animals in such industrial installations has become one of the most pressing ethical issues of our time, certainly in terms of the numbers involved. These days, most big animals live on industrial farms. We imagine that our planet is populated by lions, elephants, whales and penguins. That may be true of the National Geographic channel, Disney movies and children’s fairytales, but it is no longer true of the real world. The world contains 40,000 lions but, by way of contrast, there are around 1 billion domesticated pigs; 500,000 elephants and 1.5 billion domesticated cows; 50 million penguins and 20 billion chickens. [Continue reading…]
From the earliest of times, philosophers and scientists have tried to understand the relationship between animate and inanimate matter. But the origin of life remains one of the major scientific riddles to be solved.
The building blocks of life as we know it essentially consist of four groups of chemicals: proteins, nucleic acids, lipids (fats) and carbohydrates. There was much excitement about the possibility of finding amino acids (the ingredients for proteins) on comets or distant planets because some scientists believe that life on Earth, or at least its building blocks, may have originally come from outer space and been deposited by meteorites.
But there are now extensive examples of how natural processes on Earth can convert simple molecules into these building blocks. Scientists have demonstrated in the lab how to make amino acids, simple sugars, lipids and even nucleotides – the basic units of DNA – from very simple chemicals, under conditions that could have existed on the early Earth. What still eludes them is the point in the process when a chemical stew becomes an organism. How did the first lifeforms become alive?
Jason Cohn and Camille Servan-Schreiber write: Growing up in Los Angeles and Paris, we both were raised secular and embraced atheism early and easily. It’s not that we didn’t ponder life’s mysteries; it’s just that after we reasoned away our religious questions, we stopped worrying about them and moved on. When we learned about the former pastor Jerry DeWitt’s struggles with being an “outed” atheist in rural Louisiana, we realized for the first time just how difficult being an atheist can be in some communities, where religion is woven deeply into the social fabric. [Continue reading…]
Paleogenetics is helping to solve the great mystery of prehistory: How did humans spread out over the earth?
Jacob Mikanowski writes: Most of human history is prehistory. Of the 200,000 or more years that humans have spent on Earth, only a tiny fraction have been recorded in writing. Even in our own little sliver of geologic time, the 12,000 years of the Holocene, whose warm weather and relatively stable climate incubated the birth of agriculture, cities, states, and most of the other hallmarks of civilisation, writing has been more the exception than the rule.
Professional historians can’t help but pity their colleagues on the prehistoric side of the fence. Historians are accustomed to drawing on vast archives, but archaeologists must assemble and interpret stories from scant material remains. In the annals of prehistory, cultures are designated according to modes of burial such as ‘Single Grave’, or after styles of arrowhead, such as ‘Western Stemmed Point’. Whole peoples are reduced to styles of pottery, such as Pitted Ware, Corded Ware or Funnel Beaker, all of them spread across the map in confusing, amoeba-like blobs.
In recent years, archaeologists have become reluctant to infer too much from assemblages of ceramics, weapons and grave goods. For at least a generation, they have been drilled on the mantra that ‘pots are not people’. Material culture is not a proxy for identity. Artefacts recovered from a dig can provide a wealth of information about a people’s mode of subsistence, funeral rites and trade contacts, but they are not a reliable guide to their language or ethnicity – or their patterns of migration.
Before the Second World War, prehistory was seen as a series of invasions, with proto-Celts and Indo-Aryans swooping down on unsuspecting swaths of Europe and Asia like so many Vikings, while megalith builders wandered between continents in indecisive meanders. After the Second World War, this view was replaced by the processual school, which attributed cultural changes to internal adaptations. Ideas and technologies might travel, but people by and large stayed put. Today, however, migration is making a comeback.
Much of this shift has to do with the introduction of powerful new techniques for studying ancient DNA. The past five years have seen a revolution in the availability and scope of genetic testing that can be performed on prehistoric human and animal remains. Ancient DNA is tricky to work with. Usually it’s degraded, chemically altered and cut into millions of short fragments. But recent advances in sequencing technology have made it possible to sequence whole genomes from samples reaching back thousands, and tens of thousands, of years. Whole-genome sequencing yields orders of magnitude more data than organelle-based testing, and allows geneticists to make detailed comparisons between individuals and populations. Those comparisons are now illuminating new branches of the human family tree. [Continue reading…]
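The “orders of magnitude” claim above can be made concrete with a rough back-of-the-envelope comparison. The genome sizes below are standard approximate figures, assumed here for illustration rather than taken from the excerpt: the human mitochondrial genome is about 16,569 base pairs, while the haploid nuclear genome runs to roughly 3.2 billion.

```python
# Rough sketch: how much more sequence whole-genome testing yields
# compared with mitochondrial (organelle-based) testing alone.
# Genome sizes are approximate public figures, assumed for illustration.
MITOCHONDRIAL_BP = 16_569        # human mtDNA, base pairs
NUCLEAR_BP = 3_200_000_000       # haploid human nuclear genome, ~3.2 Gb

ratio = NUCLEAR_BP / MITOCHONDRIAL_BP
print(f"Whole-genome sequencing covers roughly {ratio:,.0f} times more sequence")
# roughly 193,000x — about five orders of magnitude
```

That five-orders-of-magnitude jump in available data is what makes the fine-grained comparisons between individuals and populations described above possible.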
Mike Jay writes: Half an hour on the slow train from Antwerp, surrounded by flat, sparsely populated farmlands, Geel (pronounced, roughly, ‘Hyale’) strikes the visitor as a quiet, tidy but otherwise unremarkable Belgian market town. Yet its story is unique. For more than 700 years its inhabitants have taken the mentally ill and disabled into their homes as guests or ‘boarders’. At times, these guests have numbered in the thousands, and arrived from all over Europe. There are several hundred in residence today, sharing their lives with their host families for years, decades or even a lifetime. One boarder recently celebrated 50 years in the Flemish town, arranging a surprise party at the family home. Friends and neighbours were joined by the mayor and a full brass band.
Among the people of Geel, the term ‘mentally ill’ is never heard: even words such as ‘psychiatric’ and ‘patient’ are carefully hedged with finger-waggling and scare quotes. The family care system, as it’s known, is resolutely non-medical. When boarders meet their new families, they do so, as they always have, without a backstory or clinical diagnosis. If a word is needed to describe them, it’s often a positive one such as ‘special’, or at worst, ‘different’. This might in fact be more accurate than ‘mentally ill’, since the boarders have always included some who would today be diagnosed with learning difficulties or special needs. But the most common collective term is simply ‘boarders’, which defines them at the most pragmatic level by their social, not mental, condition. These are people who, whatever their diagnosis, have come here because they’re unable to cope on their own, and because they have no family or friends who can look after them.
The origins of the Geel story lie in the 13th century, in the martyrdom of Saint Dymphna, a legendary seventh-century Irish princess whose pagan father went mad with grief after the death of his Christian wife and demanded that Dymphna marry him. To escape the king’s incestuous passion, Dymphna fled to Europe and holed up in the marshy flatlands of Flanders. Her father finally tracked her down in Geel, and when she refused him once more, he beheaded her. Over time, she became revered as a saint with powers of intercession for the mentally afflicted, and her shrine attracted pilgrims and tales of miraculous cures. [Continue reading…]
Tom Vanderbilt writes: In early 1999, during the halftime of a University of Washington basketball game, a time capsule from 1927 was opened. Among the contents of this portal to the past were some yellowing newspapers, a Mercury dime, a student handbook, and a building permit. The crowd promptly erupted into boos. One student declared the items “dumb.”
Such disappointment in time capsules seems to be endemic, suggests William E. Jarvis in his book Time Capsules: A Cultural History. A headline from The Onion, he notes, sums it up: “Newly unearthed time capsule just full of useless old crap.” Time capsules, after all, exude a kind of pathos: They show us that the future was not quite as advanced as we thought it would be, nor did it come as quickly. The past, meanwhile, turns out to not be as radically distinct as we thought.
In his book Predicting the Future, Nicholas Rescher writes that “we incline to view the future through a telescope, as it were, thereby magnifying and bringing nearer what we can manage to see.” So too do we view the past through the other end of the telescope, making things look farther away than they actually were, or losing sight of some things altogether.
These observations apply neatly to technology. We don’t have the personal flying cars we predicted we would. Coal, notes the historian David Edgerton in his book The Shock of the Old, was a bigger source of power at the dawn of the 21st century than in sooty 1900; steam was more significant in 1900 than 1800.
But when it comes to culture we tend to believe not that the future will be very different than the present day, but that it will be roughly the same. Try to imagine yourself at some future date. Where do you imagine you will be living? What will you be wearing? What music will you love?
Chances are, that person resembles you now. As the psychologist George Loewenstein and colleagues have argued, in a phenomenon they termed “projection bias,” people “tend to exaggerate the degree to which their future tastes will resemble their current tastes.” [Continue reading…]
Most of us would like to think scientific debate does not operate like the comments section of online news articles. These are frequently characterised by inflexibility, truculence and expostulation. Scientists are generally a little more civil, but sometimes not by much.
There is a more fundamental issue here than politeness, though. Science has a reputation as an arbiter of fact above and beyond just personal opinion or bias. The term “scientific method” suggests there exists an agreed upon procedure for processing evidence which, while not infallible, is at least impartial.
So when even the most respected scientists arrive at deeply held yet conflicting convictions after being presented with the same evidence, it undermines the perceived impartiality of the scientific method. It demonstrates that science involves an element of subjective or personal judgement.
Yet personal judgements are not mere occasional intruders on science; they are a necessary part of almost every step of reasoning about evidence.
Mark Boyle writes: With little idea of what I was to expect, or how I was to go about it, seven years ago I began living without money. Originally intended as a one-year experiment in ecological living, the project became an exploration of how it felt as a human being to live without the trappings and security that money had long since afforded me. While terrifying and tough to begin with, by the end of the first year I somehow found myself more content, healthier and more at peace than I had ever been. And although three years later I made a difficult decision to re-enter the monetary world – to establish projects that would enable others to loosen the grip that money has on their lives – I took from it many lessons that have changed my life forever.
For the first time I experienced how connected to, and dependent on, the people and the natural world around me I was — something I had previously only intellectualised. It is not until you become physically aware of how your own health is entirely reliant on the health of the great web of life that ideas such as deep ecology absorb themselves into your arteries, sinews and bones.
If the air that filled my lungs became polluted, if the nutrients in the soil that produced my food became depleted, or if the spring water which made up 60% of my body became poisoned, my own health would suffer accordingly. This seems like common sense, but you wouldn’t think so by observing the way we treat the natural world today. Over time, even the boundaries of what I considered to be “I” became less and less clear. [Continue reading…]
Candida Moss writes: On Thursday morning The New York Times ran a high-profile story about the discovery of a new human ancestor species — Homo naledi — in the Rising Star cave in South Africa. The discovery, announced by professor Lee Berger, was monumental because the evidence for Homo naledi was discovered in a burial chamber. Concern for burial is usually seen as a distinctive characteristic of humankind, so the possibility that this new non-human hominid species was “deliberately disposing of its dead” was especially exciting.
To anthropologists the article was not only newsworthy but also humorous, for the Times illustrated the piece with a photograph of Australopithecus africanus, a species already well-known. This howler of a mistake (at least to self-identified science nerds) was also somewhat understandable, because the differences between the two skulls are sufficiently subtle that a lay viewer can easily mistake them for one another. In fact, some have pointed to that similarity and wondered (while acknowledging the importance of the discovery) if it is indeed a “new species.” And that gets to the deeper issue: What and who were our ancestors?
It might seem as if the answer to this question is simply a question of biology, but in his new book Tales of the Ex-Apes: How we think about human evolution anthropologist Jonathan Marks argues that the story we tell about our origins, the study of our evolutionary tree, has cultural roots. Evolution isn’t just a question of biology, he argues, it’s also a question of mythology. Our scientific facts, he says, are the product of bioculture and biopolitics. [Continue reading…]
As a wildlife veterinarian, I often get asked about bats. I like bats, and I am always eager to talk about how interesting they are. Unfortunately, the question is often not about biology but instead “what should I do about the ones in my roof?”
With some unique talents and remarkable sex lives, bats are actually one of the most interesting, diverse and misunderstood groups of animals. Contrary to popular belief, they are beautiful creatures. Not necessarily in the cuddly, human-like sense – although some fruit bats, with doe-like brown eyes and button noses, could be considered so – but they are beautifully designed.
This couldn’t be illustrated better than by the discovery of the oldest known complete bat fossil, more than 53 million years old yet with a similar wing design to those flying around today. To put it in perspective, 50 million years ago our ancestors were still swinging from the trees and would certainly not be recognised as human. But even then bats already had the combination of thin, long forearms and fingers covered by an extremely thin, strong membrane, which allowed them to master the art of powered, agile flight.
Soon afterwards, fossils record another game-changing adaptation in the evolution of most bats, and that is the ability to accurately locate prey using sound (what we call echolocation). These two adaptations early in their history gave bats an evolutionary edge compared to some other mammals, and allowed them to diversify into almost all habitats, on every continent except Antarctica.
Carl Zimmer writes: Recently a team of pathologists at Leiden University Medical Center in the Netherlands carried out an experiment that might seem doomed to failure.
They collected tissue from 26 women who had died during or just after pregnancy. All of them had been carrying sons. The pathologists then stained the samples to check for Y chromosomes.
Essentially, the scientists were looking for male cells in female bodies. And their search was stunningly successful.
As reported last month in the journal Molecular Human Reproduction, the researchers found cells with Y chromosomes in every tissue sample they examined. These male cells were certainly uncommon — at their most abundant, they only made up about 1 in every 1,000 cells. But male cells were present in every organ that the scientists studied: brains, hearts, kidneys and others.
In the 1990s, scientists found the first clues that cells from both sons and daughters can escape from the uterus and spread through a mother’s body. They dubbed the phenomenon fetal microchimerism, after the chimera, a monster from Greek mythology that was part lion, goat and dragon.
But fetal cells don’t just drift passively. Studies of female mice show that fetal cells that end up in their hearts develop into cardiac tissue. “They’re becoming beating heart cells,” said Dr. J. Lee Nelson, an expert on microchimerism at the Fred Hutchinson Cancer Research Center in Seattle.
The new study suggests that women almost always acquire fetal cells each time they get pregnant. They have been detected as early as seven weeks into a pregnancy. In later years, the cells may disappear from their bodies, but sometimes the cells settle in for a lifetime. [Continue reading…]
We have all been raised to believe that civilization is, in large part, sustained by law and order. Without complex social institutions and some form of governance, we would be at the mercy of the law of the jungle — so the argument goes.
But there is a basic flaw in this Hobbesian view of a collective human need to tame the savagery in our nature.
For human beings to be vulnerable to the selfish drives of those around them, they generally need to possess things that are worth stealing. For things to be worth stealing, they must have durable value. People who own nothing have little need to worry about thieves.
While Jared Diamond has argued that civilization arose in regions where agrarian societies could accumulate food surpluses, new research suggests that the value of cereal crops did not derive simply from the fact that they could be stored, but rather from the fact that, having been stored, they could subsequently be stolen or confiscated.
Joram Mayshar, Omer Moav, Zvika Neeman, and Luigi Pascali write: In a recent paper (Mayshar et al. 2015), we contend that fiscal capacity and viable state institutions are conditioned to a major extent by geography. Thus, like Diamond, we argue that geography matters a great deal. But in contrast to Diamond, and against conventional opinion, we contend that it is not high farming productivity and the availability of food surplus that accounts for the economic success of Eurasia.
- We propose an alternative mechanism, by which environmental factors determined the appropriability of crops and thereby the emergence of complex social institutions.
To understand why surplus is neither necessary nor sufficient for the emergence of hierarchy, consider a hypothetical community of farmers who cultivate cassava (a major source of calories in sub-Saharan Africa, and the main crop cultivated in Nigeria), and assume that the annual output is well above subsistence. Cassava is a perennial root that is highly perishable upon harvest. Since this crop rots shortly after harvest, it isn’t stored and it is thus difficult to steal or confiscate. As a result, the assumed available surplus would not facilitate the emergence of a non-food producing elite, and may be expected to lead to a population increase.
Consider now another hypothetical farming community that grows a cereal grain – such as wheat, rice or maize – yet with an annual produce that just meets each family’s subsistence needs, without any surplus. Since the grain has to be harvested within a short period and then stored until the next harvest, a visiting robber or tax collector could readily confiscate part of the stored produce. Such ongoing confiscation may be expected to lead to a downward adjustment in population density, but it will nevertheless facilitate the emergence of a non-producing elite, even though there was no surplus.
This simple scenario shows that surplus isn’t a precondition for taxation. It also illustrates our alternative theory that the transition to agriculture enabled hierarchy to emerge only where the cultivated crops were vulnerable to appropriation.
- In particular, we contend that the Neolithic emergence of fiscal capacity and hierarchy was conditioned on the cultivation of appropriable cereals as the staple crops, in contrast to less appropriable staples such as roots and tubers.
According to this theory, complex hierarchy did not emerge among hunter-gatherers because hunter-gatherers essentially live from hand-to-mouth, with little that can be expropriated from them to feed a would-be elite. [Continue reading…]
As the 17th-century English playwright William Congreve said: “Music has charms to soothe a savage breast.” It is known that listening to music can significantly enhance our health and general feelings of well-being.
An important and growing area of research concerns how music helps to mitigate pain and its negative effects. Music has been shown to reduce anxiety, fear, depression, pain-related distress and blood pressure. It has been found to lower pain-intensity levels and reduce the opioid requirements of patients with post-operative pain.
Music has helped children undergoing numerous medical and dental procedures. And it has been demonstrated to work in a variety of other clinical settings such as palliative care, paediatrics, surgery and anaesthesia.
So what makes music so effective at making us feel better? The research has often drawn on theories around how nerve impulses in the central nervous system are affected by our thought processes and emotions. Anything that distracts us from pain may reduce the extent to which we focus on it, and music may be particularly powerful in this regard. The beauty is that once we understand how music relates to pain, we have the potential to treat ourselves.