Scientific American reports: Songbirds stutter, babble when young, become mute if parts of their brains are damaged, learn how to sing from their elders and can even be “bilingual” — in other words, songbirds’ vocalizations share a lot of traits with human speech. However, that similarity goes beyond behavior, researchers have found. Even though humans and birds are separated by millions of years of evolution, the genes that give us our ability to learn speech have much in common with those that lend birds their warble.
A four-year-long effort involving more than 100 researchers around the world put the power of nine supercomputers into analyzing the genomes of 48 species of birds. The results, published this week in a package of eight articles in Science and 20 papers in other journals, provide the most complete picture of the bird family tree thus far. The project has also uncovered genetic signatures in song-learning bird brains that have surprising similarities to the genetics of speech in humans, a finding that could help scientists study human speech.
The analysis suggests that most modern birds arose in an impressive speciation event, a “big bang” of avian diversification, in the 10 million years immediately following the extinction of dinosaurs. This period is more recent than posited in previous genetic analyses, but it lines up with the fossil record. By delving deeper into the rich data set, research groups identified when birds lost their teeth, investigated the relatively slow evolution of crocodiles and outlined the similarities between birds’ and humans’ vocal learning ability, among other findings. [Continue reading…]
The conception of perception shaped by context
Co-operation
Patrick Bateson writes: I am disturbed by the way we have created a social environment in which so much emphasis is laid on competition – on forging ahead while trampling on others. The ideal of social cooperation has come to be treated as high-sounding flabbiness, while individual selfishness is regarded as the natural and sole basis for a realistic approach to life. The image of the struggle for existence lies at the back of it, seriously distorting the view we have of ourselves and wrecking mutual trust.
The fashionable philosophy of individualism draws its respectability in part from an appeal to biology and specifically to the Darwinian theory of evolution by natural selection. Now, Darwin’s theory remains the most powerful explanation for the way that each plant and animal evolved so that it is exquisitely adapted to its environment. The theory works just as well for behaviour as it does for anatomy. Individual animals differ in the way they behave. Those that behave in a manner that is better suited to the conditions in which they live are more likely to survive. Finally, if their descendants resemble them in terms of behaviour, then in the course of evolution, the better adapted forms of behaviour will replace those that are not so effective in keeping the individual alive.
It is the Darwinian concept of differential survival that has been picked up and used so insistently in political rhetoric. Biology is thought to be all about competition – and that supposedly means constant struggle. This emphasis has had an insidious effect on the public mind and has encouraged the belief in individual selfishness and in confrontation. Competition is now widely seen as the mainspring of human activity, at least in Western countries. Excellence in the universities and in the arts is thought to be driven by the same ruthless process that supposedly works so well on the sports field or in the marketplace, and they all have a lot in common with what supposedly happens in the jungle. The image of selfish genes competing with each other in the course of evolution has fused imperceptibly with the notion of selfish individuals competing with each other in the course of their lifetimes. Individuals only thrive by winning. The argument has become so much a part of conventional belief that it is hard at first to see what is wrong with it.
To put it bluntly, thought has been led seriously astray by the rhetoric. [Continue reading…]
Thousands of Einstein documents now accessible online
The New York Times reports: They have been called the Dead Sea Scrolls of physics. Since 1986, the Princeton University Press and the Hebrew University of Jerusalem, to whom Albert Einstein bequeathed his copyright, have been engaged in a mammoth effort to study some 80,000 documents he left behind.
Starting on Friday, when Digital Einstein is introduced, anyone with an Internet connection will be able to share in the letters, papers, postcards, notebooks and diaries that Einstein left scattered in Princeton and in other archives, attics and shoeboxes around the world when he died in 1955.
The Einstein Papers Project, currently edited by Diana Kormos-Buchwald, a professor of physics and the history of science at the California Institute of Technology, has already published 13 volumes in print out of a projected 30. [Continue reading…]
How Darkness Visible shined a light
Peter Fulham writes: Twenty-five years ago, in December, 1989, Darkness Visible, William Styron’s account of his descent into the depths of clinical depression and back, appeared in Vanity Fair. The piece revealed in unsparing detail how Styron’s lifelong melancholy at once gave way to a seductive urge to end his own life. A few months later, he released the essay as a book, augmenting the article with a recollection of when the illness first took hold of him: in Paris, as he was about to accept the 1985 Prix mondial Cino Del Duca, the French literary award. By the author’s own acknowledgement, the response from readers was unprecedented. “This was just overwhelming. It was just by the thousands that the letters came in,” he told Charlie Rose. “I had not really realized that it was going to touch that kind of a nerve.”
Styron may have been startled by the outpouring of mail, but in many ways, it’s easy to understand. The academic research on mental illness at the time was relatively comprehensive, but no one to date had offered the kind of report that Styron gave to the public: a firsthand account of what it’s like to have the monstrous condition overtake you. He also exposed the inadequacy of the word itself, which is still used interchangeably to describe a case of the blues, rather than the tempestuous agony sufferers know too well.
Depression is notoriously hard to describe, but Styron managed to split the atom. “I’d feel the horror, like some poisonous fogbank, roll in upon my mind,” he wrote in one chapter. In another: “It is not an immediately identifiable pain, like that of a broken limb. It may be more accurate to say that despair… comes to resemble the diabolical discomfort of being imprisoned in a fiercely overheated room. And because no breeze stirs this cauldron… it is entirely natural that the victim begins to think ceaselessly of oblivion.”
As someone who has fought intermittently with the same illness since college, I found those sentences cathartic, just as I suspect they were for the many readers who wrote to Styron disclosing unequivocally that he had saved their lives. As brutal as depression can be, one of the main ways a person can restrain it is through solidarity. You are not alone, Styron reminded his readers, and the fog will lift. Patience is paramount. [Continue reading…]
A universal logic of discernment
Natalie Wolchover writes: When in 2012 a computer learned to recognize cats in YouTube videos and just last month another correctly captioned a photo of “a group of young people playing a game of Frisbee,” artificial intelligence researchers hailed yet more triumphs in “deep learning,” the wildly successful set of algorithms loosely modeled on the way brains grow sensitive to features of the real world simply through exposure.
Using the latest deep-learning protocols, computer models consisting of networks of artificial neurons are becoming increasingly adept at image, speech and pattern recognition — core technologies in robotic personal assistants, complex data analysis and self-driving cars. But for all their progress in training computers to pick out salient features from other, irrelevant bits of data, researchers have never fully understood why these algorithms, or biological learning itself, work.
Now, two physicists have shown that one form of deep learning works exactly like one of the most important and ubiquitous mathematical techniques in physics, a procedure for calculating the large-scale behavior of physical systems such as elementary particles, fluids and the cosmos.
The new work, completed by Pankaj Mehta of Boston University and David Schwab of Northwestern University, demonstrates that a statistical technique called “renormalization,” which allows physicists to accurately describe systems without knowing the exact state of all their component parts, also enables the artificial neural networks to categorize data as, say, “a cat” regardless of its color, size or posture in a given video.
“They actually wrote down on paper, with exact proofs, something that people only dreamed existed,” said Ilya Nemenman, a biophysicist at Emory University. “Extracting relevant features in the context of statistical physics and extracting relevant features in the context of deep learning are not just similar words, they are one and the same.”
As for our own remarkable knack for spotting a cat in the bushes, a familiar face in a crowd or indeed any object amid the swirl of color, texture and sound that surrounds us, strong similarities between deep learning and biological learning suggest that the brain may also employ a form of renormalization to make sense of the world. [Continue reading…]
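The physics half of that analogy is easy to see in miniature. The sketch below is my own illustration, not the construction in the Mehta–Schwab paper (which maps variational renormalization onto stacked restricted Boltzmann machines): it applies a real-space “block spin” renormalization step to a grid of ±1 spins, replacing each small block with its majority value. Each pass discards microscopic detail while keeping large-scale structure, which is the spirit in which successive layers of a deep network keep only the features relevant to recognizing, say, a cat. The function name and the random tie-breaking rule are assumptions of the sketch.

```python
import numpy as np

def block_spin(config, b=2):
    """One real-space renormalization ("block spin") step: coarse-grain a
    2D grid of +/-1 spins by replacing each b-by-b block with its majority
    sign, discarding small-scale detail while keeping large-scale structure."""
    L = config.shape[0]
    assert L % b == 0, "lattice size must be divisible by the block size"
    # Sum each b-by-b block, then take the sign of the sum (majority rule)
    blocks = config.reshape(L // b, b, L // b, b).sum(axis=(1, 3))
    coarse = np.sign(blocks)
    # Break ties (blocks summing to zero) at random so output stays in {-1, +1}
    ties = blocks == 0
    coarse[ties] = np.random.choice([-1, 1], size=int(ties.sum()))
    return coarse

# Example: coarse-grain a random 16x16 configuration twice
spins = np.random.choice([-1, 1], size=(16, 16))
once = block_spin(spins)    # 8x8 summary of the original grid
twice = block_spin(once)    # 4x4 summary of the summary
print(once.shape, twice.shape)
```

Each call to block_spin plays the role of one coarse-graining step; stacking the calls, like stacking network layers, keeps throwing away detail that does not matter at the larger scale.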
There is no language instinct. Chomsky was wrong
Vyvyan Evans writes: Imagine you’re a traveller in a strange land. A local approaches you and starts jabbering away in an unfamiliar language. He seems earnest, and is pointing off somewhere. But you can’t decipher the words, no matter how hard you try.
That’s pretty much the position of a young child when she first encounters language. In fact, she would seem to be in an even more challenging position. Not only is her world full of ceaseless gobbledygook; unlike our hypothetical traveller, she isn’t even aware that these people are attempting to communicate. And yet, by the age of four, every cognitively normal child on the planet has been transformed into a linguistic genius: this before formal schooling, before they can ride bicycles, tie their own shoelaces or do rudimentary addition and subtraction. It seems like a miracle. The task of explaining this miracle has been, arguably, the central concern of the scientific study of language for more than 50 years.
In the 1960s, the US linguist and philosopher Noam Chomsky offered what looked like a solution. He argued that children don’t in fact learn their mother tongue – or at least, not right down to the grammatical building blocks (the whole process was far too quick and painless for that). He concluded that they must be born with a rudimentary body of grammatical knowledge – a ‘Universal Grammar’ – written into the human DNA. With this hard-wired predisposition for language, it should be a relatively trivial matter to pick up the superficial differences between, say, English and French. The process works because infants have an instinct for language: a grammatical toolkit that works on all languages the world over.
At a stroke, this device removes the pain of learning one’s mother tongue, and explains how a child can pick up a native language in such a short time. It’s brilliant. Chomsky’s idea dominated the science of language for four decades. And yet it turns out to be a myth. A welter of new evidence has emerged over the past few years, demonstrating that Chomsky is plain wrong. [Continue reading…]
Long before we learned how to make wine, our ancestors acquired a taste for rotten fruit
Live Science: Human ancestors may have begun evolving the knack for consuming alcohol about 10 million years ago, long before modern humans began brewing booze, researchers say.
The ability to break down alcohol likely helped human ancestors make the most out of rotting, fermented fruit that fell onto the forest floor, the researchers said. Therefore, knowing when this ability developed could help researchers figure out when these human ancestors began moving to life on the ground, as opposed to mostly in trees, as earlier human ancestors had lived.
“A lot of aspects about the modern human condition — everything from back pain to ingesting too much salt, sugar and fat — goes back to our evolutionary history,” said lead study author Matthew Carrigan, a paleogeneticist at Santa Fe College in Gainesville, Florida. “We wanted to understand more about the modern human condition with regards to ethanol,” he said, referring to the kind of alcohol found in rotting fruit and that’s also used in liquor and fuel. [Continue reading…]
Milky Way over Devils Tower
Incredible panorama of the Milky Way over Devils Tower in Wyoming (Photo: David Lane) http://t.co/zrGJb1Lu8o pic.twitter.com/sCmADwAPX4
— Meredith Frost (@MeredithFrost) December 1, 2014
Why Devils Tower and not Devil’s Tower? The question might sound trivial when posed next to the expanse of the Milky Way, but for what it’s worth, here’s the answer from the United States Board on Geographic Names:
Since its inception in 1890, the U.S. Board on Geographic Names has discouraged the use of the possessive form—the genitive apostrophe and the “s”. The possessive form using an “s” is allowed, but the apostrophe is almost always removed. The Board’s archives contain no indication of the reason for this policy.
However, there are many names in the GNIS database that do carry the genitive apostrophe, because the Board chooses not to apply its policies to some types of features. Although the legal authority of the Board includes all named entities except Federal Buildings, certain categories—broadly determined to be “administrative”—are best left to the organization that administers them. Examples include schools, churches, cemeteries, hospitals, airports, shopping centers, etc. The Board promulgates the names, but leaves issues such as the use of the genitive or possessive apostrophe to the data owners.
Myths attempting to explain the policy include the idea that the apostrophe looks too much like a rock in water when printed on a map, and is therefore a hazard, or that in the days of “stick-up type” for maps, the apostrophe would become lost and create confusion. The probable explanation is that the Board does not want to show possession for natural features because, “ownership of a feature is not in and of itself a reason to name a feature or change its name.”
Since 1890, only five Board decisions have allowed the genitive apostrophe for natural features: Martha’s Vineyard (1933), after an extensive local campaign; Ike’s Point in New Jersey (1944), because “it would be unrecognizable otherwise”; John E’s Pond in Rhode Island (1963), because otherwise it would be confused as John S Pond (note the lack of the use of a period, which is also discouraged); Carlos Elmer’s Joshua View in Arizona (1995), at the specific request of the Arizona State Board on Geographic and Historic Names because “otherwise three apparently given names in succession would dilute the meaning” (Joshua refers to a stand of trees); and Clark’s Mountain in Oregon (2002), approved at the request of the Oregon Board to correspond with the personal references of Lewis and Clark.
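The policy as described reduces to a simple rule plus a short list of exceptions, which makes it easy to sketch. The snippet below is purely illustrative, not an official algorithm: the function name is made up, and how names the Board has never ruled on should be handled is my assumption. It strips the genitive apostrophe from a natural-feature name unless that name is one of the five approved exceptions listed above.

```python
# Illustrative sketch of the naming convention described above (not an
# official tool): natural-feature names keep the "s" but lose the genitive
# apostrophe, except for the five names the Board has explicitly approved.
APPROVED_EXCEPTIONS = {
    "Martha's Vineyard",           # 1933
    "Ike's Point",                 # 1944
    "John E's Pond",               # 1963
    "Carlos Elmer's Joshua View",  # 1995
    "Clark's Mountain",            # 2002
}

def bgn_form(name: str) -> str:
    """Return the Board-style form of a natural-feature name: unchanged if it
    is an approved exception, otherwise with the apostrophe removed."""
    if name in APPROVED_EXCEPTIONS:
        return name
    return name.replace("'", "")

print(bgn_form("Devil's Tower"))     # -> Devils Tower
print(bgn_form("Martha's Vineyard")) # -> Martha's Vineyard (approved, 1933)
```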
Complex life may be possible in only 10% of all galaxies
Science: The universe may be a lonelier place than previously thought. Of the estimated 100 billion galaxies in the observable universe, only one in 10 can support complex life like that on Earth, a pair of astrophysicists argues. Everywhere else, stellar explosions known as gamma ray bursts would regularly wipe out any life forms more elaborate than microbes. The detonations also kept the universe lifeless for billions of years after the big bang, the researchers say.
“It’s kind of surprising that we can have life only in 10% of galaxies and only after 5 billion years,” says Brian Thomas, a physicist at Washburn University in Topeka who was not involved in the work. But “my overall impression is that they are probably right” within the uncertainties in a key parameter in the analysis.
Scientists have long mused over whether a gamma ray burst could harm Earth. The bursts were discovered in 1967 by satellites designed to spot nuclear weapons tests and now turn up at a rate of about one a day. They come in two types. Short gamma ray bursts last less than a second or two; they most likely occur when two neutron stars or black holes spiral into each other. Long gamma ray bursts last for tens of seconds and occur when massive stars burn out, collapse, and explode. They are rarer than the short ones but release roughly 100 times as much energy. A long burst can outshine the rest of the universe in gamma rays, which are highly energetic photons. [Continue reading…]
How broken sleep can unleash creativity
Karen Emslie writes: It is 4.18am. In the fireplace, where logs burned, there are now orange lumps that will soon be ash. Orion the Hunter is above the hill. Taurus, a sparkling V, is directly overhead, pointing to the Seven Sisters. Sirius, one of Orion’s heel dogs, is pumping red-blue-violet, like a galactic disco ball. As the night moves on, the old dog will set into the hill.
It is 4.18am and I am awake. Such early waking is often viewed as a disorder, a glitch in the body’s natural rhythm – a sign of depression or anxiety. It is true that when I wake at 4am I have a whirring mind. And, even though I am a happy person, if I lie in the dark my thoughts veer towards worry. I have found it better to get up than to lie in bed teetering on the edge of nocturnal lunacy.
If I write in these small hours, black thoughts become clear and colourful. They form themselves into words and sentences, hook one to the next – like elephants walking trunk to tail. My brain works differently at this time of night; I can only write, I cannot edit. I can only add, I cannot take away. I need my day-brain for finesse. I will work for several hours and then go back to bed.
All humans, animals, insects and birds have clocks inside, biological devices controlled by genes, proteins and molecular cascades. These inner clocks are connected to the ceaseless yet varying cycle of light and dark caused by the rotation and tilt of our planet. They drive primal physiological, neural and behavioural systems according to a roughly 24-hour cycle, otherwise known as our circadian rhythm, affecting our moods, desires, appetites, sleep patterns, and sense of the passage of time.
The Romans, Greeks and Incas woke up without iPhone alarms or digital radio clocks. Nature was their timekeeper: the rise of the sun, the dawn chorus, the needs of the field or livestock. Sundials and hourglasses recorded the passage of time until the 14th century when the first mechanical clocks were erected on churches and monasteries. By the 1800s, mechanical timepieces were widely worn on neck chains, wrists or lapels; appointments could be made and meal- or bed-times set.
Societies built around industrialisation and clock-time brought with them urgency and the concept of being ‘on time’ or having ‘wasted time’. Clock-time became increasingly out of synch with natural time, yet light and dark still dictated our working day and social structures.
Then, in the late 19th century, everything changed. [Continue reading…]
Researchers announce major advance in image-recognition software
The New York Times reports: Two groups of scientists, working independently, have created artificial intelligence software capable of recognizing and describing the content of photographs and videos with far greater accuracy than ever before, sometimes even mimicking human levels of understanding.
Until now, so-called computer vision has largely been limited to recognizing individual objects. The new software, described on Monday by researchers at Google and at Stanford University, teaches itself to identify entire scenes: a group of young men playing Frisbee, for example, or a herd of elephants marching on a grassy plain.
The software then writes a caption in English describing the picture. Compared with human observations, the researchers found, the computer-written descriptions are surprisingly accurate.
The advances may make it possible to better catalog and search for the billions of images and hours of video available online, which are often poorly described and archived. At the moment, search engines like Google rely largely on written language accompanying an image or video to ascertain what it contains. [Continue reading…]
Why moral character is the key to personal identity
Nina Strohminger writes: One morning after her accident, a woman I’ll call Kate awoke in a daze. She looked at the man next to her in bed. He resembled her husband, with the same coppery beard and freckles dusted across his shoulders. But this man was definitely not her husband.
Panicked, she packed a small bag and headed to her psychiatrist’s office. On the bus, there was a man she had been encountering with increasing frequency over the past several weeks. The man was clever, he was a spy. He always appeared in a different form: one day as a little girl in a sundress, another time as a bike courier who smirked at her knowingly. She explained these bizarre developments to her doctor, who was quickly becoming one of the last voices in this world she could trust. But as he spoke, her stomach sank with a dreaded realisation: this man, too, was an impostor.
Kate has Capgras syndrome, the unshakeable belief that someone – often a loved one, sometimes oneself – has been replaced with an exact replica. She also has Fregoli syndrome, the delusion that the same person is taking on a variety of shapes, like an actor donning an expert disguise. Capgras and Fregoli delusions offer hints about an extraordinary cognitive mechanism active in the healthy mind, a mechanism so exquisitely tuned that we are hardly ever aware of it. This mechanism ascribes to each person a unique identity, and then meticulously tracks and updates it. This mechanism is crucial to virtually every human interaction, from navigating a party to navigating a marriage. Without it, we quickly fall apart. [Continue reading…]
Wonder and the ends of inquiry
Lorraine Daston writes: Science and wonder have a long and ambivalent relationship. Wonder is a spur to scientific inquiry but also a reproach and even an inhibition to inquiry. As philosophers never tire of repeating, only those ignorant of the causes of things wonder: the solar eclipse that terrifies illiterate peasants is no wonder to the learned astronomer who can explain and predict it. Romantic poets accused science of not just neutralizing wonder but of actually killing it. Modern popularizations of science make much of wonder — but expressions of that passion are notably absent in professional publications. This love-hate relationship between wonder and science started with science itself.
Wonder always comes at the beginning of inquiry. “For it is owing to their wonder that men both now begin and at first began to philosophize,” explains Aristotle; Descartes made wonder “the first of the passions,” and the only one without a contrary, opposing passion. In these and many other accounts of wonder, both soul and senses are ambushed by a puzzle or a surprise, something that catches us unawares and unprepared. Wonder widens the eyes, opens the mouth, stops the heart, freezes thought. Above all, at least in classical accounts like those of Aristotle and Descartes, wonder both diagnoses and cures ignorance. It reveals that there are more things in heaven and earth than have been dreamt of in our philosophy; ideally, it also spurs us on to find an explanation for the marvel.
Therein lies the paradox of wonder: it is the beginning of inquiry (Descartes remarks that people deficient in wonder “are ordinarily quite ignorant”), but the end of inquiry also puts an end to wonder. [Continue reading…]
Review: Trouble in Paradise and Absolute Recoil by Slavoj Žižek
Terry Eagleton writes: It is said that Jean-Paul Sartre turned white-faced with excitement when a colleague arrived hotfoot from Germany with the news that one could make philosophy out of the ashtray. In these two new books, Slavoj Žižek philosophises in much the same spirit about sex, swearing, decaffeinated coffee, vampires, Henry Kissinger, The Sound of Music, the Muslim Brotherhood, the South Korean suicide rate and a good deal more. If there seems no end to his intellectual promiscuity, it is because he suffers from a rare affliction known as being interested in everything. In Britain, philosophers tend to divide between academics who write for each other and meaning-of-life merchants who beam their reflections at the general public. Part of Žižek’s secret is that he is both at once: a formidably erudite scholar well-versed in Kant and Heidegger who also has a consuming passion for the everyday. He is equally at home with Hegel and Hitchcock, the Fall from Eden and the fall of Mubarak. If he knows about Wagner and Schoenberg, he is also an avid consumer of vampire movies and detective fiction. A lot of his readers have learned to understand Freud or Nietzsche by viewing them through the lens of Jaws or Mary Poppins.
Academic philosophers can be obscure, whereas popularisers aim to be clear. With his urge to dismantle oppositions, Žižek has it both ways here. If some of his ideas can be hard to digest, his style is a model of lucidity. Absolute Recoil is full of intractable stuff, but Trouble in Paradise reports on the political situation in Egypt, China, Korea, Ukraine and the world in general in a crisp, well-crafted prose that any newspaper should be proud to publish. Not that, given Žižek’s provocatively political opinions, many of them would. He sees the world as divided between liberal capitalism and fundamentalism – in other words, between those who believe too little and those who believe too much. Instead of taking sides, however, he stresses the secret complicity between the two camps. [Continue reading…]
Societies in harsh environments more likely to believe in moralizing high gods
EurekAlert!: Just as physical adaptations help populations prosper in inhospitable habitats, belief in moralizing, high gods might be similarly advantageous for human cultures in poorer environments. A new study from the National Evolutionary Synthesis Center (NESCent) suggests that societies with less access to food and water are more likely to believe in these types of deities.
“When life is tough or when it’s uncertain, people believe in big gods,” says Russell Gray, a professor at the University of Auckland and a founding director of the Max Planck Institute for the Science of Human History in Jena, Germany. “Prosocial behavior maybe helps people do well in harsh or unpredictable environments.”
Gray and his coauthors found a strong correlation between belief in high gods who enforce a moral code and other societal characteristics. Political complexity – namely, a social hierarchy beyond the local community – and the practice of animal husbandry were both strongly associated with a belief in moralizing gods.
The emergence of religion has long been explained as a result of either cultural or environmental factors, but not both. The new findings imply that complex practices and characteristics thought to be exclusive to humans arise from a medley of ecological, historical, and cultural variables.
“When researchers discuss the forces that shaped human history, there is considerable disagreement as to whether our behavior is primarily determined by culture or by the environment,” says primary author Carlos Botero, a researcher at the Initiative for Biological Complexity at North Carolina State University. “We wanted to throw away all preconceived notions regarding these processes and look at all the potential drivers together to see how different aspects of the human experience may have contributed to the behavioral patterns we see today.” [Continue reading…]
Mirror neurons may reveal more about neurons than they do about people
Jason G. Goldman writes: In his 2011 book, The Tell-Tale Brain, neuroscientist V. S. Ramachandran says that some of the cells in your brain are of a special variety. He calls them the “neurons that built civilization,” but you might know them as mirror neurons. They’ve been implicated in just about everything from the development of empathy in earlier primates, millions of years ago, to the emergence of complex culture in our species.
Ramachandran says that mirror neurons help explain the things that make us so apparently unique: tool use, cooking with fire, using complex language to communicate.
It’s an inherently seductive idea: that one small tweak to a particular set of brain cells could have transformed an early primate into something that was somehow more. Indeed, experimental psychologist Cecilia Heyes wrote in 2010 (pdf), “[mirror neurons] intrigue both specialists and non-specialists, celebrated as a ‘revolution’ in understanding social behaviour and ‘the driving force’ behind ‘the great leap forward’ in human evolution.”
The story of mirror neurons begins in the 1990s at the University of Parma in Italy. A group of neuroscientists were studying rhesus monkeys by implanting small electrodes in their brains, and they found that some cells exhibited a curious kind of behavior. They fired both when the monkey executed a movement, such as grasping a banana, and also when the monkey watched the experimenter execute that very same movement.
It was immediately an exciting find. These neurons were located in a part of the brain thought solely responsible for sending motor commands out from the brain, through the brainstem to the spine, and out to the nerves that control the body’s muscles. This finding suggested that they’re not just used for executing actions, but are somehow involved in understanding the observed actions of others.
After that came a flood of research connecting mirror neurons to the development of empathy, autism, language, tool use, fire, and more. Psychologist and science writer Christian Jarrett has twice referred to mirror neurons as “the most hyped concept in neuroscience.” Is he right? Where does empirical evidence end and overheated speculation begin? [Continue reading…]
Gossip makes human society possible
Julie Beck writes: While gossiping is a behavior that has long been frowned upon, perhaps no one has frowned quite so intensely as the 16th- and 17th-century British. Back then, gossips, or “scolds,” were sometimes forced to wear a menacing iron cage on their heads, called the “branks” or “scold’s bridle.” These masks purportedly had iron spikes or bits that went in the mouth and prevented the wearer from speaking. (And of course, of course, this ghastly punishment seems to have been mostly for women who were talking too much.)
Today, people who gossip are still not very well-liked, though we tend to resist the urge to cage their heads. Progress. And yet the reflexive distaste people feel for gossip and those who gossip in general is often nowhere to be found when people find themselves actually faced with a juicy morsel about someone they know. Social topics—personal relationships, likes and dislikes, anecdotes about social activities—made up about two-thirds of all conversations in analyses done by evolutionary psychologist Robin Dunbar. The remaining one-third of their time not spent talking about other people was devoted to discussing everything else: sports, music, politics, etc.
“Language in freely forming natural conversations is principally used for the exchange of social information,” Dunbar writes. “That such topics are so overwhelmingly important to us suggests that this is a primary function of language.” He even goes so far as to say: “Gossip is what makes human society as we know it possible.”
In recent years, research on the positive effects of gossip has proliferated. Rather than just a means to humiliate people and make them cry in the bathroom, gossip is now being considered by scientists as a way to learn about cultural norms, bond with others, promote cooperation, and even, as one recent study found, allow individuals to gauge their own success and social standing. [Continue reading…]