Robert Macfarlane writes: Eight years ago, in the coastal township of Shawbost on the Outer Hebridean island of Lewis, I was given an extraordinary document. It was entitled “Some Lewis Moorland Terms: A Peat Glossary”, and it listed Gaelic words and phrases for aspects of the tawny moorland that fills Lewis’s interior. Reading the glossary, I was amazed by the compressive elegance of its lexis, and its capacity for fine discrimination: a caochan, for instance, is “a slender moor-stream obscured by vegetation such that it is virtually hidden from sight”, while a feadan is “a small stream running from a moorland loch”, and a fèith is “a fine vein-like watercourse running through peat, often dry in the summer”. Other terms were striking for their visual poetry: rionnach maoim means “the shadows cast on the moorland by clouds moving across the sky on a bright and windy day”; èit refers to “the practice of placing quartz stones in streams so that they sparkle in moonlight and thereby attract salmon to them in the late summer and autumn”, and teine biorach is “the flame or will-o’-the-wisp that runs on top of heather when the moor burns during the summer”.
The “Peat Glossary” set my head a-whirr with wonder-words. It ran to several pages and more than 120 terms – and as that modest “Some” in its title acknowledged, it was incomplete. “There’s so much language to be added to it,” one of its compilers, Anne Campbell, told me. “It represents only three villages’ worth of words. I have a friend from South Uist who said her grandmother would add dozens to it. Every village in the upper islands would have its different phrases to contribute.” I thought of Norman MacCaig’s great Hebridean poem “By the Graveyard, Luskentyre”, where he imagines creating a dictionary out of the language of Donnie, a lobster fisherman from the Isle of Harris. It would be an impossible book, MacCaig concluded:
A volume thick as the height of the Clisham,
A volume big as the whole of Harris,
A volume beyond the wit of scholars.
The same summer I was on Lewis, a new edition of the Oxford Junior Dictionary was published. A sharp-eyed reader noticed that there had been a culling of words concerning nature. Under pressure, Oxford University Press revealed a list of the entries it no longer felt to be relevant to a modern-day childhood. The deletions included acorn, adder, ash, beech, bluebell, buttercup, catkin, conker, cowslip, cygnet, dandelion, fern, hazel, heather, heron, ivy, kingfisher, lark, mistletoe, nectar, newt, otter, pasture and willow. The words taking their places in the new edition included attachment, block-graph, blog, broadband, bullet-point, celebrity, chatroom, committee, cut-and-paste, MP3 player and voice-mail. As I had been entranced by the language preserved in the prose‑poem of the “Peat Glossary”, so I was dismayed by the language that had fallen (been pushed) from the dictionary. For blackberry, read Blackberry.
I have long been fascinated by the relations of language and landscape – by the power of strong style and single words to shape our senses of place. And it has become a habit, while travelling in Britain and Ireland, to note down place words as I encounter them: terms for particular aspects of terrain, elements, light and creaturely life, or resonant place names. I’ve scribbled these words in the backs of notebooks, or jotted them down on scraps of paper. Usually, I’ve gleaned them singly from conversations, maps or books. Now and then I’ve hit buried treasure in the form of vernacular word-lists or remarkable people – troves that have held gleaming handfuls of coinages, like the Lewisian “Peat Glossary”.
Not long after returning from Lewis, and spurred on by the Oxford deletions, I resolved to put my word-collecting on a more active footing, and to build up my own glossaries of place words. It seemed to me then that although we have fabulous compendia of flora, fauna and insects (Richard Mabey’s Flora Britannica and Mark Cocker’s Birds Britannica chief among them), we lack a Terra Britannica, as it were: a gathering of terms for the land and its weathers – terms used by crofters, fishermen, farmers, sailors, scientists, miners, climbers, soldiers, shepherds, poets, walkers and unrecorded others for whom particularised ways of describing place have been vital to everyday practice and perception. It seemed, too, that it might be worth assembling some of this terrifically fine-grained vocabulary – and releasing it back into imaginative circulation, as a way to rewild our language. I wanted to answer Norman MacCaig’s entreaty in his Luskentyre poem: “Scholars, I plead with you, / Where are your dictionaries of the wind … ?” [Continue reading…]
Scientific American reports: People are fascinated by the intelligence of animals. In fact, cave paintings dating back some 40,000 years suggest that we have long harbored keen interest in animal behavior and cognition. Part of that interest may have been practical: animals can be dangerous, they can be sources of food and clothing, and they can serve as sentries or mousers.
But another part of that fascination is purely theoretical. Because animals resemble us in form, perhaps they also resemble us in thought. For many philosophers — including René Descartes and John Locke — granting intelligence to animals was a bridge too far. They deemed abstract reasoning, especially, to be uniquely human, perfectly distinguishing people from “brutes.” Why? Because animals do not speak, they reasoned, they must have no thoughts.
Nevertheless, undeterred by such pessimistic pronouncements, informed by Darwin’s theory of evolution, and guided by the maxim that “actions speak more loudly than words,” researchers today are fashioning powerful behavioral tests that provide nonverbal ways for animals to disclose their intelligence to us. Although animals may not use words, their behavior may serve as a suitable substitute; its study may allow us to jettison the stale convention that thought without language is impossible. [Continue reading…]
Stassa Edwards writes: In his Apology for Raymond Sebond (1576), Michel de Montaigne ascribed animals’ silence to man’s own wilful arrogance. The French essayist argued that animals could speak, that they were in possession of rich consciousness, but that man wouldn’t condescend to listen. ‘It is through the vanity of the same imagination that [man] equates himself with God,’ Montaigne wrote, ‘that he attributes divine attributes for himself, picks himself out and separates himself from the crowd of other creatures.’ Montaigne asked: ‘When I play with my cat, who knows if she is making more of a pastime of me than I of her?’
Montaigne’s question is as playful as his cat. Apology is not meant to answer the age-old question, but rather to provoke; to tap into an unending inquiry about the reasoning of animals. Perhaps, Montaigne implies, we simply misunderstand the foreign language of animals, and the ignorance is not theirs, but ours.
Montaigne’s position was a radical one – the idea that animals could actually speak to humans was decidedly anti-anthropocentric – and when he looked around for like-minded thinkers, he found himself a solitary essayist. But if Montaigne was a 16th century loner, then he could appeal to the Classics. Apology is littered with references to Pliny and a particular appeal to Plato’s account of the Golden Age under Saturn. But even there, Montaigne had little to work with. Aristotle had argued that animals lacked logos (meaning, literally, ‘word’ but also ‘reason’) and, therefore, had no sense of the philosophical world inhabited and animated by humans. And a few decades after Montaigne, the French philosopher René Descartes delivered the final blow, arguing that the uniqueness of man stems from his ownership of reason, which animals are incapable of possessing, and which grants him dominion over them.
Everyone knows what it’s like to forget someone’s name. It could be the name of a celebrity, one we have no real need to remember, and yet, as though finding the name might be an antidote to looming senility, it’s hard to let go of the compulsion until it is satisfied.
From infancy we are taught that success in life requires an unceasing commitment to colonize the world with language. To be lost for words is to be left out.
Without the ability to speak or understand, we would lose our most vital connection with the rest of humanity.
Montaigne understood that it was a human conceit to imagine that among all creatures, we were the only ones endowed with the capacity to communicate:
Can there be a more formall and better ordained policie, divided into so severall charges and offices, more constantly entertained, and better maintained, than that of Bees? Shall we imagine their so orderly disposing of their actions, and managing of their vocations, have so proportioned and formall a conduct without discourse, reason, and forecast?
What Montaigne logically inferred in the 1500s, science would confirm centuries later.
While Stassa Edwards enumerates the many expressions of a human desire for animals to speak, my sense is that behind this desire there is an intuition about the limitations of language: that our mute companions often see more because they can say less.
We view language as a prism that allows us to perceive order in the world, and yet this facility for representation is so successful and so elegantly structured that most of the time we see the representations much more clearly than we see the world.
Our ability to describe and analyze the world has never been more advanced than it is today and yet for millennia, humans have observed that animals seem to be able to do something that we cannot: anticipate earthquakes.
Perhaps our word-constructed world only holds together on condition that our senses remain dull.
The world we imagine we can describe, quantify, and control, is in truth a world we barely understand.
Nick Bilton writes: According to a producer in Hollywood, people have been staying clear of email and opting for cellphones over the past two weeks as studios have been bolstering firewalls and email systems. “Everyone has been doing business on their cellphone since this happened,” the person said, asking not to be named. “The reality is, every studio has emails in their system that would cause the [same] chaos if they came out.”
Or as Jenni Konner, a writer and executive producer for HBO’s “Girls,” said on Twitter Tuesday night: “The worst thing about the Sony hacks is people using the phone again.”
It’s not only people in Hollywood who are picking up the phone again for fear of an email hack.
For the rest of us, the Sony hacking is just another example of how our emails are highly insecure. “Don’t put anything in an email that you wouldn’t want to see on the front page of The New York Times the next day,” said Brian Krebs, who specializes in cybercrime and operates the website Krebs on Security. “It’s like putting a postcard in the mail.”
“And you can’t unsay anything you’ve said on the Internet,” Mr. Krebs added.
What’s so terrible about having to use the phone?
I know — it requires that massively inconvenient social accommodation of having people share time.
Nowadays everyone thinks they should be able to control their own time without engaging in submissive forms of behavior like answering phone calls.
Text allows people to connect without sharing space or time.
The sacrifice, however, is that text lacks the fluidity of speech, in which what is said can instantly be modified, modulated and shaped within the flow of conversation.
Instead of bemoaning the inconvenience of talking, maybe it’s time for everyone to reacquaint themselves with its value.
Scientific American reports: Songbirds stutter, babble when young, become mute if parts of their brains are damaged, learn how to sing from their elders and can even be “bilingual” — in other words, songbirds’ vocalizations share a lot of traits with human speech. However, that similarity goes beyond behavior, researchers have found. Even though humans and birds are separated by millions of years of evolution, the genes that give us our ability to learn speech have much in common with those that lend birds their warble.
A four-year-long effort involving more than 100 researchers around the world put the power of nine supercomputers into analyzing the genomes of 48 species of birds. The results, published this week in a package of eight articles in Science and 20 papers in other journals, provide the most complete picture of the bird family tree thus far. The project has also uncovered genetic signatures in song-learning bird brains that bear surprising similarities to the genetics of speech in humans, a finding that could help scientists study human speech.
The analysis suggests that most modern birds arose in an impressive speciation event, a “big bang” of avian diversification, in the 10 million years immediately following the extinction of dinosaurs. This period is more recent than posited in previous genetic analyses, but it lines up with the fossil record. By delving deeper into the rich data set, research groups identified when birds lost their teeth, investigated the relatively slow evolution of crocodiles and outlined the similarities between birds’ and humans’ vocal learning ability, among other findings. [Continue reading…]
Vyvyan Evans writes: Imagine you’re a traveller in a strange land. A local approaches you and starts jabbering away in an unfamiliar language. He seems earnest, and is pointing off somewhere. But you can’t decipher the words, no matter how hard you try.
That’s pretty much the position of a young child when she first encounters language. In fact, she would seem to be in an even more challenging position. Not only is her world full of ceaseless gobbledygook; unlike our hypothetical traveller, she isn’t even aware that these people are attempting to communicate. And yet, by the age of four, every cognitively normal child on the planet has been transformed into a linguistic genius: this before formal schooling, before they can ride bicycles, tie their own shoelaces or do rudimentary addition and subtraction. It seems like a miracle. The task of explaining this miracle has been, arguably, the central concern of the scientific study of language for more than 50 years.
In the 1960s, the US linguist and philosopher Noam Chomsky offered what looked like a solution. He argued that children don’t in fact learn their mother tongue – or at least, not right down to the grammatical building blocks (the whole process was far too quick and painless for that). He concluded that they must be born with a rudimentary body of grammatical knowledge – a ‘Universal Grammar’ – written into the human DNA. With this hard-wired predisposition for language, it should be a relatively trivial matter to pick up the superficial differences between, say, English and French. The process works because infants have an instinct for language: a grammatical toolkit that works on all languages the world over.
At a stroke, this device removes the pain of learning one’s mother tongue, and explains how a child can pick up a native language in such a short time. It’s brilliant. Chomsky’s idea dominated the science of language for four decades. And yet it turns out to be a myth. A welter of new evidence has emerged over the past few years, demonstrating that Chomsky is plain wrong. [Continue reading…]
Julie Beck writes: While gossiping is a behavior that has long been frowned upon, perhaps no one has frowned quite so intensely as the 16th- and 17th-century British. Back then, gossips, or “scolds,” were sometimes forced to wear a menacing iron cage on their heads, called the “branks” or “scold’s bridle.” These masks purportedly had iron spikes or bits that went in the mouth and prevented the wearer from speaking. (And of course, of course, this ghastly punishment seems to have been mostly for women who were talking too much.)
Today, people who gossip are still not very well-liked, though we tend to resist the urge to cage their heads. Progress. And yet the reflexive distaste people feel for gossip and those who gossip in general is often nowhere to be found when people find themselves actually faced with a juicy morsel about someone they know. Social topics—personal relationships, likes and dislikes, anecdotes about social activities—made up about two-thirds of all conversations in analyses done by the evolutionary psychologist Robin Dunbar. The remaining third was devoted to everything else: sports, music, politics, etc.
“Language in freely forming natural conversations is principally used for the exchange of social information,” Dunbar writes. “That such topics are so overwhelmingly important to us suggests that this is a primary function of language.” He even goes so far as to say: “Gossip is what makes human society as we know it possible.”
In recent years, research on the positive effects of gossip has proliferated. Rather than just a means to humiliate people and make them cry in the bathroom, gossip is now being considered by scientists as a way to learn about cultural norms, bond with others, promote cooperation, and even, as one recent study found, allow individuals to gauge their own success and social standing. [Continue reading…]
Jillian Hinchliffe and Seth Frey write: Although [Stephen] Booth is now retired [from the University of California, Berkeley], his work [on Shakespeare] couldn’t be more relevant. In the study of the human mind, old disciplinary boundaries have begun to dissolve and fruitful new relationships between the sciences and humanities have sprung up in their place. When it comes to the cognitive science of language, Booth may be the most prescient literary critic who ever put pen to paper. In his fieldwork in poetic experience, he unwittingly anticipated several language-processing phenomena that cognitive scientists have only recently begun to study. Booth’s work not only provides one of the most original and penetrating looks into the nature of Shakespeare’s genius, it has profound implications for understanding the processes that shape how we think.
Until the early decades of the 20th century, Shakespeare criticism fell primarily into two areas: textual, which grapples with the numerous variants of published works in order to produce an edition as close as possible to the original, and biographical. Scholarship took a more political turn beginning in the 1960s, providing new perspectives from various strains of feminist, Marxist, structuralist, and queer theory. Booth is resolutely dismissive of most of these modes of study. What he cares about is poetics. Specifically, how poetic language operates on and in audiences of a literary work.
Close reading, the school that flourished mid-century and with which Booth’s work is most nearly affiliated, has never gone completely out of style. But Booth’s approach is even more minute—microscopic reading, according to fellow Shakespeare scholar Russ McDonald. And as the microscope opens up new worlds, so does Booth’s critical lens. What makes him radically different from his predecessors is that he doesn’t try to resolve or collapse his readings into any single interpretation. That people are so hung up on interpretation, on meaning, Booth maintains, is “no more than habit.” Instead, he revels in the uncertainty caused by the myriad currents of phonetic, semantic, and ideational patterns at play. [Continue reading…]
The player kicked the ball.
The patient kicked the habit.
The villain kicked the bucket.
The verbs are the same. The syntax is identical. Does the brain notice, or care, that the first is literal, the second metaphorical, the third idiomatic?
It sounds like a question that only a linguist could love. But neuroscientists have been trying to answer it using exotic brain-scanning technologies. Their findings have varied wildly, in some cases contradicting one another. If they make progress, the payoff will be big. Their findings will enrich a theory that aims to explain how wet masses of neurons can understand anything at all. And they may drive a stake into the widespread assumption that computers will inevitably become conscious in a humanlike way.
The hypothesis driving their work is that metaphor is central to language. Metaphor used to be thought of as merely poetic ornamentation, aesthetically pretty but otherwise irrelevant. “Love is a rose, but you better not pick it,” sang Neil Young in 1977, riffing on the timeworn comparison between a sexual partner and a pollinating perennial. For centuries, metaphor was just the place where poets went to show off.
But in their 1980 book, Metaphors We Live By, the linguist George Lakoff (at the University of California at Berkeley) and the philosopher Mark Johnson (now at the University of Oregon) revolutionized linguistics by showing that metaphor is actually a fundamental constituent of language. For example, they showed that in the seemingly literal statement “He’s out of sight,” the visual field is metaphorized as a container that holds things. The visual field isn’t really a container, of course; one simply sees objects or not. But the container metaphor is so ubiquitous that it wasn’t even recognized as a metaphor until Lakoff and Johnson pointed it out.
From such examples they argued that ordinary language is saturated with metaphors. Our eyes point to where we’re going, so we tend to speak of future time as being “ahead” of us. When things increase, they tend to go up relative to us, so we tend to speak of stocks “rising” instead of getting more expensive. “Our ordinary conceptual system is fundamentally metaphorical in nature,” they wrote. [Continue reading…]
Jonathan Gottschall: There’s a big question about what it is that makes people people. What is it that most sets our species apart from every other species? That’s the debate that I’ve been involved in lately.
When we call the species Homo sapiens, that’s an argument in the debate. It’s an argument that it is our sapience, our wisdom, our intelligence, or our big brains that most sets our species apart. Other scientists, other philosophers have pointed out that, no, a lot of the time we’re really not behaving all that rationally and reasonably. It’s our upright posture that sets us apart, or it’s our opposable thumb that allows us to do this incredible tool use, or it’s our cultural sophistication, or it’s the sophistication of language, and so on and so forth. I’m not arguing against any of those things, I’m just arguing that one thing of equal stature has typically been left off of this list, and that’s the way that people live their lives inside stories.
We live in stories all day long—fiction stories, novels, TV shows, films, interactive video games. We daydream in stories all day long. Estimates suggest we just do this for hours and hours per day — making up these little fantasies in our heads, these little fictions in our heads. We go to sleep at night to rest; the body rests, but not the brain. The brain stays up at night. What is it doing? It’s telling itself stories for about two hours per night. That adds up to eight or ten years of a lifetime spent composing these little vivid stories in the theaters of our minds.
I’m not here to downplay any of those other entries into the “what makes us special” sweepstakes. I’m just here to say that one thing that has been left off the list is storytelling. We live our lives in stories, and it’s sort of mysterious that we do this. We’re not really sure why we do this. It’s one of these questions — storytelling — that falls in the gap between the sciences and the humanities. You have this division into two cultures: the science people over here in their buildings, and the humanities people over there in theirs. The humanities people are writing in their own journals and publishing their own book series, and the scientists are doing the same thing.
You have this division, and you have all this area in between the sciences and the humanities that no one is colonizing. There are all these questions in the borderlands between these disciplines that are rich and relatively unexplored. One of them is storytelling and it’s one of these questions that humanities people aren’t going to be able to figure out on their own because they don’t have a scientific toolkit that will help them gradually, painstakingly narrow down the field of competing ideas. The science people don’t really see these questions about storytelling as in their jurisdiction: “This belongs to someone else, this is the humanities’ territory, we don’t know anything about it.”
What is needed is fusion — people bringing together methods, ideas, approaches from scholarship and from the sciences to try to answer some of these questions about storytelling. Humans are addicted to stories, and they play an enormous role in human life and yet we know very, very little about this subject. [Continue reading… or watch a video of Gottschall’s talk.]
No, a ‘supercomputer’ did NOT pass the Turing Test for the first time and everyone should know better
Following numerous “reports” (i.e. numerous regurgitations of a press release from Reading University) on an “historic milestone in artificial intelligence” having been passed “for the very first time by supercomputer Eugene Goostman” at an event organized by Professor Kevin Warwick, Mike Masnick writes:
If you’ve spent any time at all in the tech world, you should automatically have red flags raised around that name. Warwick is somewhat infamous for his ridiculous claims to the press, which gullible reporters repeat without question. He’s been doing it for decades. All the way back in 2000, we were writing about all the ridiculous press he got for claiming to be the world’s first “cyborg” for implanting a chip in his arm. There was even a — since taken down — Kevin Warwick Watch website that mocked and categorized all of his media appearances in which gullible reporters simply repeated all of his nutty claims. Warwick had gone quiet for a while, but back in 2010, we wrote about how his lab was getting bogus press for claiming to have “the first human infected with a computer virus.” The Register has rightly referred to Warwick as both “Captain Cyborg” and a “media strumpet” and has long been chronicling his escapades in exaggerating bogus stories about the intersection of humans and computers.
Basically, any reporter should view extraordinary claims associated with Warwick with extreme caution. But that’s not what happened at all. Instead, as is all too typical with Warwick claims, the press went nutty over it, including publications that should know better.
Anyone can try having a “conversation” with Eugene Goostman.
If the strings of words it spits out give you the impression you’re talking to a human being, that’s probably an indication that you don’t spend enough time talking to human beings.
University of New England, Australia: We humans like to think of ourselves as unique for many reasons, not least of which is our ability to communicate with words. But ground-breaking research by an expert from the University of New England shows that our ‘misunderstood cousins,’ the Neanderthals, may well have spoken in languages not dissimilar to the ones we use today.
Pinpointing the origin and evolution of speech and human language is one of the longest running and most hotly debated topics in the scientific world. It has long been believed that other beings, including the Neanderthals with whom our ancestors shared the Earth for thousands of years, simply lacked the necessary cognitive capacity and vocal hardware for speech.
Associate Professor Stephen Wroe, a zoologist and palaeontologist from UNE, working with an international team of scientists and using 3D x-ray imaging technology, made the revolutionary discovery challenging this notion, based on a 60,000-year-old Neanderthal hyoid bone discovered in Israel in 1989.
“To many, the Neanderthal hyoid discovered was surprising because its shape was very different to that of our closest living relatives, the chimpanzee and the bonobo. However, it was virtually indistinguishable from that of our own species. This led to some people arguing that this Neanderthal could speak,” A/Professor Wroe said.
“The obvious counterargument to this assertion was that the fact that hyoids of Neanderthals were the same shape as modern humans doesn’t necessarily mean that they were used in the same way. With the technology of the time, it was hard to verify the argument one way or the other.”
However, advances in 3D imaging and computer modelling allowed A/Professor Wroe’s team to revisit the question.
“By analysing the mechanical behaviour of the fossilised bone with micro x-ray imaging, we were able to build models of the hyoid that included the intricate internal structure of the bone. We then compared them to models of modern humans. Our comparisons showed that in terms of mechanical behaviour, the Neanderthal hyoid was basically indistinguishable from our own, strongly suggesting that this key part of the vocal tract was used in the same way.
“From this research, we can conclude that it’s likely that the origins of speech and language are far, far older than once thought.”
The ability to discern the emotions of others provides the foundation for emotional intelligence. How well developed this faculty is seems to have little to do with the strength of other markers of intelligence; indeed, as a new study seems to imply, there may be little reason to see in emotional intelligence much that is uniquely human.
Scientific American: [A]lthough dogs have the capacity to understand more than 100 words, studies have demonstrated Fido can’t really speak human languages or comprehend them with the same complexity that we do. Yet researchers have now discovered that dog and human brains process the vocalizations and emotions of others more similarly than previously thought. The findings suggest that although dogs cannot discuss relativity theory with us, they do seem to be wired in a way that helps them to grasp what we feel by attending to the sounds we make.
To compare active human and dog brains, postdoctoral researcher Attila Andics and his team from MTA-ELTE Comparative Ethology Research Group in Hungary trained 11 dogs to lie still in an fMRI brain scanner for several six-minute intervals so that the researchers could perform the same experiment on both human and canine participants. Both groups listened to almost two hundred dog and human sounds — from whining and crying to laughter and playful barking — while the team scanned their brain activity.
The resulting study, published in Current Biology today, reveals both that dog brains have voice-sensitive regions and that these neurological areas resemble those of humans. Sharing similar locations in both species, they process voices and emotions of other individuals similarly. Both groups respond with greater neural activity when they listen to voices reflecting positive emotions such as laughing than to negative sounds that include crying or whining. Dogs and people, however, respond more strongly to the sounds made by their own species. “Dogs and humans meet in a very similar social environment but we didn’t know before just how similar the brain mechanisms are to process this social information,” Andics says. [Continue reading…]
Kat McGowan writes: Up in the northern Sierra Nevada, the ecologist Richard Karban is trying to learn an alien language. The sagebrush plants that dot these slopes speak to one another, using words no human knows. Karban, who teaches at the University of California, Davis, is listening in, and he’s beginning to understand what they say.
The evidence for plant communication is only a few decades old, but in that short time it has leapfrogged from electrifying discovery to decisive debunking to resurrection. Two studies published in 1983 demonstrated that willow trees, poplars and sugar maples can warn each other about insect attacks: Intact, undamaged trees near ones that are infested with hungry bugs begin pumping out bug-repelling chemicals to ward off attack. They somehow know what their neighbors are experiencing, and react to it. The mind-bending implication was that brainless trees could send, receive and interpret messages.
The first few “talking tree” papers were quickly shot down as statistically flawed or too artificial, irrelevant to the real-world war between plants and bugs. Research ground to a halt. But the science of plant communication is now staging a comeback. Rigorous, carefully controlled experiments are overcoming those early criticisms with repeated testing in labs, forests and fields. It’s now well established that when bugs chew leaves, plants respond by releasing volatile organic compounds into the air. By Karban’s last count, 40 out of 48 studies of plant communication confirm that other plants detect these airborne signals and ramp up their production of chemical weapons or other defense mechanisms in response. “The evidence that plants release volatiles when damaged by herbivores is as sure as something in science can be,” said Martin Heil, an ecologist at the Mexican research institute Cinvestav Irapuato. “The evidence that plants can somehow perceive these volatiles and respond with a defense response is also very good.”
Plant communication may still be a tiny field, but the people who study it are no longer seen as a lunatic fringe. “It used to be that people wouldn’t even talk to you: ‘Why are you wasting my time with something we’ve already debunked?’” said Karban. “That’s now better for sure.” The debate is no longer whether plants can sense one another’s biochemical messages — they can — but about why and how they do it. [Continue reading…]
Queensland Brain Institute: QBI scientists at The University of Queensland have found that honeybees use the pattern of polarised light in the sky, invisible to humans, to direct one another to a honey source.
The study, conducted in Professor Mandyam Srinivasan’s laboratory at the Queensland Brain Institute, a member of the Australian Research Council Centre of Excellence in Vision Science (ACEVS), demonstrated that bees navigate to and from honey sources by reading the pattern of polarised light in the sky.
“The bees tell each other where the nectar is by converting their polarised ‘light map’ into dance movements,” Professor Srinivasan said.
“The more we find out how honeybees make their way around the landscape, the more awed we feel at the elegant way they solve very complicated problems of navigation that would floor most people – and then communicate them to other bees,” he said.
The discovery shines new light on the astonishing navigational and communication skills of an insect with a brain the size of a pinhead.
The researchers allowed bees to fly down a tunnel to a sugar source, shining only polarised light from above, aligned either with the tunnel or at right angles to it.
They then filmed what the bees ‘told’ their peers, by waggling their bodies when they got back to the hive.
“It is well known that bees steer by the sun, adjusting their compass as it moves across the sky, and then convert that information into instructions for other bees by waggling their body to signal the direction of the honey,” Professor Srinivasan said.
“Other laboratories have shown from studying their eyes that bees can see a pattern of polarised light in the sky even when the sun isn’t shining: the big question was could they translate the navigational information it provides into their waggle dance.”
The researchers conclude that even when the sun is not shining, bees can tell one another where to find food by reading and dancing to their polarised sky map.
In addition to revealing how bees perform their remarkable tasks, the study, Professor Srinivasan says, also adds to our understanding of some of the most basic machinery of the brain itself.
Professor Srinivasan’s team conjectures that flight under polarised illumination activates discrete populations of cells in the insect’s brain.
When the polarised light was aligned with the tunnel, one pair of ‘place cells’ – neurons important for spatial navigation – became activated, whereas when the light was oriented across the tunnel a different pair of place cells was activated.
The researchers suggest that depending on which set of cells is activated, the bee can work out if the food source lies in a direction toward or opposite the direction of the sun, or in a direction ninety degrees to the left or right of it.
The study, “Honeybee navigation: critically examining the role of polarization compass”, is published in the 6 January 2014 issue of the Philosophical Transactions of the Royal Society B.
Richard Hamilton writes: My first job was as a lawyer. I was not a very happy or inspired lawyer. One night I was driving home listening to a radio report, and there is something very intimate about radio: a voice comes out of a machine and into the listener’s ear. With rain pounding the windscreen and only the dashboard lights and the stereo for company, I thought to myself, ‘This is what I want to do.’ So I became a radio journalist.
As broadcasters, we are told to imagine speaking to just one person. My tutor at journalism college told me that there is nothing as captivating as the human voice saying something of interest (he added that radio is better than TV because it has the best pictures). We remember where we were when we heard a particular story. Even now when I drive in my car, the memory of a scene from a radio play can be ignited by a bend in a country road or a set of traffic lights in the city.
But potent as radio seems, can a recording device ever fully replicate the experience of listening to a live storyteller? The folklorist Joseph Bruchac thinks not. ‘The presence of teller and audience, and the immediacy of the moment, are not fully captured by any form of technology,’ he wrote in a comment piece for The Guardian in 2010. ‘Unlike the insect frozen in amber, a told story is alive… The story breathes with the teller’s breath.’ And as devoted as I am to radio, my recent research into oral storytelling makes me think that Bruchac may be right. [Continue reading…]
As a child, I was once taken to a small sad zoo near the Yorkshire seaside town of Scarborough. There were only a handful of animals and my attention was quickly drawn by a solitary chimpanzee.
We soon sat face-to-face within arm’s reach, exchanging calls, and became absorbed in what seemed like communication — even if there were no words. In the eyes of another primate we see mirrors of inquiry. Just as much as I wanted to talk to the chimp, it seemed like he wanted to talk to me. His sorrow, like that of all captives, could not be held tight by silence.
The rest of my family eventually tired of my interest in learning how to speak chimpanzee. After all, talking to animals is something that only small children are willing to take seriously. Supposedly, it is just another childish exercise of the imagination — the kind of behavior that we grow out of as we grow older.
This notion of outgrowing a sense of kinship with other creatures implies an upward advance, yet in truth we don’t outgrow these experiences of connection; we simply move away from them. We imagine we are leaving behind something we no longer need, when in fact we are losing something we have forgotten how to appreciate.
Like so many other aspects of maturation, the process through which adults forget their connections to the non-human world involves a dulling of the senses. As we age, we become less alive, less attuned and less receptive to life’s boundless expressions. The insatiable curiosity we had as children slowly withers as the mental constructs that form a known world cut away and displace our passion for exploration.
Within this known and ordered world, the idea that an adult would describe herself as an animal communicator naturally provokes skepticism. Is this a person living in a fantasy world? Or is she engaged in a hoax, cynically exploiting the longings of others, such as the desire to rediscover a lost childhood?
Whether Anna Breytenbach (who features in the video below) can see inside the minds of animals, I have no way of knowing, but that animals have minds and that they can have what we might regard as intensely human experiences — such as the feeling of loss — I have no doubt.
The cultural impact of science, which is often more colored by belief than by reason, suggests that whenever we reflect on the experience of animals we are perpetually at risk of falling into the trap of anthropomorphism. The greater risk, however, is that we unquestioningly accept this assumption: that even if we humans are the culmination of an evolutionary process reaching all the way back to the formation of amino acids, at the apex of this process we somehow stand apart. We can observe the animal kingdom, and yet as humans we have risen above it.
Yet what actually sets us apart in the most significant way is not the collection of attributes that define human uniqueness, but rather the very idea of our separateness — the idea that we are here and nature is out there.
The pursuit of artificial intelligence has been driven by the assumption that if human intelligence can be replicated or advanced upon by machines then this accomplishment will in various ways serve the human good. At the same time, thanks to the technophobia promoted in some dystopian science fiction, there is a popular fear that if machines become smarter than people we will end up becoming their slaves.
It turns out that even if there are some irrational fears wrapped up in technophobia, there are good reasons to regard computing devices as a threat to human intelligence.
It’s not that we are creating machines that harbor evil designs to take over the world, but simply that each time we delegate a function of the brain to an external piece of circuitry, our mental faculties inevitably atrophy.
Use it or lose it applies just as much to the brain as it does to any other part of the body.
Carolyn Gregoire writes: Take a moment to think about the last time you memorized someone’s phone number. Was it way back when, perhaps circa 2001? And when was the last time you were at a dinner party or having a conversation with friends, when you whipped out your smartphone to Google the answer to someone’s question? Probably last week.
Technology changes the way we live our daily lives, the way we learn, and the way we use our faculties of attention — and a growing body of research has suggested that it may have profound effects on our memories (particularly the short-term, or working, memory), altering and in some cases impairing its function.
The implications of a poor working memory for our brain functioning and overall intelligence are difficult to overestimate.
“The depth of our intelligence hinges on our ability to transfer information from working memory, the scratch pad of consciousness, to long-term memory, the mind’s filing system,” Nicholas Carr, author of The Shallows: What The Internet Is Doing To Our Brains, wrote in Wired in 2010. “When facts and experiences enter our long-term memory, we are able to weave them into the complex ideas that give richness to our thought.”
While our long-term memory has a nearly unlimited capacity, short-term memory offers far more limited storage, and that storage is very fragile. “A break in our attention can sweep its contents from our mind,” Carr explains.
Meanwhile, new research has found that taking photos — an increasingly ubiquitous practice in our smartphone-obsessed culture — actually hinders our ability to remember that which we’re capturing on camera.
Concerned about premature memory loss? You probably should be. Here are five things you should know about the way technology is affecting your memory.
1. Information overload makes it harder to retain information.
Even a single session of Internet usage can make it more difficult to file away information in your memory, says Erik Fransén, computer science professor at Sweden’s KTH Royal Institute of Technology. And according to Tony Schwartz, productivity expert and author of The Way We’re Working Isn’t Working, most of us aren’t able to effectively manage the overload of information we’re constantly bombarded with. [Continue reading…]
As I pointed out in a recent post, the externalization of intelligence long preceded the creation of smart phones and personal computers. Indeed, it goes all the way back to the beginning of civilization when we first learned how to transform language into a material form as the written word, thereby creating a substitute for memory.
Plato foresaw the consequences of writing.
In Phaedrus, he describes an exchange between the god Thamus, king and ruler of all Egypt, and the god Theuth, who has invented writing. Theuth, who is very proud of what he has created, says: “This invention, O king, will make the Egyptians wiser and will improve their memories; for it is an elixir of memory and wisdom that I have discovered.” But Thamus points out that while one man has the ability to invent, the ability to judge an invention’s usefulness or harmfulness belongs to another.
If men learn this, it will implant forgetfulness in their souls; they will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks. What you have discovered is a recipe not for memory, but for reminder. And it is no true wisdom that you offer your disciples, but only its semblance, for by telling them of many things without teaching them you will make them seem to know much, while for the most part they know nothing, and as men filled, not with wisdom, but with the conceit of wisdom, they will be a burden to their fellows.
Bedazzled by our ingenuity and its creations, we are fast forgetting the value of this quality that can never be implanted in a machine (or a text): wisdom.