Your brain on metaphors

Michael Chorost writes:

The player kicked the ball.
The patient kicked the habit.
The villain kicked the bucket.

The verbs are the same. The syntax is identical. Does the brain notice, or care, that the first is literal, the second metaphorical, the third idiomatic?

It sounds like a question that only a linguist could love. But neuroscientists have been trying to answer it using exotic brain-scanning technologies. Their findings have varied wildly, in some cases contradicting one another. If they make progress, the payoff will be big. Their findings will enrich a theory that aims to explain how wet masses of neurons can understand anything at all. And they may drive a stake into the widespread assumption that computers will inevitably become conscious in a humanlike way.

The hypothesis driving their work is that metaphor is central to language. Metaphor used to be thought of as merely poetic ornamentation, aesthetically pretty but otherwise irrelevant. “Love is a rose, but you better not pick it,” sang Neil Young in 1977, riffing on the timeworn comparison between a sexual partner and a pollinating perennial. For centuries, metaphor was just the place where poets went to show off.

But in their 1980 book, Metaphors We Live By, the linguist George Lakoff (at the University of California at Berkeley) and the philosopher Mark Johnson (now at the University of Oregon) revolutionized linguistics by showing that metaphor is actually a fundamental constituent of language. For example, they showed that in the seemingly literal statement “He’s out of sight,” the visual field is metaphorized as a container that holds things. The visual field isn’t really a container, of course; one simply sees objects or not. But the container metaphor is so ubiquitous that it wasn’t even recognized as a metaphor until Lakoff and Johnson pointed it out.

From such examples they argued that ordinary language is saturated with metaphors. Our eyes point to where we’re going, so we tend to speak of future time as being “ahead” of us. When things increase, they tend to go up relative to us, so we tend to speak of stocks “rising” instead of getting more expensive. “Our ordinary conceptual system is fundamentally metaphorical in nature,” they wrote. [Continue reading...]


The way we live our lives in stories

Jonathan Gottschall: There’s a big question about what it is that makes people people. What is it that most sets our species apart from every other species? That’s the debate that I’ve been involved in lately.

When we call the species Homo sapiens, that’s an argument in the debate. It’s an argument that it is our sapience, our wisdom, our intelligence, or our big brains that most sets our species apart. Other scientists, other philosophers have pointed out that, no, a lot of the time we’re really not behaving all that rationally and reasonably. It’s our upright posture that sets us apart, or it’s our opposable thumb that allows us to do this incredible tool use, or it’s our cultural sophistication, or it’s the sophistication of language, and so on and so forth. I’m not arguing against any of those things; I’m just arguing that one thing of equal stature has typically been left off of this list, and that’s the way that people live their lives inside stories.

We live in stories all day long—fiction stories, novels, TV shows, films, interactive video games. We daydream in stories all day long. Estimates suggest we just do this for hours and hours per day — making up these little fantasies in our heads, these little fictions in our heads. We go to sleep at night to rest; the body rests, but not the brain. The brain stays up at night. What is it doing? It’s telling itself stories for about two hours per night. That adds up to eight or ten years of our lifetime spent composing these little vivid stories in the theaters of our minds.

I’m not here to downplay any of those other entries into the “what makes us special” sweepstakes. I’m just here to say that one thing that has been left off the list is storytelling. We live our lives in stories, and it’s sort of mysterious that we do this. We’re not really sure why we do this. It’s one of these questions — storytelling — that falls in the gap between the sciences and the humanities. There’s this division into two cultures: you have the science people over here in their buildings, and the humanities people over there in their buildings. The humanities people are writing in their own journals and publishing their own book series, and the scientists are doing the same thing.

You have this division, and you have all this area in between the sciences and the humanities that no one is colonizing. There are all these questions in the borderlands between these disciplines that are rich and relatively unexplored. One of them is storytelling and it’s one of these questions that humanities people aren’t going to be able to figure out on their own because they don’t have a scientific toolkit that will help them gradually, painstakingly narrow down the field of competing ideas. The science people don’t really see these questions about storytelling as in their jurisdiction: “This belongs to someone else, this is the humanities’ territory, we don’t know anything about it.”

What is needed is fusion — people bringing together methods, ideas, approaches from scholarship and from the sciences to try to answer some of these questions about storytelling. Humans are addicted to stories, and they play an enormous role in human life and yet we know very, very little about this subject. [Continue reading... or watch a video of Gottschall's talk.]


No, a ‘supercomputer’ did NOT pass the Turing Test for the first time and everyone should know better

Following numerous “reports” (i.e. numerous regurgitations of a press release from Reading University) on an “historic milestone in artificial intelligence” having been passed “for the very first time by supercomputer Eugene Goostman” at an event organized by Professor Kevin Warwick, Mike Masnick writes:

If you’ve spent any time at all in the tech world, you should automatically have red flags raised around that name. Warwick is somewhat infamous for his ridiculous claims to the press, which gullible reporters repeat without question. He’s been doing it for decades. All the way back in 2000, we were writing about all the ridiculous press he got for claiming to be the world’s first “cyborg” for implanting a chip in his arm. There was even a — since taken down — Kevin Warwick Watch website that mocked and categorized all of his media appearances in which gullible reporters simply repeated all of his nutty claims. Warwick had gone quiet for a while, but back in 2010, we wrote about how his lab was getting bogus press for claiming to have “the first human infected with a computer virus.” The Register has rightly referred to Warwick as both “Captain Cyborg” and a “media strumpet” and has been chronicling his escapades in exaggerating bogus stories about the intersection of humans and computers for many, many years.

Basically, any reporter should view extraordinary claims associated with Warwick with extreme caution. But that’s not what happened at all. Instead, as is all too typical with Warwick claims, the press went nutty over it, including publications that should know better.

Anyone can try having a “conversation” with Eugene Goostman.

If the strings of words it spits out give you the impression you’re talking to a human being, that’s probably an indication that you don’t spend enough time talking to human beings.


Talking Neanderthals challenge assumptions about the origins of speech

University of New England, Australia: We humans like to think of ourselves as unique for many reasons, not least of which is our ability to communicate with words. But ground-breaking research by an expert from the University of New England shows that our ‘misunderstood cousins,’ the Neanderthals, may well have spoken in languages not dissimilar to the ones we use today.

Pinpointing the origin and evolution of speech and human language is one of the longest running and most hotly debated topics in the scientific world. It has long been believed that other beings, including the Neanderthals with whom our ancestors shared the Earth for thousands of years, simply lacked the necessary cognitive capacity and vocal hardware for speech.

Associate Professor Stephen Wroe, a zoologist and palaeontologist from UNE, working with an international team of scientists and using 3D x-ray imaging technology, made the revolutionary discovery that challenges this notion, based on a 60,000-year-old Neanderthal hyoid bone discovered in Israel in 1989.

“To many, the Neanderthal hyoid discovered was surprising because its shape was very different to that of our closest living relatives, the chimpanzee and the bonobo. However, it was virtually indistinguishable from that of our own species. This led to some people arguing that this Neanderthal could speak,” A/Professor Wroe said.

“The obvious counterargument to this assertion was that the fact that hyoids of Neanderthals were the same shape as modern humans doesn’t necessarily mean that they were used in the same way. With the technology of the time, it was hard to verify the argument one way or the other.”

However, advances in 3D imaging and computer modelling allowed A/Professor Wroe’s team to revisit the question.

“By analysing the mechanical behaviour of the fossilised bone with micro x-ray imaging, we were able to build models of the hyoid that included the intricate internal structure of the bone. We then compared them to models of modern humans. Our comparisons showed that in terms of mechanical behaviour, the Neanderthal hyoid was basically indistinguishable from our own, strongly suggesting that this key part of the vocal tract was used in the same way.

“From this research, we can conclude that it’s likely that the origins of speech and language are far, far older than once thought.”


The emotional intelligence of dogs

The ability to discern the emotions of others provides the foundation for emotional intelligence. How well developed this faculty is seems to have little to do with the strength of other markers of intelligence; indeed, as a new study seems to imply, there may be little reason to see in emotional intelligence much that is uniquely human.

Scientific American: [A]lthough dogs have the capacity to understand more than 100 words, studies have demonstrated Fido can’t really speak human languages or comprehend them with the same complexity that we do. Yet researchers have now discovered that dog and human brains process the vocalizations and emotions of others more similarly than previously thought. The findings suggest that although dogs cannot discuss relativity theory with us, they do seem to be wired in a way that helps them to grasp what we feel by attending to the sounds we make.

To compare active human and dog brains, postdoctoral researcher Attila Andics and his team from the MTA-ELTE Comparative Ethology Research Group in Hungary trained 11 dogs to lie still in an fMRI brain scanner for several six-minute intervals so that the researchers could perform the same experiment on both human and canine participants. Both groups listened to almost two hundred dog and human sounds — from whining and crying to laughter and playful barking — while the team scanned their brain activity.

The resulting study, published in Current Biology today, reveals both that dog brains have voice-sensitive regions and that these neurological areas resemble those of humans. Sharing similar locations in both species, they process voices and emotions of other individuals similarly. Both groups respond with greater neural activity when they listen to voices reflecting positive emotions such as laughing than to negative sounds that include crying or whining. Dogs and people, however, respond more strongly to the sounds made by their own species. “Dogs and humans meet in a very similar social environment but we didn’t know before just how similar the brain mechanisms are to process this social information,” Andics says. [Continue reading...]


The secret language of plants

Kat McGowan writes: Up in the northern Sierra Nevada, the ecologist Richard Karban is trying to learn an alien language. The sagebrush plants that dot these slopes speak to one another, using words no human knows. Karban, who teaches at the University of California, Davis, is listening in, and he’s beginning to understand what they say.

The evidence for plant communication is only a few decades old, but in that short time it has leapfrogged from electrifying discovery to decisive debunking to resurrection. Two studies published in 1983 demonstrated that willow trees, poplars and sugar maples can warn each other about insect attacks: Intact, undamaged trees near ones that are infested with hungry bugs begin pumping out bug-repelling chemicals to ward off attack. They somehow know what their neighbors are experiencing, and react to it. The mind-bending implication was that brainless trees could send, receive and interpret messages.

The first few “talking tree” papers quickly were shot down as statistically flawed or too artificial, irrelevant to the real-world war between plants and bugs. Research ground to a halt. But the science of plant communication is now staging a comeback. Rigorous, carefully controlled experiments are overcoming those early criticisms with repeated testing in labs, forests and fields. It’s now well established that when bugs chew leaves, plants respond by releasing volatile organic compounds into the air. By Karban’s last count, 40 out of 48 studies of plant communication confirm that other plants detect these airborne signals and ramp up their production of chemical weapons or other defense mechanisms in response. “The evidence that plants release volatiles when damaged by herbivores is as sure as something in science can be,” said Martin Heil, an ecologist at the Mexican research institute Cinvestav Irapuato. “The evidence that plants can somehow perceive these volatiles and respond with a defense response is also very good.”

Plant communication may still be a tiny field, but the people who study it are no longer seen as a lunatic fringe. “It used to be that people wouldn’t even talk to you: ‘Why are you wasting my time with something we’ve already debunked?’” said Karban. “That’s now better for sure.” The debate is no longer whether plants can sense one another’s biochemical messages — they can — but about why and how they do it. [Continue reading...]


Bees translate polarized light into a navigational dance


Queensland Brain Institute: QBI scientists at The University of Queensland have found that honeybees use the pattern of polarised light in the sky, which is invisible to humans, to direct one another to a honey source.

The study, conducted in Professor Mandyam Srinivasan’s laboratory at the Queensland Brain Institute, a member of the Australian Research Council Centre of Excellence in Vision Science (ACEVS), demonstrated that bees navigate to and from honey sources by reading the pattern of polarised light in the sky.

“The bees tell each other where the nectar is by converting their polarised ‘light map’ into dance movements,” Professor Srinivasan said.

“The more we find out how honeybees make their way around the landscape, the more awed we feel at the elegant way they solve very complicated problems of navigation that would floor most people – and then communicate them to other bees,” he said.

The discovery shines new light on the astonishing navigational and communication skills of an insect with a brain the size of a pinhead.

The researchers allowed bees to fly down a tunnel to a sugar source, shining only polarised light from above, either aligned with the tunnel or at right angles to the tunnel.

They then filmed what the bees ‘told’ their peers, by waggling their bodies when they got back to the hive.

“It is well known that bees steer by the sun, adjusting their compass as it moves across the sky, and then convert that information into instructions for other bees by waggling their body to signal the direction of the honey,” Professor Srinivasan said.

“Other laboratories have shown from studying their eyes that bees can see a pattern of polarised light in the sky even when the sun isn’t shining: the big question was could they translate the navigational information it provides into their waggle dance.”

The researchers conclude that even when the sun is not shining, bees can tell one another where to find food by reading and dancing to their polarised sky map.

Professor Srinivasan says that, in addition to revealing how bees perform their remarkable tasks, the research also adds to our understanding of some of the most basic machinery of the brain itself.

Professor Srinivasan’s team conjectures that flight under polarised illumination activates discrete populations of cells in the insect’s brain.

When the polarised light was aligned with the tunnel, one pair of ‘place cells’ – neurons important for spatial navigation – became activated, whereas when the light was oriented across the tunnel a different pair of place cells was activated.

The researchers suggest that depending on which set of cells is activated, the bee can work out if the food source lies in a direction toward or opposite the direction of the sun, or in a direction ninety degrees to the left or right of it.

The study, “Honeybee navigation: critically examining the role of polarization compass”, is published in the 6 January 2014 issue of the Philosophical Transactions of the Royal Society B.


Stories made present

Richard Hamilton writes: My first job was as a lawyer. I was not a very happy or inspired lawyer. One night I was driving home listening to a radio report, and there is something very intimate about radio: a voice comes out of a machine and into the listener’s ear. With rain pounding the windscreen and only the dashboard lights and the stereo for company, I thought to myself, ‘This is what I want to do.’ So I became a radio journalist.

As broadcasters, we are told to imagine speaking to just one person. My tutor at journalism college told me that there is nothing as captivating as the human voice saying something of interest (he added that radio is better than TV because it has the best pictures). We remember where we were when we heard a particular story. Even now when I drive in my car, the memory of a scene from a radio play can be ignited by a bend in a country road or a set of traffic lights in the city.

But potent as radio seems, can a recording device ever fully replicate the experience of listening to a live storyteller? The folklorist Joseph Bruchac thinks not. ‘The presence of teller and audience, and the immediacy of the moment, are not fully captured by any form of technology,’ he wrote in a comment piece for The Guardian in 2010. ‘Unlike the insect frozen in amber, a told story is alive… The story breathes with the teller’s breath.’ And as devoted as I am to radio, my recent research into oral storytelling makes me think that Bruchac may be right. [Continue reading...]


Talking to animals


As a child, I was once taken to a small sad zoo near the Yorkshire seaside town of Scarborough. There were only a handful of animals and my attention was quickly drawn by a solitary chimpanzee.

We soon sat face-to-face within arm’s reach, exchanging calls and becoming absorbed in what seemed like communication — even if there were no words. Before the eyes of another primate we see mirrors of inquiry. Just as much as I wanted to talk to the chimp, it seemed like he wanted to talk to me. His sorrow, like that of all captives, could not be held tight by silence.

The rest of my family eventually tired of my interest in learning how to speak chimpanzee. After all, talking to animals is something that only small children are willing to take seriously. Supposedly, it is just another childish exercise of the imagination — the kind of behavior that as we grow older we grow out of.

This notion of outgrowing a sense of kinship with other creatures implies an upward advance, yet in truth we don’t outgrow these experiences of connection, we simply move away from them. We imagine we are leaving behind something we no longer need, whereas in fact we are losing something we have forgotten how to appreciate.

Like so many other aspects of maturation, the process through which adults forget their connections to the non-human world involves a dulling of the senses. As we age, we become less alive, less attuned and less receptive to life’s boundless expressions. The insatiable curiosity we had as children slowly withers as the mental constructs that form a known world cut away and displace our passion for exploration.

Within this known and ordered world, the idea that an adult would describe herself as an animal communicator naturally provokes skepticism. Is this a person living in a fantasy world? Or is she engaged in a hoax, cynically exploiting the longings of others, such as the desire to rediscover a lost childhood?

Whether Anna Breytenbach (who features in the video below) can see inside the minds of animals, I have no way of knowing, but that animals have minds and that they can have what we might regard as intensely human experiences — such as the feeling of loss — I have no doubt.

The cultural impact of science, which is often more colored by belief than reason, suggests that whenever we reflect on the experience of animals we are perpetually at risk of falling into the trap of anthropomorphization. The greater risk, however, is that we unquestioningly accept this assumption: that even if we humans are the culmination of an evolutionary process that goes all the way back to the formation of amino acids, at the apex of this process we somehow stand apart. We can observe the animal kingdom, and yet as humans we have risen above it.

Yet what actually sets us apart in the most significant way is not the collection of attributes that define human uniqueness, but rather this very idea of our separateness — the idea that we are here and nature is out there.


How computers are making people stupid

The pursuit of artificial intelligence has been driven by the assumption that if human intelligence can be replicated or advanced upon by machines then this accomplishment will in various ways serve the human good. At the same time, thanks to the technophobia promoted in some dystopian science fiction, there is a popular fear that if machines become smarter than people we will end up becoming their slaves.

It turns out that even if there are some irrational fears wrapped up in technophobia, there are good reasons to regard computing devices as a threat to human intelligence.

It’s not that we are creating machines that harbor evil designs to take over the world, but simply that each time we delegate a function of the brain to an external piece of circuitry, our mental faculties inevitably atrophy.

“Use it or lose it” applies just as much to the brain as it does to any other part of the body.

Carolyn Gregoire writes: Take a moment to think about the last time you memorized someone’s phone number. Was it way back when, perhaps circa 2001? And when was the last time you were at a dinner party or having a conversation with friends, when you whipped out your smartphone to Google the answer to someone’s question? Probably last week.

Technology changes the way we live our daily lives, the way we learn, and the way we use our faculties of attention — and a growing body of research has suggested that it may have profound effects on our memory (particularly the short-term, or working, memory), altering and in some cases impairing its function.

The implications of a poor working memory for our brain functioning and overall intelligence are difficult to overestimate.

“The depth of our intelligence hinges on our ability to transfer information from working memory, the scratch pad of consciousness, to long-term memory, the mind’s filing system,” Nicholas Carr, author of The Shallows: What The Internet Is Doing To Our Brains, wrote in Wired in 2010. “When facts and experiences enter our long-term memory, we are able to weave them into the complex ideas that give richness to our thought.”

While our long-term memory has a nearly unlimited capacity, the short-term memory has more limited storage, and that storage is very fragile. “A break in our attention can sweep its contents from our mind,” Carr explains.

Meanwhile, new research has found that taking photos — an increasingly ubiquitous practice in our smartphone-obsessed culture — actually hinders our ability to remember that which we’re capturing on camera.

Concerned about premature memory loss? You probably should be. Here are five things you should know about the way technology is affecting your memory.

1. Information overload makes it harder to retain information.

Even a single session of Internet usage can make it more difficult to file away information in your memory, says Erik Fransén, computer science professor at Sweden’s KTH Royal Institute of Technology. And according to Tony Schwartz, productivity expert and author of The Way We’re Working Isn’t Working, most of us aren’t able to effectively manage the overload of information we’re constantly bombarded with. [Continue reading...]

As I pointed out in a recent post, the externalization of intelligence long preceded the creation of smart phones and personal computers. Indeed, it goes all the way back to the beginning of civilization when we first learned how to transform language into a material form as the written word, thereby creating a substitute for memory.

Plato foresaw the consequences of writing.

In Phaedrus, he describes an exchange between the god Thamus, king and ruler of all Egypt, and the god Theuth, who has invented writing. Theuth, who is very proud of what he has created, says: “This invention, O king, will make the Egyptians wiser and will improve their memories; for it is an elixir of memory and wisdom that I have discovered.” But Thamus points out that while one man has the ability to invent, the ability to judge an invention’s usefulness or harmfulness belongs to another.

If men learn this, it will implant forgetfulness in their souls; they will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks. What you have discovered is a recipe not for memory, but for reminder. And it is no true wisdom that you offer your disciples, but only its semblance, for by telling them of many things without teaching them you will make them seem to know much, while for the most part they know nothing, and as men filled, not with wisdom, but with the conceit of wisdom, they will be a burden to their fellows.

Bedazzled by our ingenuity and its creations, we are fast forgetting the value of this quality that can never be implanted in a machine (or a text): wisdom.


Plato foresaw the danger of automation

Automation takes many forms, and as members of a culture that reveres technology, we generally perceive automation in terms of its output: what it accomplishes, be it in manufacturing, financial transactions, flying aircraft, and so forth.

But automation doesn’t merely accomplish things for human beings; it simultaneously changes us by externalizing intelligence. The intelligence required by a person is transferred to a machine with its embedded commands, allowing the person to turn his intelligence elsewhere — or nowhere.

Automation is invariably sold on the twin claims that it offers greater efficiency, while freeing people from tedious tasks so that — at least in theory — they can give their attention to something more fulfilling.

There’s no disputing the efficiency argument — there could never have been such a thing as mass production without automation — but the promise of freedom has always been oversold. Automation has resulted in the creation of many of the most tedious, soul-destroying forms of labor in human history.

Automated systems are, however, never perfect, and when they break, they reveal the corrupting effect they have had on human intelligence — intelligence whose skilful application has atrophied through lack of use.

Nicholas Carr writes: On the evening of February 12, 2009, a Continental Connection commuter flight made its way through blustery weather between Newark, New Jersey, and Buffalo, New York. As is typical of commercial flights today, the pilots didn’t have all that much to do during the hour-long trip. The captain, Marvin Renslow, manned the controls briefly during takeoff, guiding the Bombardier Q400 turboprop into the air, then switched on the autopilot and let the software do the flying. He and his co-pilot, Rebecca Shaw, chatted — about their families, their careers, the personalities of air-traffic controllers — as the plane cruised uneventfully along its northwesterly route at 16,000 feet. The Q400 was well into its approach to the Buffalo airport, its landing gear down, its wing flaps out, when the pilot’s control yoke began to shudder noisily, a signal that the plane was losing lift and risked going into an aerodynamic stall. The autopilot disconnected, and the captain took over the controls. He reacted quickly, but he did precisely the wrong thing: he jerked back on the yoke, lifting the plane’s nose and reducing its airspeed, instead of pushing the yoke forward to gain velocity. Rather than preventing a stall, Renslow’s action caused one. The plane spun out of control, then plummeted. “We’re down,” the captain said, just before the Q400 slammed into a house in a Buffalo suburb.

The crash, which killed all 49 people on board as well as one person on the ground, should never have happened. A National Transportation Safety Board investigation concluded that the cause of the accident was pilot error. The captain’s response to the stall warning, the investigators reported, “should have been automatic, but his improper flight control inputs were inconsistent with his training” and instead revealed “startle and confusion.” An executive from the company that operated the flight, the regional carrier Colgan Air, admitted that the pilots seemed to lack “situational awareness” as the emergency unfolded.

The Buffalo crash was not an isolated incident. An eerily similar disaster, with far more casualties, occurred a few months later. On the night of May 31, an Air France Airbus A330 took off from Rio de Janeiro, bound for Paris. The jumbo jet ran into a storm over the Atlantic about three hours after takeoff. Its air-speed sensors, coated with ice, began giving faulty readings, causing the autopilot to disengage. Bewildered, the pilot flying the plane, Pierre-Cédric Bonin, yanked back on the stick. The plane rose and a stall warning sounded, but he continued to pull back heedlessly. As the plane climbed sharply, it lost velocity. The airspeed sensors began working again, providing the crew with accurate numbers. Yet Bonin continued to slow the plane. The jet stalled and began to fall. If he had simply let go of the control, the A330 would likely have righted itself. But he didn’t. The plane dropped 35,000 feet in three minutes before hitting the ocean. All 228 passengers and crew members died.

The first automatic pilot, dubbed a “metal airman” in a 1930 Popular Science article, consisted of two gyroscopes, one mounted horizontally, the other vertically, that were connected to a plane’s controls and powered by a wind-driven generator behind the propeller. The horizontal gyroscope kept the wings level, while the vertical one did the steering. Modern autopilot systems bear little resemblance to that rudimentary device. Controlled by onboard computers running immensely complex software, they gather information from electronic sensors and continuously adjust a plane’s attitude, speed, and bearings. Pilots today work inside what they call “glass cockpits.” The old analog dials and gauges are mostly gone. They’ve been replaced by banks of digital displays. Automation has become so sophisticated that on a typical passenger flight, a human pilot holds the controls for a grand total of just three minutes. What pilots spend a lot of time doing is monitoring screens and keying in data. They’ve become, it’s not much of an exaggeration to say, computer operators.

And that, many aviation and automation experts have concluded, is a problem. Overuse of automation erodes pilots’ expertise and dulls their reflexes, leading to what Jan Noyes, an ergonomics expert at Britain’s University of Bristol, terms “a de-skilling of the crew.” No one doubts that autopilot has contributed to improvements in flight safety over the years. It reduces pilot fatigue and provides advance warnings of problems, and it can keep a plane airborne should the crew become disabled. But the steady overall decline in plane crashes masks the recent arrival of “a spectacularly new type of accident,” says Raja Parasuraman, a psychology professor at George Mason University and a leading authority on automation. When an autopilot system fails, too many pilots, thrust abruptly into what has become a rare role, make mistakes. Rory Kay, a veteran United captain who has served as the top safety official of the Air Line Pilots Association, put the problem bluntly in a 2011 interview with the Associated Press: “We’re forgetting how to fly.” The Federal Aviation Administration has become so concerned that in January it issued a “safety alert” to airlines, urging them to get their pilots to do more manual flying. An overreliance on automation, the agency warned, could put planes and passengers at risk.

The experience of airlines should give us pause. It reveals that automation, for all its benefits, can take a toll on the performance and talents of those who rely on it. The implications go well beyond safety. Because automation alters how we act, how we learn, and what we know, it has an ethical dimension. The choices we make, or fail to make, about which tasks we hand off to machines shape our lives and the place we make for ourselves in the world. That has always been true, but in recent years, as the locus of labor-saving technology has shifted from machinery to software, automation has become ever more pervasive, even as its workings have become more hidden from us. Seeking convenience, speed, and efficiency, we rush to off-load work to computers without reflecting on what we might be sacrificing as a result. [Continue reading...]

Now if we think of automation as a form of forgetfulness, we will see that it extends much more deeply into civilization than just its modern manifestations through mechanization and digitization.

In the beginning was the Word and later came the Fall: the point at which language — the primary tool for shaping, expressing and sharing human intelligence — was cut adrift from the human mind and given autonomy in the form of writing.

Through the written word, thought can be immortalized and made universal. No other mechanism could have ever had such a dramatic effect on the exchange of ideas. Without writing, there would have been no such thing as humanity. But we also incurred a loss, and because we have so little awareness of this loss, we might find it hard to imagine that preliterate people possessed forms of intelligence we now lack.

Plato described what writing would do — and by extension, what would happen to pilots.

In Phaedrus, he describes an exchange between the god Thamus, king and ruler of all Egypt, and the god Theuth, who has invented writing. Theuth, who is very proud of what he has created, says: “This invention, O king, will make the Egyptians wiser and will improve their memories; for it is an elixir of memory and wisdom that I have discovered.” But Thamus points out that while one man has the ability to invent, the ability to judge an invention’s usefulness or harmfulness belongs to another.

If men learn this, it will implant forgetfulness in their souls; they will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks. What you have discovered is a recipe not for memory, but for reminder. And it is no true wisdom that you offer your disciples, but only its semblance, for by telling them of many things without teaching them you will make them seem to know much, while for the most part they know nothing, and as men filled, not with wisdom, but with the conceit of wisdom, they will be a burden to their fellows.

Bedazzled by our ingenuity and its creations, we are fast forgetting the value of this quality that can never be implanted in a machine (or a text): wisdom.

Even the word itself is beginning to sound arcane — as though it should be reserved for philosophers and storytellers and is no longer something we should all strive to possess.


The inexact mirrors of the human mind

Douglas Hofstadter, author of Gödel, Escher, Bach: An Eternal Golden Braid (GEB), published in 1979, and one of the pioneers of artificial intelligence (AI), gained prominence right at a juncture when the field was abandoning its interest in human intelligence.

James Somers writes: In GEB, Hofstadter was calling for an approach to AI concerned less with solving human problems intelligently than with understanding human intelligence — at precisely the moment that such an approach, having borne so little fruit, was being abandoned. His star faded quickly. He would increasingly find himself out of a mainstream that had embraced a new imperative: to make machines perform in any way possible, with little regard for psychological plausibility.

Take Deep Blue, the IBM supercomputer that bested the chess grandmaster Garry Kasparov. Deep Blue won by brute force. For each legal move it could make at a given point in the game, it would consider its opponent’s responses, its own responses to those responses, and so on for six or more steps down the line. With a fast evaluation function, it would calculate a score for each possible position, and then make the move that led to the best score. What allowed Deep Blue to beat the world’s best humans was raw computational power. It could evaluate up to 330 million positions a second, while Kasparov could evaluate only a few dozen before having to make a decision.
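
To make the search-plus-evaluation idea concrete, here is a minimal sketch of depth-limited minimax, the scheme the paragraph describes in outline. It is not Deep Blue’s actual implementation, which ran on custom hardware and added alpha-beta pruning and many other refinements; the helpers legal_moves, apply_move and evaluate are assumed, game-specific stand-ins.

    # Toy sketch of brute-force game-tree search: look a fixed number of plies
    # ahead, score the leaf positions with a fast evaluation function, and pick
    # the move whose subtree promises the best score. Not Deep Blue's code;
    # legal_moves, apply_move and evaluate are hypothetical, game-specific helpers.
    def minimax(position, depth, maximizing):
        """Best achievable score from position, searching depth plies ahead."""
        moves = legal_moves(position)
        if depth == 0 or not moves:
            return evaluate(position)  # fast static score, e.g. material balance
        scores = [minimax(apply_move(position, m), depth - 1, not maximizing) for m in moves]
        return max(scores) if maximizing else min(scores)

    def best_move(position, depth=6):
        """Pick the move that leads to the best score six plies down the line."""
        return max(legal_moves(position),
                   key=lambda m: minimax(apply_move(position, m), depth - 1, False))

On this view, the quality of play hinges almost entirely on how many positions can be searched and scored per second, which is why Deep Blue’s raw speed (up to 330 million positions a second, against Kasparov’s few dozen) mattered more than any model of how a grandmaster actually thinks about a board.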

Hofstadter wanted to ask: Why conquer a task if there’s no insight to be had from the victory? “Okay,” he says, “Deep Blue plays very good chess—so what? Does that tell you something about how we play chess? No. Does it tell you about how Kasparov envisions, understands a chessboard?” A brand of AI that didn’t try to answer such questions—however impressive it might have been—was, in Hofstadter’s mind, a diversion. He distanced himself from the field almost as soon as he became a part of it. “To me, as a fledgling AI person,” he says, “it was self-evident that I did not want to get involved in that trickery. It was obvious: I don’t want to be involved in passing off some fancy program’s behavior for intelligence when I know that it has nothing to do with intelligence. And I don’t know why more people aren’t that way.”

One answer is that the AI enterprise went from being worth a few million dollars in the early 1980s to billions by the end of the decade. (After Deep Blue won in 1997, the value of IBM’s stock increased by $18 billion.) The more staid an engineering discipline AI became, the more it accomplished. Today, on the strength of techniques bearing little relation to the stuff of thought, it seems to be in a kind of golden age. AI pervades heavy industry, transportation, and finance. It powers many of Google’s core functions, Netflix’s movie recommendations, Watson, Siri, autonomous drones, the self-driving car.

“The quest for ‘artificial flight’ succeeded when the Wright brothers and others stopped imitating birds and started … learning about aerodynamics,” Stuart Russell and Peter Norvig write in their leading textbook, Artificial Intelligence: A Modern Approach. AI started working when it ditched humans as a model, because it ditched them. That’s the thrust of the analogy: Airplanes don’t flap their wings; why should computers think?

It’s a compelling point. But it loses some bite when you consider what we want: a Google that knows, in the way a human would know, what you really mean when you search for something. Russell, a computer-science professor at Berkeley, said to me, “What’s the combined market cap of all of the search companies on the Web? It’s probably four hundred, five hundred billion dollars. Engines that could actually extract all that information and understand it would be worth 10 times as much.”

This, then, is the trillion-dollar question: Will the approach undergirding AI today — an approach that borrows little from the mind, that’s grounded instead in big data and big engineering — get us to where we want to go? How do you make a search engine that understands if you don’t know how you understand? Perhaps, as Russell and Norvig politely acknowledge in the last chapter of their textbook, in taking its practical turn, AI has become too much like the man who tries to get to the moon by climbing a tree: “One can report steady progress, all the way to the top of the tree.”

Consider that computers today still have trouble recognizing a handwritten A. In fact, the task is so difficult that it forms the basis for CAPTCHAs (“Completely Automated Public Turing tests to tell Computers and Humans Apart”), those widgets that require you to read distorted text and type the characters into a box before, say, letting you sign up for a Web site.

In Hofstadter’s mind, there is nothing to be surprised about. To know what all A’s have in common would be, he argued in a 1982 essay, to “understand the fluid nature of mental categories.” And that, he says, is the core of human intelligence. [Continue reading...]


Audio: Proto-Indo-European reconstructed

Here’s a story in English:

A sheep that had no wool saw horses, one of them pulling a heavy wagon, one carrying a big load, and one carrying a man quickly. The sheep said to the horses: “My heart pains me, seeing a man driving horses.” The horses said: “Listen, sheep, our hearts pain us when we see this: a man, the master, makes the wool of the sheep into a warm garment for himself. And the sheep has no wool.” Having heard this, the sheep fled into the plain.

And here is “a very educated approximation” of how that story might have sounded if spoken in Proto-Indo-European about 6,500 years ago:

(Read more at Archeology.)


The vexing question of how to sign off from an email

Ben Pobjie writes: The world of modern technology is filled with potential pitfalls to snare the unwary: how to keep sexting discreet; how to commit libel on Twitter without adverse consequence; how to stop playing the game Candy Crush.

But there are few elements of modernity as vexing as the question of how to sign off from an email. It’s an easy task if you want to look like a passive-aggressive tosser, but if you don’t, it’s one of the most fraught decisions you’ll make – and you have to make it over and over again, every day, knowing that if you slip up you might find yourself on the end of a workplace harassment complaint or scathing mockery from colleagues.

Like many people, I most often go for the safe option: the “cheers”. “Cheers, Ben” my emails tend to conclude. The trouble with “cheers” is, first of all, what does it actually mean? Am I literally cheering the person I’m writing to? Am I saying, “hooray!” at the end of my message? Or is it a toast – am I drinking to their health and electronically clinking e-glasses with them? Of course, it’s neither. “Cheers” doesn’t actually mean anything, and it’s also mind-bogglingly unoriginal: all that says to your correspondent is “I have neither the wit nor the inclination to come up with any meaningful way to end this”. [Continue reading...]


A village invents a language all its own

The New York Times reports: There are many dying languages in the world. But at least one has recently been born, created by children living in a remote village in northern Australia.

Carmel O’Shannessy, a linguist at the University of Michigan, has been studying the young people’s speech for more than a decade and has concluded that they speak neither a dialect nor the mixture of languages called a creole, but a new language with unique grammatical rules.

The language, called Warlpiri rampaku, or Light Warlpiri, is spoken only by people under 35 in Lajamanu, an isolated village of about 700 people in Australia’s Northern Territory. In all, about 350 people speak the language as their native tongue. Dr. O’Shannessy has published several studies of Light Warlpiri, the most recent in the June issue of Language.

“Many of the first speakers of this language are still alive,” said Mary Laughren, a research fellow in linguistics at the University of Queensland in Australia, who was not involved in the studies. One reason Dr. O’Shannessy’s research is so significant, she said, “is that she has been able to record and document a ‘new’ language in the very early period of its existence.” [Continue reading...]


Our conflicting interests in knowing what others think

Tim Kreider writes: Recently I received an e-mail that wasn’t meant for me, but was about me. I’d been cc’d by accident. This is one of the darker hazards of electronic communication, Reason No. 697 Why the Internet Is Bad — the dreadful consequence of hitting “reply all” instead of “reply” or “forward.” The context is that I had rented a herd of goats for reasons that aren’t relevant here and had sent out a mass e-mail with photographs of the goats attached to illustrate that a) I had goats, and b) it was good. Most of the responses I received expressed appropriate admiration and envy of my goats, but the message in question was intended not as a response to me but as an aside to some of the recipient’s co-workers, sighing over the kinds of expenditures on which I was frittering away my uncomfortable income. The word “oof” was used.

I’ve often thought that the single most devastating cyberattack a diabolical and anarchic mind could design would not be on the military or financial sector but simply to simultaneously make every e-mail and text ever sent universally public. It would be like suddenly subtracting the strong nuclear force from the universe; the fabric of society would instantly evaporate, every marriage, friendship and business partnership dissolved. Civilization, which is held together by a fragile web of tactful phrasing, polite omissions and white lies, would collapse in an apocalypse of bitter recriminations and weeping, breakups and fistfights, divorces and bankruptcies, scandals and resignations, blood feuds, litigation, wholesale slaughter in the streets and lingering ill will.

This particular e-mail was, in itself, no big deal. Tone is notoriously easy to misinterpret over e-mail, and my friend’s message could have easily been read as affectionate head shaking rather than a contemptuous eye roll. It’s frankly hard to parse the word “oof” in this context. And let’s be honest — I am terrible with money, but I’ve always liked to think of this as an endearing foible. What was surprisingly wounding wasn’t that the e-mail was insulting but simply that it was unsympathetic. Hearing other people’s uncensored opinions of you is an unpleasant reminder that you’re just another person in the world, and everyone else does not always view you in the forgiving light that you hope they do, making all allowances, always on your side. There’s something existentially alarming about finding out how little room we occupy, and how little allegiance we command, in other people’s heads. [Continue reading...]


The essayification of everything

Christy Wampole writes: Lately, you may have noticed the spate of articles and books that take interest in the essay as a flexible and very human literary form. These include “The Wayward Essay” and Phillip Lopate’s reflections on the relationship between essay and doubt, and books such as “How to Live,” Sarah Bakewell’s elegant portrait of Montaigne, the 16th-century patriarch of the genre, and an edited volume by Carl H. Klaus and Ned Stuckey-French called “Essayists on the Essay: Montaigne to Our Time.”

It seems that, even in the proliferation of new forms of writing and communication before us, the essay has become a talisman of our times. What is behind our attraction to it? Is it the essay’s therapeutic properties? Because it brings miniature joys to its writer and its reader? Because it is small enough to fit in our pocket, portable like our own experiences?

I believe that the essay owes its longevity today mainly to this fact: the genre and its spirit provide an alternative to the dogmatic thinking that dominates much of social and political life in contemporary America. In fact, I would advocate a conscious and more reflective deployment of the essay’s spirit in all aspects of life as a resistance against the zealous closed-endedness of the rigid mind. I’ll call this deployment “the essayification of everything.”

What do I mean by this lofty expression?

Let’s start with form’s beginning. The word Michel de Montaigne chose to describe his prose ruminations published in 1580 was “Essais,” which, at the time, meant merely “Attempts,” as no such genre had yet been codified. This etymology is significant, as it points toward the experimental nature of essayistic writing: it involves the nuanced process of trying something out. Later on, at the end of the 16th century, Francis Bacon imported the French term into English as a title for his more boxy and solemn prose. The deal was thus sealed: essays they were and essays they would stay. There was just one problem: the discrepancy in style and substance between the texts of Michel and Francis was, like the English Channel that separated them, deep enough to drown in. I’ve always been on Team Michel, that guy who would probably show you his rash, tell you some dirty jokes, and ask you what you thought about death. I imagine, perhaps erroneously, that Team Francis tends to attract a more cocksure, buttoned-up fan base, what with all the “He that hath wife and children hath given hostages to fortune; for they are impediments to great enterprises,” and whatnot. [Continue reading...]


Linguists identify words that have changed little in 15,000 years

The Age reports:

You, hear me! Give this fire to that old man. Pull the black worm off the bark and give it to the mother. And no spitting in the ashes!

It’s an odd little speech but if it were spoken clearly to a band of hunter-gatherers in the Caucasus 15,000 years ago, there’s a good chance the listeners would know what you were saying. That’s because all the nouns, verbs, adjectives and adverbs in the four sentences are words that have descended largely unchanged from a language that died out as the glaciers were retreating at the end of the last Ice Age.

The traditional view is that words can’t survive for more than 8000 to 9000 years. Evolution, linguistic “weathering” and the adoption of replacements from other languages eventually drive ancient words to extinction, just like the dinosaurs of the Jurassic era.

But new research suggests a few words survive twice as long. Their existence suggests there was a “proto-Eurasiatic” language that was the common ancestor of about 700 languages used today – and many others that have died out over the centuries.

The descendant tongues are spoken from the Arctic to the southern tip of India. Their speakers are as apparently different as the Uighurs of western China and the Scots of the Outer Hebrides. [Continue reading...]
