Is mirror-touch synesthesia a superpower or a curse?

Erika Hayasaki writes: No one, it seemed, knew what the patient clutching the stuffed blue bunny was feeling. At 33, he looked like a bewildered boy, staring at the doctors who crowded into his room in Massachusetts General Hospital. Lumpy oyster-sized growths shrouded his face, the result of a genetic condition that causes benign tumors to develop on the skin, in the brain, and on organs, hindering the patient’s ability to walk, talk, and feel normally. He looked like he was grimacing in pain, but his mother explained that her son, Josh, did not have a clear threshold for pain or other sensations. If Josh felt any discomfort at all, he was nearly incapable of expressing it.

“Any numbness?” asked Joel Salinas, a soft-spoken doctor in the Harvard Neurology Residency Program, a red-tipped reflex hammer in his doctor’s coat pocket. “Like it feels funny?” Josh did not answer. Salinas pulled up a blanket, revealing Josh’s atrophied legs. He thumped Josh’s left leg with the reflex hammer. Again, Josh barely reacted. But Salinas felt something: The thump against Josh’s left knee registered on Salinas’s own left knee as a tingly tap. Not just a thought of what the thump might feel like, but a distinct physical sensation.

That’s because Salinas himself has a rare medical condition, one that stands in marked contrast to his patients’: While Josh appeared unresponsive even to his own sensations, Salinas is peculiarly attuned to the sensations of others. If he sees someone slapped across the cheek, Salinas feels a hint of the slap against his own cheek. A pinch on a stranger’s right arm might become a tickle on his own. “If a person is touched, I feel it, and then I recognize that it’s touch,” Salinas says.

The condition is called mirror-touch synesthesia, and it has aroused significant interest among neuroscientists in recent years because it appears to be an extreme form of a basic human trait. In all of us, mirror neurons in the premotor cortex and other areas of the brain activate when we watch someone else’s behaviors and actions. Our brains map the regions of the body where we see someone else caressed, jabbed, or whacked, and they mimic just a shade of that feeling on the same spots on our own bodies. For mirror-touch synesthetes like Salinas, that mental simulacrum is so strong that it crosses a threshold into near-tactile sensation, sometimes indistinguishable from one’s own. Neuroscientists regard the condition as a state of “heightened empathic ability.” [Continue reading…]

Is consciousness an engineering problem?

Michael Graziano writes: The brain is a machine: a device that processes information. That’s according to the last 100 years of neuroscience. And yet, somehow, it also has a subjective experience of at least some of that information. Whether we’re talking about the thoughts and memories swirling around on the inside, or awareness of the stuff entering through the senses, somehow the brain experiences its own data. It has consciousness. How can that be?

That question has been called the ‘hard problem’ of consciousness, where ‘hard’ is a euphemism for ‘impossible’. For decades, it was a disreputable topic among scientists: if you can’t study it or understand it or engineer it, then it isn’t science. On that view, neuroscientists should stick to the mechanics of how information is processed in the brain, not the spooky feeling that comes along with the information. And yet, one can’t deny that the phenomenon exists. What exactly is this consciousness stuff?

Here’s a more pointed way to pose the question: can we build it? Artificial intelligence is growing more intelligent every year, but we’ve never given our machines consciousness. People once thought that if you made a computer complicated enough it would just sort of ‘wake up’ on its own. But that hasn’t panned out (so far as anyone knows). Apparently, the vital spark has to be deliberately designed into the machine. And so the race is on to figure out what exactly consciousness is and how to build it.

I’ve made my own entry into that race, a framework for understanding consciousness called the Attention Schema theory. The theory suggests that consciousness is no bizarre byproduct – it’s a tool for regulating information in the brain. And it’s not as mysterious as most people think. As ambitious as it sounds, I believe we’re close to understanding consciousness well enough to build it.

In this article I’ll conduct a thought experiment. Let’s see if we can construct an artificial brain, piece by hypothetical piece, and make it conscious. The task could be slow and each step might seem incremental, but with a systematic approach we could find a path that engineers can follow. [Continue reading…]

Does Earth have a ‘shadow biosphere’?

Sarah Scoles writes: In the late 1670s, the Dutch scientist Antonie van Leeuwenhoek looked through a microscope at a drop of water and found a whole world. It was tiny; it was squirmy; it was full of weird body types; and it lived, invisibly, all around us. Humans were supposed to be the centre and purpose of the world, and these microscale ‘animalcules’ seemed to have no effect – visible or otherwise – on our existence, so why were they here? Now, we know that those animalcules are microbes and they actually rule our world. They make us sick, keep us healthy, decompose our waste, feed the bottom of our food chain, and make our oxygen. Human ignorance of them had no bearing on their significance, just as gravity was important before an apple dropped on Isaac Newton’s head.

We could be poised on another such philosophical precipice, about to discover a second important world hiding amid our own: alien life on our own planet. Today, scientists seek extraterrestrial microbes in geysers of chilled water shooting from Enceladus and in the ocean sloshing beneath the ice crust of Europa. They search for clues that beings once skittered around the formerly wet rocks of Mars. Telescopes peer into the atmospheres of distant exoplanets, hunting for signs of life. But perhaps these efforts are too far afield. If multiple lines of life bubbled up on Earth and evolved separately from our ancient ancestors, we could discover alien biology without leaving this planet.

The modern-day descendants of these ‘aliens’ might still be here, squirming around with van Leeuwenhoek’s microbes. Scientists call these hypothetical hangers-on the ‘shadow biosphere’. If a shadow biosphere were ever found, it would provide evidence that life isn’t a once-in-a-universe statistical accident. If biology can happen twice on one planet, it must have happened countless times on countless other planets. But most of our scientific methods are ill-equipped to discover a shadow biosphere. And that’s a problem, says Carol Cleland, the originator of the term and its biggest proponent. [Continue reading…]

The case for shame

Julian Baggini writes: In California they call it #droughtshaming. People found using lots of water when the state is as dry as a cream cracker are facing trial by hashtag. The actor Tom Selleck is the latest target, accused of taking truckloads of water from a fire hydrant for his thirsty avocado crop.

Most people seem happy to harness the power of shame when the victims are the rich and powerful. But our attitudes to shame are actually much more ambivalent and contradictory. That’s why it was a stroke of genius to call Paul Abbott’s Channel Four series Shameless. We are at a point in our social history where the word is perfectly poised between condemnation and celebration.

Shame, like guilt, is something we often feel we are better off without. The shame culture is strongly associated with oppression. So-called honour killings are inflicted on people who bring shame to their families, often for nothing more than loving the “wrong” person or, most horrifically, for being the victims of rape. In the case of gay people, shame has given way to pride. To be shameless is to be who you are, without apology.

And yet in other contexts we are rather conflicted about the cry of shame. You can protest against honour killings one day, then name and shame tax-evading multinationals the next. When politicians are called shameless, there is no doubt that this is a very bad thing. Shame is like rain: whether it’s good or bad depends on where and how heavily it falls. [Continue reading…]

On the value of not knowing everything

James McWilliams writes: In January 2010, while driving from Chicago to Minneapolis, Sam McNerney played an audiobook and had an epiphany. The book was Jonah Lehrer’s How We Decide, and the epiphany was that consciousness could reside in the brain. The quest for an empirical understanding of consciousness has long preoccupied neurobiologists. But McNerney was no neurobiologist. He was a twenty-year-old philosophy major at Hamilton College. The standard course work — ancient, modern, and contemporary philosophy — enthralled him. But after this drive, after he listened to Lehrer, something changed. “I had to rethink everything I knew about everything,” McNerney said.

Lehrer’s publisher later withdrew How We Decide for inaccuracies. But McNerney was mentally galvanized for good reason. He had stumbled upon what philosophers call the “Hard Problem” — the quest to understand the enigma of the gap between mind and body. Intellectually speaking, what McNerney experienced was like diving for a penny in a pool and coming up with a gold nugget.

The philosopher Thomas Nagel drew popular attention to the Hard Problem four decades ago in an influential essay titled “What Is It Like to Be a Bat?” Frustrated with the “recent wave of reductionist euphoria,” Nagel challenged the reductive conception of mind — the idea that consciousness resides as a physical reality in the brain — by highlighting the radical subjectivity of experience. His main premise was that “an organism has conscious mental states if and only if there is something that it is like to be that organism.”

If that idea seems elusive, consider it this way: A bat has consciousness only if there is something that it is like for that bat to be a bat. Sam has consciousness only if there is something it is like for Sam to be Sam. You have consciousness only if there is something that it is like for you to be you (and you know that there is). And here’s the key to all this: Whatever that “like” happens to be, according to Nagel, it necessarily defies empirical verification. You can’t put your finger on it. It resists physical accountability.

McNerney returned to Hamilton intellectually turbocharged. This was an idea worth pondering. “It took hold of me,” he said. “It chose me — I know you hear that a lot, but that’s how it felt.” He arranged to do research in cognitive science as an independent study project with Russell Marcus, a trusted professor. Marcus let him loose to write what McNerney calls “a seventy-page hodgepodge of psychological research and philosophy and everything in between.” Marcus remembered the project more charitably, as “a huge, ambitious, wide-ranging, smart, and engaging paper.” Once McNerney settled into his research, Marcus added, “it was like he had gone into a phone booth and come out as a super-student.”

When he graduated in 2011, McNerney was proud. “I pulled it off,” he said about earning a degree in philosophy. Not that he had any hard answers to any big problems, much less the Hard Problem. Not that he had a job. All he knew was that he “wanted to become the best writer and thinker I could be.”

So, as one does, he moved to New York City.

McNerney is the kind of young scholar adored by the humanities. He’s inquisitive, open-minded, thrilled by the world of ideas, and touched with a tinge of old-school transcendentalism. What Emerson said of Thoreau — “he declined to give up his large ambition of knowledge and action for any narrow craft or profession” — is certainly true of McNerney. [Continue reading…]

The logic of effective altruism

Peter Singer writes: I met Matt Wage in 2009 when he took my Practical Ethics class at Princeton University. In the readings relating to global poverty and what we ought to be doing about it, he found an estimate of how much it costs to save the life of one of the millions of children who die each year from diseases that we can prevent or cure. This led him to calculate how many lives he could save, over his lifetime, assuming he earned an average income and donated 10 percent of it to a highly effective organization, such as one providing families with bed nets to prevent malaria, a major killer of children. He discovered that he could, with that level of donation, save about one hundred lives. He thought to himself, “Suppose you see a burning building, and you run through the flames and kick a door open, and let one hundred people out. That would be the greatest moment in your life. And I could do as much good as that!”
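
The arithmetic behind Wage's estimate is simple enough to sketch. Below is a minimal illustration in Python: the 10 percent donation rate and the roughly-one-hundred-lives conclusion come from the article, while the salary, career length, and cost-per-life figures are assumed, purely illustrative inputs, not numbers from Singer or Wage.

```python
# Back-of-the-envelope arithmetic for lives saved through sustained giving.
# The 10% donation rate and the ~100-lives conclusion come from the article;
# the salary, career length, and cost-per-life figures below are assumed,
# purely illustrative inputs, not numbers taken from Singer or Wage.

def lives_saved(annual_income, donation_rate, working_years, cost_per_life):
    """Estimate total lives saved over a career of steady donations."""
    total_donated = annual_income * donation_rate * working_years
    return total_donated / cost_per_life

estimate = lives_saved(
    annual_income=85_000,   # assumed average salary (USD)
    donation_rate=0.10,     # 10% of income, as in the article
    working_years=40,       # assumed career length
    cost_per_life=3_400,    # assumed cost of averting one death (e.g. bed nets)
)

print(f"Estimated lives saved: {estimate:.0f}")  # -> 100
```

Holding the donation rate fixed, the result scales in direct proportion to income, which is the logic behind the higher-earning path Wage chose, described below.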

Two years later Wage graduated, receiving the Philosophy Department’s prize for the best senior thesis of the year. He was accepted by the University of Oxford for postgraduate study. Many students who major in philosophy dream of an opportunity like that — I know I did — but by then Wage had done a lot of thinking about what career would do the most good. Over many discussions with others, he came to a very different choice: he took a job on Wall Street, working for an arbitrage trading firm. On a higher income, he would be able to give much more, both as a percentage and in dollars, than 10 percent of a professor’s income. One year after graduating, Wage was donating a six-figure sum — roughly half his annual earnings — to highly effective charities. He was on the way to saving a hundred lives, not over his entire career but within the first year or two of his working life and every year thereafter.

Wage is part of an exciting new movement: effective altruism. At universities from Oxford to Harvard and the University of Washington, from Bayreuth in Germany to Brisbane in Australia, effective altruism organizations are forming. Effective altruists are engaging in lively discussions on social media and websites, and their ideas are being examined in the New York Times, the Washington Post, and even the Wall Street Journal. Philosophy, and more specifically practical ethics, has played an important role in effective altruism’s development, and effective altruism shows that philosophy is returning to its Socratic role of challenging our ideas about what it is to live an ethical life. In doing so, philosophy has demonstrated its ability to transform, sometimes quite dramatically, the lives of those who study it. Moreover, it is a transformation that, I believe, should be welcomed because it makes the world a better place. [Continue reading…]

The tiny engines of life

Tim Flannery writes: In 1609 Galileo Galilei turned his gaze, magnified twentyfold by lenses of Dutch design, toward the heavens, touching off a revolution in human thought. A decade later those same lenses delivered the possibility of a second revolution, when Galileo discovered that by inverting their order he could magnify the very small. For the first time in human history, it lay in our power to see the building blocks of bodies, the causes of diseases, and the mechanism of reproduction. Yet according to Paul Falkowski’s Life’s Engines:

Galileo did not seem to have much interest in what he saw with his inverted telescope. He appears to have made little attempt to understand, let alone interpret, the smallest objects he could observe.

Bewitched by the moons of Jupiter and their challenge to the geocentric model of the universe, Galileo ignored the possibility that the magnified fleas he drew might have anything to do with the plague then ravaging Italy. And so for three centuries more, one of the cruellest of human afflictions would rage on, misunderstood and thus unpreventable, taking the lives of countless millions.

Perhaps it’s fundamentally human both to be awed by the things we look up to and to pass over those we look down on. If so, it’s a tendency that has repeatedly frustrated human progress. Half a century after Galileo looked into his “inverted telescope,” the pioneers of microscopy Antonie van Leeuwenhoek and Robert Hooke revealed that a Lilliputian universe existed all around and even inside us. But neither of them had students, and their researches ended in another false dawn for microscopy. It was not until the middle of the nineteenth century, when German manufacturers began producing superior instruments, that the discovery of the very small began to alter science in fundamental ways.

Today, driven by ongoing technological innovations, the exploration of the “nanoverse,” as the realm of the minuscule is often termed, continues to gather pace. One of the field’s greatest pioneers is Paul Falkowski, a biological oceanographer who has spent much of his scientific career working at the intersection of physics, chemistry, and biology. His book Life’s Engines: How Microbes Made Earth Habitable focuses on one of the most astonishing discoveries of the twentieth century — that our cells are composed of a series of highly sophisticated “little engines,” or nanomachines, that carry out life’s vital functions. It is a work full of surprises, arguing for example that all of life’s most important innovations were in existence by around 3.5 billion years ago — less than a billion years after Earth formed, and a time when our planet was largely hostile to living things. How such mind-bending complexity could have evolved at such an early stage, and in such a hostile environment, has forced a fundamental reconsideration of the origins of life itself. [Continue reading…]

On not being there: The data-driven body at work and at play

Rebecca Lemov writes: The protagonist of William Gibson’s 2014 science-fiction novel The Peripheral, Flynne Fisher, works remotely in a way that lends a new and fuller sense to that phrase. The novel features a double future: One set of characters inhabits the near future, ten to fifteen years from the present, while another lives seventy years on, after a breakdown of the climate and multiple other systems that has apocalyptically altered human and technological conditions around the world.

In that “further future,” only 20 percent of the Earth’s human population has survived. Each of these fortunate few is well off and able to live a life transformed by healing nanobots, somaticized e-mail (which delivers messages and calls to the roof of the user’s mouth), quantum computing, and clean energy. For their amusement and profit, certain “hobbyists” in this future have the Borgesian option of cultivating an alternative path in history — it’s called “opening up a stub” — and mining it for information as well as labor.

Flynne, the remote worker, lives on one of those paths. A young woman from the American Southeast, possibly Appalachia or the Ozarks, she favors cutoff jeans and resides in a trailer, eking out a living as a for-hire sub playing video games for wealthy aficionados. Recruited by a mysterious entity that is beta-testing drones that are doing “security” in a murky skyscraper in an unnamed city, she thinks at first that she has been taken on to play a kind of video game in simulated reality. As it turns out, she has been employed to work in the future as an “information flow” — low-wage work, though the pay translates to a very high level of remuneration in the place and time in which she lives.

What is of particular interest is the fate of Flynne’s body. Before she goes to work she must tend to its basic needs (nutrition and elimination), because during her shift it will effectively be “vacant.” Lying on a bed with a special data-transmitting helmet attached to her head, she will be elsewhere, inhabiting an ambulatory robot carapace — a “peripheral” — built out of bio-flesh that can receive her consciousness.

Bodies in this data-driven economic backwater of a future world economy are abandoned for long stretches of time — disposable, cheapened, eerily vacant in the temporary absence of “someone at the helm.” Meanwhile, fleets of built bodies, grown from human DNA, await habitation.

Alex Rivera explores similar territory in his Mexican sci-fi film Sleep Dealer (2008), set in a future world after a wall erected on the US–Mexican border has successfully blocked migrants from entering the United States. Digital networks allow people to connect to strangers all over the world, fostering fantasies of physical and emotional connection. At the same time, low-income would-be migrant workers in Tijuana and elsewhere can opt to do remote work by controlling robots building a skyscraper in a faraway city, locking their bodies into devices that transmit their labor to the site. In tank-like warehouses, lined up in rows of stalls, they “jack in” by connecting data-transmitting cables to nodes implanted in their arms and backs. Their bodies are in Mexico, but their work is in New York or San Francisco, and while they are plugged in and wearing their remote-viewing spectacles, their limbs move like the appendages of ghostly underwater creatures. Their life force drained by the taxing labor, these “sleep dealers” end up as human discards.

What is surprising about these sci-fi conceits, from “transitioning” in The Peripheral to “jacking in” in Sleep Dealer, is how familiar they seem, or at least how closely they reflect certain aspects of contemporary reality. Almost daily, we encounter people who are there but not there, flickering in and out of what we think of as presence. A growing body of research explores the question of how users interact with their gadgets and media outlets, and how in turn these interactions transform social relationships. The defining feature of this heavily mediated reality is our presence “elsewhere,” a removal of at least part of our conscious awareness from wherever our bodies happen to be. [Continue reading…]
