Category Archives: Consciousness

Neurological conductors that keep the brain in time and tune

Harvard Gazette: Like musical sounds, different states of mind are defined by distinct, characteristic waveforms, recognizable frequencies and rhythms in the brain’s electrical field. When the brain is alert and performing complex computations, the cerebral cortex — the wrinkled outer surface of the brain — thrums with cortical band oscillations in the gamma frequency range. In some neurological disorders like schizophrenia, however, these waves are out of tune and the rhythm is out of sync.

New research led by Harvard Medical School (HMS) scientists at the VA Boston Healthcare System (VABHS) has identified a specific class of neurons — basal forebrain GABA parvalbumin neurons, or PV neurons — that trigger these waves, acting as neurological conductors that cue the cortex to hum rhythmically and in tune. (GABA is gamma-aminobutyric acid, a major neurotransmitter in the brain.)

The results appear this week in the journal Proceedings of the National Academy of Sciences.

“This is a move toward a unified theory of consciousness control,” said co-senior author Robert McCarley, HMS professor of psychiatry and head of the Department of Psychiatry at VA Boston Healthcare. “We’ve known that the basal forebrain is important in turning consciousness on and off in sleep and wake, but now we’ve found that these specific cells also play a key role in triggering the synchronized rhythms that characterize conscious thought, perception, and problem-solving.” [Continue reading…]

The thoughts of our ancient ancestors

The discovery of what appear to be deliberately etched markings made by a human ancestor, Homo erectus, on the surface of a shell calls for a reconsideration of assumptions that have been made about the origins of abstract thought.

While the meaning of these zigzag markings will most likely remain forever unknown, it can reasonably be inferred that for the individual who created them, the marks had some significance.

In a report in Nature, Josephine Joordens, a biologist at Leiden University whose team discovered the markings, says:

“We’ve looked at all possibilities, but in the end we are really certain that this must have been made by an agent who did a very deliberate action with a very sharp implement,” says Joordens. Her team tried replicating the pattern on fresh and fossilized shells, “and that made us realize how difficult it really was”, she says.

Saying much more about the engraving is tricky. “If you don’t know the intention of the person who made it, it’s impossible to call it art,” says Joordens.

“But on the other hand, it is an ancient drawing. It is a way of expressing yourself. What was meant by the person who did this, we simply don’t know,” she adds. “It could have been to impress his girlfriend, or to doodle a bit, or to mark the shell as his own property.”

Clive Finlayson, a zoologist at the Gibraltar Museum who was part of the team that described cross-hatch patterns linked to Neanderthals, is also agnostic about whether to call the H. erectus doodles art. What is more important, he says, is the growing realization that abilities such as abstract thinking, once ascribed to only H. sapiens, were present in other archaic humans, including, now, their ancestors.

“I’ve been suggesting increasingly strongly that a lot of these things that are meant to be modern human we’re finding in other hominids,” he says. “We really need to revisit these concepts and take stock.”

Palaeoanthropology, by necessity, is a highly speculative discipline — therein lies both its strength and its weakness.

The conservatism of hard science recoils at the idea that some scratches on a single shell amount to sufficient evidence to prompt a reconsideration about the origins of the human mind, and yet to refrain from such speculation seems like an effort to restrain the powers of the very thing we are trying to understand.

Rationally, there is as much reason to assume that abstract thinking long predates modern humans (in which case searching for evidence of it and finding none should leave us agnostic about its presence or absence) as there is to assume that at some juncture it was born.

My inclination is to believe that any living creature that has some capacity to construct a neurological representation of their surroundings is by that very capacity employing something akin to abstract thinking.

This ability for the inner to mirror the outer has no doubt evolved, becoming progressively more complex and more deeply abstract, and yet mind, if defined as world-mirroring, seems to have been born when life first moved.

What is it like to be a bee?

In the minds of many humans, empathy is the signature of humanity, and yet if this empathy extends further and includes non-humans, we may be suspected of indulging in anthropomorphism — a sentimental projection of our own feelings into places where similar feelings supposedly cannot exist.

But the concept of anthropomorphism is itself a strange idea since it seems to invalidate what should be one of the most basic assumptions we can reasonably make about living creatures: that without the capacity to suffer, nothing would survive.

Just as the deadening of sensation makes people more susceptible to injury, an inability to feel pain would impede any creature’s need to avoid harm.

The seemingly suicidal draw of the moth to a flame is the exception rather than the rule. Moreover the insect is driven by a mistake, not a death wish. It is drawn towards the light, not the heat, oblivious that the two are one.

If humans indulge in projections about the feelings of others — human and non-human — perhaps we more commonly engage in negative projections: choosing to assume that feelings are absent where it would cause us discomfort to be attuned to their presence.

Our inclination is to avoid feeling too much and thus we construct neat enclosures for our concerns.

These enclosures shut out the feelings of strangers and then by extension seal away boundless life from which we have become even more estranged.

Heather Swan writes: It was a warm day in early spring when I had my first long conversation with the entomologist and science studies scholar Sainath Suryanarayanan. We met over a couple of hives I had recently inherited. One was thriving. Piles of dead bees filled the other. Parts of the comb were covered with mould and oozing something that looked like molasses.

Having recently attended a class for hobby beekeepers with Marla Spivak, an entomologist at the University of Minnesota, I was aware of the many different diseases to which bees are susceptible. American foulbrood, which was a mean one, concerned me most. Beekeepers recommended burning all of your equipment if you discovered it in your hives. Some of these bees were alive, but obviously in low spirits, and I didn’t want to destroy them unnecessarily. I called Sainath because I thought he could help me with the diagnosis.

Beekeeping, these days, is riddled with risks. New viruses, habitat loss, pesticides and mites all contribute to creating a deadly labyrinth through which nearly every bee must travel. Additionally, in 2004, mysterious bee disappearances began to plague thousands of beekeepers. Seemingly healthy bees started abandoning their homes. This strange disappearing act became known as colony collapse disorder (CCD).

Since then, the world has seen the decline of many other pollinating species, too. Because honeybees and other pollinators are responsible for pollinating at least one-third of all the food we eat, this is a serious problem globally. Diagnosing bee problems is not simple, but some answers are emerging. A ubiquitous class of pesticides called neonicotinoids has been implicated in pollinator decline, which has fuelled conversations among beekeepers, scientists, policy-makers and growers. A beekeeper facing a failing hive now has to consider not only the health of the hive itself, but also the health of the landscape around the hive. Dead bees lead beekeepers down a path of many questions. And some beekeepers have lost so many hives, they feel like giving up.

When we met at my troubled hives, Sainath brought his own hive tool and veil. He had already been down a path of many questions about bee deaths, one that started in his youth with a fascination for observing insects. When he was 14, he began his ‘Amateur Entomologist’s Record’, where he kept taxonomic notes on such things as wing textures, body shapes, colour patterns and behaviours. But the young scientist’s approach occasionally slipped to include his exuberance, describing one moment as ‘a stupefying experience!’ All this led him to study biology and chemistry in college, then to work on the behavioural ecology of paper wasps during his doctoral studies, and eventually to Minnesota to help Spivak investigate the role of pesticides in CCD.

Sainath had spent several years doing lab and field experiments with wasps and bees, but ultimately wanted to shift from traditional practices in entomology to research that included human/insect relationships. It was Sainath who made me wonder about the role of emotion in science – both in the scientists themselves and in the subjects of their experiments. I had always thought of emotion as something excised from science, but this was impossible for some scientists. What was the role of empathy in experimentation? How do we, with our human limitations, understand something as radically different from us as the honeybee? Did bees have feelings, too? If so, what did that mean for the scientist? For the science? [Continue reading…]

The grand illusion of time

Jim Holt writes: It was Albert Einstein who initiated the revolution in our understanding of time. In 1905, Einstein proved that time, as it had been understood by physicist and plain man alike, was a fiction. Our idea of time, Einstein realized, is abstracted from our experience with rhythmic phenomena: heartbeats, planetary rotations and revolutions, the swinging of pendulums, the ticking of clocks. Time judgments always come down to judgments of what happens at the same time — of simultaneity. “If, for instance, I say, ‘That train arrives here at seven o’clock,’ I mean something like this: ‘The pointing of the small hand of my watch to seven and the arrival of the train are simultaneous events,’” Einstein wrote. If the events in question are distant from each other, judgments of simultaneity can be made only by sending light signals back and forth. Einstein proved that whether an observer deems two events at different locations to be happening “at the same time” depends on his state of motion. Suppose, for example, that Jones is walking uptown on Fifth Avenue and Smith is walking downtown. Their relative motion results in a discrepancy of several days in what they would judge to be happening “now” in the Andromeda galaxy at the moment they pass each other on the sidewalk. For Smith, the space fleet launched to destroy life on earth is already on its way; for Jones, the Andromedan council of tyrants has not even decided whether to send the fleet.

What Einstein had shown was that there is no universal “now.” Whether two events are simultaneous is relative to the observer. And once simultaneity goes by the board, the very division of moments into “past,” “present,” and “future” becomes meaningless. Events judged to be in the past by one observer may still lie in the future of another; therefore, past and future must be equally definite, equally “real.” In place of the fleeting present, we are left with a vast frozen timescape — a four-dimensional “block universe.” Over here, you are being born; over there, you are celebrating the turn of the millennium; and over yonder, you’ve been dead for a while. Nothing is “flowing” from one event to another. As the mathematician Hermann Weyl memorably put it, “The objective world simply is; it does not happen.” [Continue reading…]
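
Holt’s “several days” can be checked directly. In special relativity, an observer moving at speed v shifts his plane of simultaneity at distance d by Δt = vd/c², the relativity-of-simultaneity term in the Lorentz transformation. A quick sketch of the arithmetic, assuming ordinary walking speeds and round figures for the distance to Andromeda:

```python
# Checking the "several days" discrepancy in what counts as "now" on Andromeda.
# Relativity of simultaneity: delta_t = v * d / c**2 for relative speed v at distance d.
# All values are rounded approximations, for illustration only.

c = 2.998e8                    # speed of light (m/s)
light_year = 9.461e15          # one light year (m)
d = 2.5e6 * light_year         # distance to the Andromeda galaxy (~2.5 million ly)
v = 2 * 1.4                    # two walkers approaching at ~1.4 m/s each (m/s)

delta_t = v * d / c**2         # shift in simultaneity (seconds)
print(f"{delta_t / 86400:.1f} days")   # ~8.5 days between Smith's and Jones's "now"
```

At everyday speeds the effect is locally imperceptible; it adds up to days only because d is two and a half million light years.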

The orchestration of attention

The New Yorker: Every moment, our brains are bombarded with information, from without and within. The eyes alone convey more than a hundred billion signals to the brain every second. The ears receive another avalanche of sounds. Then there are the fragments of thoughts, conscious and unconscious, that race from one neuron to the next. Much of this data seems random and meaningless. Indeed, for us to function, much of it must be ignored. But clearly not all. How do our brains select the relevant data? How do we decide to pay attention to the turn of a doorknob and ignore the drip of a leaky faucet? How do we become conscious of a certain stimulus, or indeed “conscious” at all?

For decades, philosophers and scientists have debated the process by which we pay attention to things, based on cognitive models of the mind. But, in the view of many modern psychologists and neurobiologists, the “mind” is not some nonmaterial and exotic essence separate from the body. All questions about the mind must ultimately be answered by studies of physical cells, explained in terms of the detailed workings of the more than eighty billion neurons in the brain. At this level, the question is: How do neurons signal to one another and to a cognitive command center that they have something important to say?

“Years ago, we were satisfied to know which areas of the brain light up under various stimuli,” the neuroscientist Robert Desimone told me during a recent visit to his office. “Now we want to know mechanisms.” Desimone directs the McGovern Institute for Brain Research at the Massachusetts Institute of Technology; youthful and trim at the age of sixty-two, he was dressed casually, in a blue pinstripe shirt, and had only the slightest gray in his hair. On the bookshelf of his tidy office were photographs of his two young children; on the wall was a large watercolor titled “Neural Gardens,” depicting a forest of tangled neurons, their spindly axons and dendrites wending downward like roots in rich soil.

Earlier this year, in an article published in the journal Science, Desimone and his colleague Daniel Baldauf reported on an experiment that shed light on the physical mechanism of paying attention. The researchers presented a series of two kinds of images — faces and houses — to their subjects in rapid succession, like passing frames of a movie, and asked them to concentrate on the faces but disregard the houses (or vice versa). The images were “tagged” by being presented at two frequencies — a new face every two-thirds of a second, a new house every half second. By monitoring the frequencies of the electrical activity of the subjects’ brains with magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI), Desimone and Baldauf could determine where in the brain the images were being directed.

The scientists found that, even though the two sets of images were presented to the eye almost on top of each other, they were processed by different places in the brain — the face images by a particular region on the surface of the temporal lobe that is known to specialize in facial recognition, and the house images by a neighboring but separate group of neurons specializing in place recognition.

Most importantly, the neurons in the two regions behaved differently. When the subjects were told to concentrate on the faces and to disregard the houses, the neurons in the face location fired in synchrony, like a group of people singing in unison, while the neurons in the house location fired like a group of people singing out of synch, each beginning at a random point in the score. When the subjects concentrated instead on houses, the reverse happened. [Continue reading…]
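
Both ingredients of the result, frequency tagging and the contrast between synchronized and desynchronized firing, are easy to sketch numerically. In the toy model below (illustrative parameters, not the actual analysis from the Science paper), each “neuron” oscillates at its population’s tag frequency; the attended population is phase-aligned, the ignored one is not, and the spectrum of the summed signal shows a far larger peak at the attended tag:

```python
import numpy as np

# Toy model of frequency tagging with synchronized vs. desynchronized
# populations. Illustrative only -- not the MEG analysis pipeline of the
# Baldauf/Desimone study.
rng = np.random.default_rng(0)
fs, T, n = 200.0, 20.0, 100             # sample rate (Hz), duration (s), neurons per population
t = np.arange(0, T, 1 / fs)

def population(freq, synchronous):
    """Summed activity of n oscillators at tag frequency `freq` (Hz)."""
    phases = np.zeros(n) if synchronous else rng.uniform(0, 2 * np.pi, n)
    return np.sin(2 * np.pi * freq * t[None, :] + phases[:, None]).sum(axis=0)

face_tag, house_tag = 1.5, 2.0           # a new face every 2/3 s, a new house every 1/2 s
signal = population(face_tag, synchronous=True) + population(house_tag, synchronous=False)

spectrum = np.abs(np.fft.rfft(signal)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1 / fs)
for name, f in (("faces", face_tag), ("houses", house_tag)):
    print(f"{name} tag ({f} Hz): power {spectrum[np.argmin(np.abs(freqs - f))]:.1f}")
# The attended, phase-aligned population dominates the spectrum at its tag.
```

Phase-aligned oscillators add coherently, so their summed amplitude grows like N, while random phases grow only like √N. Synchrony makes a population “louder” to downstream readers without any single neuron firing harder.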

Nothingness: From a childhood hallucination to the halls of theoretical physics

Alan Lightman writes: My most vivid encounter with Nothingness occurred in a remarkable experience I had as a child of 9 years old. It was a Sunday afternoon. I was standing alone in a bedroom of my home in Memphis, Tennessee, gazing out the window at the empty street, listening to the faint sound of a train passing a great distance away, and suddenly I felt that I was looking at myself from outside my body. I was somewhere in the cosmos. For a brief few moments, I had the sensation of seeing my entire life, and indeed the life of the entire planet, as a brief flicker in a vast chasm of time, with an infinite span of time before my existence and an infinite span of time afterward. My fleeting sensation included infinite space. Without body or mind, I was somehow floating in the gargantuan stretch of space, far beyond the solar system and even the galaxy, space that stretched on and on and on. I felt myself to be a tiny speck, insignificant in a vast universe that cared nothing about me or any living beings and their little dots of existence, a universe that simply was. And I felt that everything I had experienced in my young life, the joy and the sadness, and everything that I would later experience, meant absolutely nothing in the grand scheme of things. It was a realization both liberating and terrifying at once. Then, the moment was over, and I was back in my body.

The strange hallucination lasted only a minute or so. I have never experienced it since. Although Nothingness would seem to exclude awareness along with the exclusion of everything else, awareness was part of that childhood experience, but not the usual awareness I would locate within the three pounds of gray matter in my head. It was a different kind of awareness. I am not religious, and I do not believe in the supernatural. I do not think for a minute that my mind actually left my body. But for a few moments I did experience a profound absence of the familiar surroundings and thoughts we create to anchor our lives. It was a kind of Nothingness.

To understand anything, as Aristotle argued, we must understand what it is not, and Nothingness is the ultimate opposition to any thing. To understand matter, said the ancient Greeks, we must understand the “void,” or the absence of matter. Indeed, in the fifth century B.C., Leucippus argued that without the void there could be no motion because there would be no empty spaces for matter to move into. According to Buddhism, to understand our ego we must understand the ego-free state of “emptiness,” called śūnyatā. To understand the civilizing effects of society, we must understand the behavior of human beings removed from society, as William Golding so powerfully explored in his novel Lord of the Flies.

Following Aristotle, let me say what Nothingness is not. It is not a unique and absolute condition. Nothingness means different things in different contexts. From the perspective of life, Nothingness might mean death. To a physicist, it might mean the complete absence of matter and energy (an impossibility, as we will see), or even the absence of time and space. To a lover, Nothingness might mean the absence of the beloved. To a parent, it might mean the absence of children. To a painter, the absence of color. To a reader, a world without books. To a person impassioned with empathy, emotional numbness. To a theologian or philosopher like Pascal, Nothingness meant the timeless and spaceless infinity known only by God. [Continue reading…]

The way we live our lives in stories

Jonathan Gottschall: There’s a big question about what it is that makes people people. What is it that most sets our species apart from every other species? That’s the debate that I’ve been involved in lately.

When we call the species Homo sapiens, that’s an argument in the debate. It’s an argument that it is our sapience, our wisdom, our intelligence, or our big brains that most sets our species apart. Other scientists, other philosophers have pointed out that, no, a lot of the time we’re really not behaving all that rationally and reasonably. It’s our upright posture that sets us apart, or it’s our opposable thumb that allows us to do this incredible tool use, or it’s our cultural sophistication, or it’s the sophistication of language, and so on and so forth. I’m not arguing against any of those things, I’m just arguing that one thing of equal stature has typically been left off of this list, and that’s the way that people live their lives inside stories.

We live in stories all day long—fiction stories, novels, TV shows, films, interactive video games. We daydream in stories all day long. Estimates suggest we just do this for hours and hours per day — making up these little fantasies in our heads, these little fictions in our heads. We go to sleep at night to rest; the body rests, but not the brain. The brain stays up at night. What is it doing? It’s telling itself stories for about two hours per night. It’s eight or ten years out of our lifetime composing these little vivid stories in the theaters of our minds.

I’m not here to downplay any of those other entries into the “what makes us special” sweepstakes. I’m just here to say that one thing that has been left off the list is storytelling. We live our lives in stories, and it’s sort of mysterious that we do this. We’re not really sure why we do this. It’s one of these questions — storytelling — that falls in the gap between the sciences and the humanities. If you have this division into two cultures: you have the science people over here in their buildings, and the humanities people over here in their buildings. They’re writing in their own journals, and publishing their own book series, and the scientists are doing the same thing.

You have this division, and you have all this area in between the sciences and the humanities that no one is colonizing. There are all these questions in the borderlands between these disciplines that are rich and relatively unexplored. One of them is storytelling and it’s one of these questions that humanities people aren’t going to be able to figure out on their own because they don’t have a scientific toolkit that will help them gradually, painstakingly narrow down the field of competing ideas. The science people don’t really see these questions about storytelling as in their jurisdiction: “This belongs to someone else, this is the humanities’ territory, we don’t know anything about it.”

What is needed is fusion — people bringing together methods, ideas, approaches from scholarship and from the sciences to try to answer some of these questions about storytelling. Humans are addicted to stories, and they play an enormous role in human life and yet we know very, very little about this subject. [Continue reading… or watch a video of Gottschall’s talk.]

How memory speaks

Jerome Groopman writes: I began writing these words on what appeared to be an unremarkable Sunday morning. Shortly before sunrise, the bedroom still dim, I awoke and quietly made my way to the kitchen, careful not to disturb my still-sleeping wife. The dark-roast coffee was retrieved from its place in the pantry, four scoops then placed in a filter. While the coffee was brewing, I picked up The New York Times at the door. Scanning the front page, my eyes rested on an article mentioning Svoboda, the far-right Ukrainian political party (svoboda, I remembered, means “freedom”).

I prepared an egg-white omelette and toasted two slices of multigrain bread. After a few sips of coffee, fragments of the night’s dream came to mind: I am rushing to take my final examination in college chemistry, but as I enter the amphitheater where the test is given, no one is there. Am I early? Or in the wrong room? The dream was not new to me. It often occurs before I embark on a project, whether it’s an experiment in the laboratory, a drug to be tested in the clinic, or an article to write on memory.

The start of that Sunday morning seems quite mundane. But when we reflect on the manifold manifestations of memory, the mundane becomes marvelous. Memory is operative not only in recalling the meaning of svoboda, knowing who was sleeping with me in bed, and registering my dream as recurrent, but also in rote tasks: navigating the still-dark bedroom, scooping the coffee, using a knife and fork to eat breakfast. Simple activities of life, hardly noticed, reveal memory as a map, clock, and mirror, vital to our sense of place, time, and person.

This role of memory in virtually every activity of our day is put in sharp focus when it is lost. Su Meck, in I Forgot to Remember, pieces together a fascinating tale of life after suffering head trauma as a young mother. A ceiling fan fell and struck her head:

You might wonder how it feels to wake up one morning and not know who you are. I don’t know. The accident didn’t just wipe out all my memories; it hindered me from making new ones for quite some time. I awoke each day to a house full of strangers…. And this wasn’t just a few days. It was weeks before I recognized my boys when they toddled into the room, months before I knew my own telephone number, years before I was able to find my way home from anywhere. I have no more memory of those first several years after the accident than my own kids have of their first years of life.

A computed tomography (CT) scan of Meck’s brain showed swelling over the right frontal area. But neurologists were at a loss to explain the genesis of her amnesia. Memory does not exist in a single site or region of the central nervous system. There are estimated to be 10 to 100 billion neurons in the human brain, each neuron making about one thousand connections to other neurons at the junctions termed synapses. Learning, and then storing what we learn through life, involve intricate changes in the nature and number of these trillions of neuronal connections. But memory is made not only via alterations at the synaptic level. It also involves regional remodeling of parts of our cortex. Our brain is constantly changing in its elaborate circuitry and, to some degree, configuration. [Continue reading…]
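
The scale of that circuitry is worth spelling out. Taking the round figures quoted here, on the order of 10 to 100 billion neurons with about a thousand synapses each, the arithmetic behind “trillions of connections” looks like this:

```python
# Rough arithmetic behind "trillions of neuronal connections",
# using the round figures quoted in the text above.
for neurons in (10e9, 100e9):              # 10 to 100 billion neurons
    synapses = neurons * 1_000             # ~1,000 connections per neuron
    print(f"{neurons:.0e} neurons -> {synapses:.0e} synapses")
# 1e+13 to 1e+14 synapses: tens to hundreds of trillions of connections.
```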

How music hijacks our perception of time

Jonathan Berger writes: One evening, some 40 years ago, I got lost in time. I was at a performance of Schubert’s String Quintet in C major. During the second movement I had the unnerving feeling that time was literally grinding to a halt. The sensation was powerful, visceral, overwhelming. It was a life-changing moment, or, as it felt at the time, a life-changing eon.

It has been my goal ever since to compose music that usurps the perceived flow of time and commandeers the sense of how time passes. Although I’ve learned to manipulate subjective time, I still stand in awe of Schubert’s unparalleled power. Nearly two centuries ago, the composer anticipated the neurological underpinnings of time perception that science has underscored in the past few decades.

The human brain, we have learned, adjusts and recalibrates temporal perception. Our ability to encode and decode sequential information, to integrate and segregate simultaneous signals, is fundamental to human survival. It allows us to find our place in, and navigate, our physical world. But music also demonstrates that time perception is inherently subjective—and an integral part of our lives. “For the time element in music is single,” wrote Thomas Mann in his novel, The Magic Mountain. “Into a section of mortal time music pours itself, thereby inexpressibly enhancing and ennobling what it fills.” [Continue reading…]

The living death of solitary confinement

Lisa Guenther writes: I first met Five Omar Mualimm-ak at a forum on solitary confinement in New York City. He wore track shoes with his tailored suit. ‘As long as the Prison Industrial Complex keeps running, so will I,’ he explained. After hearing him speak about the connections between racism, poverty, mass incarceration and police violence, I invited Five to speak at a conference I was organising in Nashville, Tennessee. He arrived, as always, in a suit and track shoes. As we walked across campus to a conference reception, I worked up the courage to ask him how he got his name. He told me: ‘I spent five years in solitary confinement, and when I came out I was a different person.’

In an article for The Guardian last October, Five described his isolation as a process of sensory and existential annihilation:

After only a short time in solitary, I felt all of my senses begin to diminish. There was nothing to see but grey walls. In New York’s so-called special housing units, or SHUs, most cells have solid steel doors, and many do not have windows. You cannot even tape up pictures or photographs; they must be kept in an envelope. To fight the blankness, I counted bricks and measured the walls. I stared obsessively at the bolts on the door to my cell.

There was nothing to hear except empty, echoing voices from other parts of the prison. I was so lonely that I hallucinated words coming out of the wind. They sounded like whispers. Sometimes, I smelled the paint on the wall, but more often, I just smelled myself, revolted by my own scent.

There was no touch. My food was pushed through a slot. Doors were activated by buzzers, even the one that led to a literal cage directly outside of my cell for one hour per day of ‘recreation’.

Even time had no meaning in the SHU. The lights were kept on for 24 hours. I often found myself wondering if an event I was recollecting had happened that morning or days before. I talked to myself. I began to get scared that the guards would come in and kill me and leave me hanging in the cell. Who would know if something happened to me? Just as I was invisible, so was the space I inhabited.

The very essence of life, I came to learn during those seemingly endless days, is human contact, and the affirmation of existence that comes with it. Losing that contact, you lose your sense of identity. You become nothing.

Five’s experience of solitary confinement is extreme, but it’s not atypical. His feeling of disconnection from the world, to the point of losing his capacity to make sense of his own identity and existence, raises philosophical questions about the relation between sense perception, sociality, and a meaningful life. Why does prolonged isolation typically corrode a prisoner’s ability to perceive the world and to sustain a meaningful connection with his own existence? The short answer to this question is that we are social beings who rely on our interactions with other people to make sense of things. But what does it mean to exist socially, and what is the precise connection between our relations with others, our perception of the world, and the affirmation of our own existence?

My response to this question is shaped by the philosophical practice of phenomenology. Phenomenology begins with a description of lived experience and reflects on the structures that make this experience possible and meaningful. The main insight of phenomenology is that consciousness is relational. [Continue reading…]

Searching for the elephant’s genius inside the largest brain on land

Ferris Jabr writes: Many years ago, while wandering through Amboseli National Park in Kenya, an elephant matriarch named Echo came upon the bones of her former companion Emily. Echo and her family slowed down and began to inspect the remains. They stroked Emily’s skull with their trunks, investigating every crevice; they touched her skeleton gingerly with their padded hind feet; they carried around her tusks. Elephants consistently react this way to other dead elephants, but do not show much interest in deceased rhinos, buffalo or other species. Sometimes elephants will even cover their dead with soil and leaves.

What is going through an elephant’s mind in these moments? We cannot explain their behavior as an instinctual and immediate reaction to a dying or recently perished compatriot. Rather, they seem to understand—even years and years after a friend or relative’s death—that an irreversible change has taken place, that, here on the ground, is an elephant who used to be alive, but no longer is. In other words, elephants grieve.

Such grief is but one of many indications that elephants are exceptionally intelligent, social and empathic creatures. After decades of observing wild elephants—and a series of carefully controlled experiments in the last eight years—scientists now agree that elephants form lifelong kinships, talk to one another with a large vocabulary of rumbles and trumpets and make group decisions; elephants play, mimic their parents and cooperate to solve problems; they use tools, console one another when distressed, and probably have a sense of self (see “The Science Is In: Elephants Are Even Smarter Than We Realized”).

All this intellect must emerge, in one way or another, from the elephant brain—the largest of any land animal, three times as big as the human brain with individual neurons that seem to be three to five times the size of human brain cells. [Continue reading…]

Nature-deficit disorder and the effects of selective attention

Richard Louv writes: Not long ago, from a vantage point on a high bluff above a shoreline, Carol Birrell watched a group of high school students as they hiked through a park that was bordered on one side by a bay of the blue Pacific and on the other by a subtropical ecosystem.

Birrell, who teaches nature education at the Centre for Education Research, University of Western Sydney, described the scene: “All had their heads lowered and backs bent with eyes focused on their feet like blinkered horses.” The scene also reminded her of how children walk along fixated on their cell phone screens.

Not more than 100 meters from the hikers, in the bay, a dolphin was slowly circled by three other dolphins. They were splashing loudly. And then it happened.

“A tiny vapor spout joining the group of larger spouts. A dolphin had given birth!”

The students never saw it. They had walked right past this once-in-a-lifetime event without looking up.

Surely many other people on such an outing would have turned and looked. But in an increasingly distracting, virtual environment, many of us spend as much or more time blocking out our senses than using and growing them.

“What are all of us missing out on when we rush through the bush, rush through life?” Birrell wonders.

At least these students made it to the sea.

In San Diego, where I live, Oceans Discovery Institute, a nature education organization, conducted an informal study of local inner-city children and found that approximately 90 percent of these children did not know how to swim, 95 percent had never been in a boat, and 34 percent had never been to the Pacific Ocean – less than 20 minutes away.

Among the similarities between Americans and Australians is a shared reputation for being an outdoors-oriented people. But Australians (who live in the world’s most urbanized nation), like Americans, are experiencing what I’ve called nature-deficit disorder. That’s not a medical diagnosis, but a metaphor. [Continue reading…]

The blinkered awareness that Louv writes about extends much more widely than our relationship with nature.

A dramatic example which drew widespread concern occurred last year in San Francisco when dozens of train passengers, whose attention extended no wider than the screens of their phones, failed to notice a gunman brandishing his weapon multiple times. No one looked up, that is, until he randomly shot a passenger in the back, killing him.

While technology physically reinforces this type of selective attention, the habit is also becoming more entrenched psychologically and culturally as we construct personal worlds populated by the people, ideas, styles, forms, and networks of association with which we experience affinity.

More and more we live in worlds of our own making and as we do so we are losing touch with the outside world — a world which constantly presents itself but which we have multiple and multiplying means to ignore.

The awareness which nature requires is one with a 360-degree horizon. It is one in which cognitive preoccupations must not rise to a level where they block sensory awareness. It rests on an intuitive understanding that we cannot sustainably exist separated from everything around us.

To the extent that a sense of separation is becoming endemic in human experience, it means we are not only losing touch with nature but also losing touch with what it means to be alive.

Douglas Hofstadter — Research on artificial intelligence is sidestepping the core question: how do people think?

Douglas Hofstadter is a cognitive scientist at Indiana University and the Pulitzer Prize-winning author of Gödel, Escher, Bach: An Eternal Golden Braid.

Popular Mechanics: You’ve said in the past that IBM’s Jeopardy-playing computer, Watson, isn’t deserving of the term artificial intelligence. Why?

Douglas Hofstadter: Well, artificial intelligence is a slippery term. It could refer to just getting machines to do things that seem intelligent on the surface, such as playing chess well or translating from one language to another on a superficial level — things that are impressive if you don’t look at the details. In that sense, we’ve already created what some people call artificial intelligence. But if you mean a machine that has real intelligence, that is thinking — that’s inaccurate. Watson is basically a text search algorithm connected to a database just like Google search. It doesn’t understand what it’s reading. In fact, read is the wrong word. It’s not reading anything because it’s not comprehending anything. Watson is finding text without having a clue as to what the text means. In that sense, there’s no intelligence there. It’s clever, it’s impressive, but it’s absolutely vacuous.
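
To make the “text search algorithm connected to a database” point concrete, here is a deliberately crude sketch of that style of system (a toy, not IBM’s actual Watson pipeline), which “answers” questions by surface word overlap alone:

```python
# A toy "answer by text search" engine in the style Hofstadter describes:
# score stored sentences by word overlap with the question and return the
# best match. Pure illustration -- Watson's real pipeline was far more
# elaborate, but the point stands: retrieval here involves no comprehension.

documents = [
    "Johannes Gutenberg introduced the printing press to Europe around 1440.",
    "The Great Barrier Reef is the world's largest coral reef system.",
    "Marie Curie won Nobel Prizes in both physics and chemistry.",
]

def answer(question: str) -> str:
    q = set(question.lower().replace("?", "").split())
    # Return the sentence sharing the most surface words with the question.
    return max(documents, key=lambda doc: len(q & set(doc.lower().split())))

print(answer("Who introduced the printing press to Europe?"))
# Correct-looking output, produced by counting shared words -- nothing here
# models what "printing", "press", or "Europe" mean.
```

It often returns a correct-looking answer, yet nothing in it models what any of the words mean, which is the vacuousness Hofstadter is pointing at.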

Do you think we’ll start seeing diminishing returns from a Watson-like approach to AI?

I can’t really predict that. But what I can say is that I’ve monitored Google Translate — which uses a similar approach — for many years. Google Translate is developing and it’s making progress because the developers are inventing new, clever ways of milking the quickness of computers and the vastness of its database. But it’s not making progress at all in the sense of understanding your text, and you can still see it falling flat on its face a lot of the time. And I know it’ll never produce polished [translated] text, because real translating involves understanding what is being said and then reproducing the ideas that you just heard in a different language. Translation has to do with ideas, it doesn’t have to do with words, and Google Translate is about words triggering other words.

So why are AI researchers so focused on building programs and computers that don’t do anything like thinking?

They’re not studying the mind and they’re not trying to find out the principles of intelligence, so research may not be the right word for what drives people in the field that today is called artificial intelligence. They’re doing product development.

I might say though, that 30 to 40 years ago, when the field was really young, artificial intelligence wasn’t about making money, and the people in the field weren’t driven by developing products. It was about understanding how the mind works and trying to get computers to do things that the mind can do. The mind is very fluid and flexible, so how do you get a rigid machine to do very fluid things? That’s a beautiful paradox and very exciting, philosophically. [Continue reading…]

Slow-motion world for small animals

BBC News reports: Smaller animals tend to perceive time as if it is passing in slow motion, a new study has shown.

This means that they can observe movement on a finer timescale than bigger creatures, allowing them to escape from larger predators.

Insects and small birds, for example, can see more information in one second than a larger animal such as an elephant.

The work is published in the journal Animal Behaviour.

“The ability to perceive time on very small scales may be the difference between life and death for fast-moving organisms such as predators and their prey,” said lead author Kevin Healy, at Trinity College Dublin (TCD), Ireland.

The reverse was found in bigger animals, which may miss things that smaller creatures can rapidly spot. [Continue reading…]
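
A standard measure of temporal perception in work like this is the critical flicker fusion frequency (CFF): the fastest flicker an animal can still resolve as discrete events, so the finest perceivable interval is roughly 1/CFF. A minimal sketch with illustrative CFF values (the exact numbers vary by species and by study):

```python
# Temporal resolution via critical flicker fusion frequency (CFF):
# the finest time slice an animal can resolve is roughly 1/CFF.
# The CFF values below are rough illustrative figures; exact numbers
# vary across studies and measurement methods.

cff_hz = {
    "blowfly": 250,
    "small bird": 100,
    "human": 60,
    "leatherback turtle": 15,
}

for animal, cff in cff_hz.items():
    print(f"{animal:>18}: ~{cff} 'frames'/s, finest interval ~{1000 / cff:.1f} ms")
# A movement lasting 0.1 s spans ~25 of the fly's perceptual moments
# but only ~6 of ours -- to the fly, we swat in slow motion.
```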

A theory of how networks become conscious

Wired: It’s a question that’s perplexed philosophers for centuries and scientists for decades: Where does consciousness come from? We know it exists, at least in ourselves. But how it arises from chemistry and electricity in our brains is an unsolved mystery.

Neuroscientist Christof Koch, chief scientific officer at the Allen Institute for Brain Science, thinks he might know the answer. According to Koch, consciousness arises within any sufficiently complex, information-processing system. All animals, from humans on down to earthworms, are conscious; even the internet could be. That’s just the way the universe works.

“The electric charge of an electron doesn’t arise out of more elemental properties. It simply has a charge,” says Koch. “Likewise, I argue that we live in a universe of space, time, mass, energy, and consciousness arising out of complex systems.”

What Koch proposes is a scientifically refined version of an ancient philosophical doctrine called panpsychism — and, coming from someone else, it might sound more like spirituality than science. But Koch has devoted the last three decades to studying the neurological basis of consciousness. His work at the Allen Institute now puts him at the forefront of the BRAIN Initiative, the massive new effort to understand how brains work, which will begin next year.

Koch’s insights have been detailed in dozens of scientific articles and a series of books, including last year’s Consciousness: Confessions of a Romantic Reductionist. WIRED talked to Koch about his understanding of this age-old question. [Continue reading…]

The inexact mirrors of the human mind

Douglas Hofstadter, author of Gödel, Escher, Bach: An Eternal Golden Braid (GEB), published in 1979, and one of the pioneers of artificial intelligence (AI), gained prominence right at a juncture when the field was abandoning its interest in human intelligence.

James Somers writes: In GEB, Hofstadter was calling for an approach to AI concerned less with solving human problems intelligently than with understanding human intelligence — at precisely the moment that such an approach, having borne so little fruit, was being abandoned. His star faded quickly. He would increasingly find himself out of a mainstream that had embraced a new imperative: to make machines perform in any way possible, with little regard for psychological plausibility.

Take Deep Blue, the IBM supercomputer that bested the chess grandmaster Garry Kasparov. Deep Blue won by brute force. For each legal move it could make at a given point in the game, it would consider its opponent’s responses, its own responses to those responses, and so on for six or more steps down the line. With a fast evaluation function, it would calculate a score for each possible position, and then make the move that led to the best score. What allowed Deep Blue to beat the world’s best humans was raw computational power. It could evaluate up to 330 million positions a second, while Kasparov could evaluate only a few dozen before having to make a decision.
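
The scheme Somers describes is depth-limited minimax: enumerate the legal moves, recurse a few plies over the opponent’s replies, score the horizon positions with an evaluation function, and back the scores up. Below is a minimal generic sketch (Deep Blue’s real search added alpha-beta pruning, search extensions, and purpose-built evaluation hardware):

```python
# Depth-limited minimax: the brute-force scheme sketched above.
# A generic illustration, not Deep Blue's actual implementation.

def minimax(position, depth, maximizing, legal_moves, apply_move, evaluate):
    """Score `position` by looking `depth` plies ahead."""
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position)            # static score at the search horizon
    child_scores = [
        minimax(apply_move(position, m), depth - 1, not maximizing,
                legal_moves, apply_move, evaluate)
        for m in moves
    ]
    return max(child_scores) if maximizing else min(child_scores)

def best_move(position, depth, legal_moves, apply_move, evaluate):
    """Choose the move whose subtree backs up the best score for the mover."""
    return max(legal_moves(position),
               key=lambda m: minimax(apply_move(position, m), depth - 1, False,
                                     legal_moves, apply_move, evaluate))

# Toy game: players alternately add 1 or 2 to a counter until it reaches 10;
# the evaluation simply prefers a larger total for the maximizing player.
print(best_move(0, 6,
                legal_moves=lambda p: [1, 2] if p < 10 else [],
                apply_move=lambda p, m: p + m,
                evaluate=lambda p: p))       # -> 2
```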

Hofstadter wanted to ask: Why conquer a task if there’s no insight to be had from the victory? “Okay,” he says, “Deep Blue plays very good chess—so what? Does that tell you something about how we play chess? No. Does it tell you about how Kasparov envisions, understands a chessboard?” A brand of AI that didn’t try to answer such questions—however impressive it might have been—was, in Hofstadter’s mind, a diversion. He distanced himself from the field almost as soon as he became a part of it. “To me, as a fledgling AI person,” he says, “it was self-evident that I did not want to get involved in that trickery. It was obvious: I don’t want to be involved in passing off some fancy program’s behavior for intelligence when I know that it has nothing to do with intelligence. And I don’t know why more people aren’t that way.”

One answer is that the AI enterprise went from being worth a few million dollars in the early 1980s to billions by the end of the decade. (After Deep Blue won in 1997, the value of IBM’s stock increased by $18 billion.) The more staid an engineering discipline AI became, the more it accomplished. Today, on the strength of techniques bearing little relation to the stuff of thought, it seems to be in a kind of golden age. AI pervades heavy industry, transportation, and finance. It powers many of Google’s core functions, Netflix’s movie recommendations, Watson, Siri, autonomous drones, the self-driving car.

“The quest for ‘artificial flight’ succeeded when the Wright brothers and others stopped imitating birds and started … learning about aerodynamics,” Stuart Russell and Peter Norvig write in their leading textbook, Artificial Intelligence: A Modern Approach. AI started working when it ditched humans as a model, because it ditched them. That’s the thrust of the analogy: Airplanes don’t flap their wings; why should computers think?

It’s a compelling point. But it loses some bite when you consider what we want: a Google that knows, in the way a human would know, what you really mean when you search for something. Russell, a computer-science professor at Berkeley, said to me, “What’s the combined market cap of all of the search companies on the Web? It’s probably four hundred, five hundred billion dollars. Engines that could actually extract all that information and understand it would be worth 10 times as much.”

This, then, is the trillion-dollar question: Will the approach undergirding AI today — an approach that borrows little from the mind, that’s grounded instead in big data and big engineering — get us to where we want to go? How do you make a search engine that understands if you don’t know how you understand? Perhaps, as Russell and Norvig politely acknowledge in the last chapter of their textbook, in taking its practical turn, AI has become too much like the man who tries to get to the moon by climbing a tree: “One can report steady progress, all the way to the top of the tree.”

Consider that computers today still have trouble recognizing a handwritten A. In fact, the task is so difficult that it forms the basis for CAPTCHAs (“Completely Automated Public Turing tests to tell Computers and Humans Apart”), those widgets that require you to read distorted text and type the characters into a box before, say, letting you sign up for a Web site.

In Hofstadter’s mind, there is nothing to be surprised about. To know what all A’s have in common would be, he argued in a 1982 essay, to “understand the fluid nature of mental categories.” And that, he says, is the core of human intelligence. [Continue reading…]

Henry Gustave Molaison — the man who forgot everything

Steven Shapin writes: In the movie “Groundhog Day,” the TV weatherman Phil Connors finds himself living the same day again and again. This has its advantages, as he has hundreds of chances to get things right. He can learn to speak French, to sculpt ice, to play jazz piano, and to become the kind of person with whom his beautiful colleague Rita might fall in love. But it’s a torment, too. An awful solitude flows from the fact that he’s the only one in Punxsutawney, Pennsylvania, who knows that something has gone terribly wrong with time. Nobody else seems to have any memory of all the previous iterations of the day. What is a new day for Rita is another of the same for Phil. Their realities are different—what passes between them in Phil’s world leaves no trace in hers—as are their senses of selfhood: Phil knows Rita as she cannot know him, because he knows her day after day after day, while she knows him only today. Time, reality, and identity are each curated by memory, but Phil’s and Rita’s memories work differently. From Phil’s point of view, she, and everyone else in Punxsutawney, is suffering from amnesia.

Amnesia comes in distinct varieties. In “retrograde amnesia,” a movie staple, victims are unable to retrieve some or all of their past knowledge — Who am I? Why does this woman say that she’s my wife? — but they can accumulate memories for everything that they experience after the onset of the condition. In the less cinematically attractive “anterograde amnesia,” memory of the past is more or less intact, but those who suffer from it can’t lay down new memories; every person encountered every day is met for the first time. In extremely unfortunate cases, retrograde and anterograde amnesia can occur in the same individual, who is then said to suffer from “transient global amnesia,” a condition that is, thankfully, temporary. Amnesias vary in their duration, scope, and originating events: brain injury, stroke, tumors, epilepsy, electroconvulsive therapy, and psychological trauma are common causes, while drug and alcohol use, malnutrition, and chemotherapy may play a part.

There isn’t a lot that modern medicine can do for amnesiacs. If cerebral bleeding or clots are involved, these may be treated, and occupational and cognitive therapy can help in some cases. Usually, either the condition goes away or amnesiacs learn to live with it as best they can — unless the notion of learning is itself compromised, along with what it means to have a life. Then, a few select amnesiacs disappear from systems of medical treatment and reappear as star players in neuroscience and cognitive psychology.

No star ever shone more brightly in these areas than Henry Gustave Molaison, a patient who, for more than half a century, until his death, in 2008, was known only as H.M., and who is now the subject of a book, “Permanent Present Tense” (Basic), by Suzanne Corkin, the neuroscientist most intimately involved in his case. [Continue reading…]
