The neuroscience of a sense of place

Rick Paulas writes: Comedian Eddie Pepitone once said — and I’m paraphrasing here — that there are no great neighborhoods in Los Angeles, only great blocks. The stretch of Echo Park on Sunset Boulevard between Glendale and Logan is one. The establishments on that short stretch include an upscale wine bar, a hipster concert venue, a vegan restaurant, a deep dish pizza place, cheap thrift stores, not-so-cheap “vintage” stores selling roughly the same stuff, a check-cashing joint, a few fast food chains, and even a supermarket for time travelers.

While it’s not the most diverse cross-section you’ll find in the city, the block can serve as a social barometer when it comes up in conversation. Mention the stretch, and the landmark that puts a gleam of recognition in the other person’s eye tells you everything about the socioeconomic sphere they inhabit.

Blocks and neighborhoods aren’t concrete concepts that mean the same thing to everyone, unlike, say, “apple” or “sky.” Points of reference shift depending on who is using them, so blocks and neighborhoods are more like alternate realities laid atop one another, like plastic sheets on an overhead projector. There’s even a phrase for the study of this murky concept: mental maps. They can help us understand why some neighborhoods thrive, why others die, and how change happens.

The theory of mental (or cognitive) maps was developed in 1960 by Massachusetts Institute of Technology professor Kevin Lynch in his book The Image of the City. Rather than relying on how cartographers saw a city, Lynch asked residents to draw a map, from memory, depicting how their city was arranged. He found that five elements compose a person’s understanding of where they are: landmarks, paths, edges, districts, and nodes. Landmarks are reference points, paths are the routes that connect them, edges mark boundaries, districts are larger areas with a recognizable character, and nodes are the focal points, such as intersections and squares, where paths and activity converge.
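
Lynch’s five elements lend themselves to a graph-like representation: landmarks as vertices, paths as the connections between them. The sketch below is purely illustrative; the places and connections are invented, loosely echoing the Sunset Boulevard stretch described above.

```python
# Hypothetical mini mental map using Lynch's five element types.
mental_map = {
    "landmarks": ["wine bar", "concert venue", "supermarket"],
    "paths": [("wine bar", "concert venue"), ("concert venue", "supermarket")],
    "edges": ["Glendale Blvd", "Logan St"],          # boundaries
    "districts": ["Echo Park"],                       # larger areas
    "nodes": ["Sunset & Logan intersection"],         # focal points
}

def connected_landmarks(mmap, start):
    """Return the set of landmarks reachable from `start` by following paths."""
    reached, frontier = {start}, [start]
    while frontier:
        here = frontier.pop()
        for a, b in mmap["paths"]:
            for nxt in ((b,) if a == here else (a,) if b == here else ()):
                if nxt not in reached:
                    reached.add(nxt)
                    frontier.append(nxt)
    return reached

r = connected_landmarks(mental_map, "wine bar")
```

Two people can share the same streets yet hold different graphs: swap out the landmark list and the reachable set changes with it.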

Neuroscience backs up Lynch’s findings. In 1971, John O’Keefe discovered “place cells” in the hippocampus, neurons that fire when an animal occupies a particular location in its environment. These cells compute a current location based on what the animal can see, as well as through “dead reckoning”: subconscious updating based on where the animal was a moment ago and how quickly it has been traveling. In 2005, the husband-and-wife team of Edvard and May-Britt Moser discovered “grid cells,” neurons that fire in a grid-like pattern to measure distance and direction. O’Keefe and the Mosers shared the 2014 Nobel Prize in Physiology or Medicine for these discoveries. [Continue reading…]
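
Dead reckoning itself is easy to state computationally: integrate speed and heading over time to update an estimated position, with no external reference at all. This is a minimal numerical sketch with hypothetical names and values, not a model of how place cells actually compute.

```python
import math

def dead_reckon(start, steps):
    """Estimate position by integrating heading and speed over time,
    with no external landmark -- the essence of dead reckoning."""
    x, y = start
    for heading_rad, speed, dt in steps:
        x += speed * dt * math.cos(heading_rad)
        y += speed * dt * math.sin(heading_rad)
    return x, y

# Walk east for 2 s at 1 m/s, then north for 3 s at 2 m/s.
pos = dead_reckon((0.0, 0.0), [(0.0, 1.0, 2.0), (math.pi / 2, 2.0, 3.0)])
```

Errors accumulate with every step, which is why animals (and ships) periodically correct the estimate against visible landmarks.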

What rats in a maze can teach us about our sense of direction

By Francis Carpenter, UCL and Caswell Barry, UCL

London’s taxi drivers have to pass an exam in which they are asked to name the shortest route between any two places within six miles of Charing Cross – an area with more than 60,000 roads. We know from brain scans that learning “the knowledge” – as the drivers call it – increases the size of their hippocampi, the part of the brain crucial to spatial memory.
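
Naming the shortest route between two points is, computationally, a shortest-path search over a road graph. The sketch below uses Dijkstra’s algorithm; the toy network and distances are invented for illustration.

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm: returns (total_distance, route) over a
    weighted graph given as {node: [(neighbor, distance), ...]}."""
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        dist, node, route = heapq.heappop(queue)
        if node == goal:
            return dist, route
        if node in seen:
            continue
        seen.add(node)
        for nbr, d in graph.get(node, []):
            if nbr not in seen:
                heapq.heappush(queue, (dist + d, nbr, route + [nbr]))
    return float("inf"), []

# Toy road network (distances in miles, invented).
roads = {
    "Charing Cross": [("Strand", 0.2), ("Trafalgar Sq", 0.1)],
    "Trafalgar Sq": [("Strand", 0.3)],
    "Strand": [("Aldwych", 0.4)],
}
dist, route = shortest_route(roads, "Charing Cross", "Aldwych")
```

What makes “the knowledge” remarkable is that cabbies run something like this search over tens of thousands of roads from memory alone.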

Now, new research suggests that bigger hippocampi may not be the only neurological benefit of driving a black cab. While the average person likely has many separate mental maps for different areas of London, the hours cabbies spend navigating may result in the joining of these maps into a single, global map.

[Read more…]

The rhythm of consciousness

Gregory Hickok writes: In 1890, the American psychologist William James famously likened our conscious experience to the flow of a stream. “A ‘river’ or a ‘stream’ are the metaphors by which it is most naturally described,” he wrote. “In talking of it hereafter, let us call it the stream of thought, of consciousness, or of subjective life.”

While there is no disputing the aptness of this metaphor in capturing our subjective experience of the world, recent research has shown that the “stream” of consciousness is, in fact, an illusion. We actually perceive the world in rhythmic pulses rather than as a continuous flow.

Some of the first hints of this new understanding came as early as the 1920s, when physiologists discovered brain waves: rhythmic electrical currents measurable on the surface of the scalp by means of electroencephalography. Subsequent research cataloged a spectrum of such rhythms (alpha waves, delta waves and so on) that correlated with various mental states, such as calm alertness and deep sleep.
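
The rhythms in that catalog are conventionally defined by frequency range. As a toy illustration, here is how one might identify the dominant band in a sampled signal using a naive discrete Fourier transform; the band boundaries are typical textbook values and vary slightly across sources.

```python
import math

# Typical EEG band boundaries in Hz (conventions differ slightly by source).
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 100)}

def dominant_band(signal, sample_rate):
    """Find the strongest frequency via a naive DFT and name its EEG band."""
    n = len(signal)
    best_freq, best_power = 0.0, 0.0
    for k in range(1, n // 2):
        re = sum(s * math.cos(2 * math.pi * k * t / n) for t, s in enumerate(signal))
        im = sum(s * math.sin(2 * math.pi * k * t / n) for t, s in enumerate(signal))
        power = re * re + im * im
        if power > best_power:
            best_power, best_freq = power, k * sample_rate / n
    for name, (lo, hi) in BANDS.items():
        if lo <= best_freq < hi:
            return name, best_freq
    return "unknown", best_freq

# One second of a pure 10 Hz oscillation sampled at 128 Hz.
sig = [math.sin(2 * math.pi * 10 * t / 128) for t in range(128)]
band, freq = dominant_band(sig, 128)
```

Real EEG analysis uses windowed FFTs and power spectra rather than this brute-force loop, but the classification idea is the same.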

Researchers also found that the properties of these rhythms varied with perceptual or cognitive events. The phase and amplitude of your brain waves, for example, might change if you saw or heard something, or if you increased your concentration on something, or if you shifted your attention.

But those early discoveries themselves did not change scientific thinking about the stream-like nature of conscious perception. Instead, brain waves were largely viewed as a tool for indexing mental experience, much like the waves that a ship generates in the water can be used to index the ship’s size and motion (e.g., the bigger the waves, the bigger the ship).

Recently, however, scientists have flipped this thinking on its head. We are exploring the possibility that brain rhythms are not merely a reflection of mental activity but a cause of it, helping shape perception, movement, memory and even consciousness itself. [Continue reading…]

What ants can teach us about the operation of the human brain

Carrie Arnold writes: Deborah Gordon spent the morning of August 27 watching a group of harvester ants foraging for seeds outside the dusty town of Rodeo, N.M. Long before the first rays of sun hit the desert floor, a group of patroller ants was already on the move. Their task was to find out whether the area near the nest was free from flash floods, high winds, and predators. If they didn’t return to the nest, departing foragers would know it wasn’t safe to go search for food.

When the patrollers returned and the first foragers did leave, they scattered in all directions, hunting for the fat-laden, energy-rich seeds on which the colony depends. Other foragers waited in the entrance of the nest for the first wave to return. If lots of food was nearby, foragers would return and depart quickly, creating a massive chain reaction. If food was scarce, however, the second group of foragers might not leave the nest at all.

“It’s a brilliant system. The ants can take advantage of sudden windfalls of food but they don’t waste energy and resources if there’s nothing there,” said Gordon, who is an ecologist at Stanford University.

The behavior of each individual in the group is set by the rate at which it meets other ants and a set of basic rules. Its behavior alters that of its neighbors, which in turn affects the original ant, in a classic example of feedback. The result is astonishing, complex behavior. “Individually, an ant is dumb,” Gordon says. She gazes off into the distance and inhales sharply. “But the colony? That’s where the intelligence is.”
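
The rate-based feedback Gordon describes can be caricatured in a few lines of simulation: foragers depart at a rate that grows with the rate of recently returning foragers. This is an illustrative toy model, not Gordon’s actual model; every parameter below is invented.

```python
import random

def simulate_foraging(food_richness, ticks=200, seed=1):
    """Toy feedback model: each of 10 waiting foragers leaves with a
    probability that grows with the number of recently returned foragers.
    `food_richness` is the (invented) chance a trip succeeds quickly."""
    random.seed(seed)
    recent_returns = 1          # seed activity from the patrollers
    departures = 0
    for _ in range(ticks):
        # Departure probability rises with feedback from returning ants.
        p_leave = min(1.0, 0.05 + 0.2 * recent_returns)
        left = sum(random.random() < p_leave for _ in range(10))
        departures += left
        # Successful trips feed back as returns on the next tick.
        recent_returns = sum(random.random() < food_richness for _ in range(left))
    return departures

rich = simulate_foraging(food_richness=0.9)
scarce = simulate_foraging(food_richness=0.1)
```

With abundant food the loop self-excites into a chain reaction of departures; with scarce food the returns dry up and foraging collapses back to a trickle, just as described above.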

About 110 miles from Gordon’s offices in Palo Alto, Calif., Mark Goldman studies a different kind of complex, emergent behavior. Goldman is a neuroscientist at the University of California, Davis. For most of his life, he was never particularly interested in ants. But when he traveled to Stanford in 2012 to plan some experiments with a colleague who had recently attended one of Gordon’s talks, something clicked.

“As I watched films of these ant colonies, it looked like what was happening at the synapse of neurons. Both of these systems accumulate evidence about their inputs—returning ants or incoming voltage pulses—to make their decisions about whether to generate an output—an outgoing forager or a packet of neurotransmitter,” Goldman said. On his next trip to Stanford, he extended his stay. An unusual research collaboration had begun to coalesce: Ants would be used to study the brain, and the brain, to study ants. [Continue reading…]

A deficit in patience produces the illusion of a shortage of time

Chelsea Wald writes: Not long ago I diagnosed myself with the recently identified condition of sidewalk rage. It’s most pronounced when it comes to a certain friend who is a slow walker. Last month, as we sashayed our way to dinner, I found myself biting my tongue, thinking, I have to stop going places with her if I ever want to … get there!

You too can measure yourself on the “Pedestrian Aggressiveness Syndrome Scale,” a tool developed by University of Hawaii psychologist Leon James. While walking in a crowd, do you find yourself “acting in a hostile manner (staring, presenting a mean face, moving closer or faster than expected)” and “enjoying thoughts of violence?”

Slowness rage is not confined to the sidewalk, of course. Slow drivers, slow Internet, slow grocery lines — they all drive us crazy. Even the opening of this article may be going on a little too long for you. So I’ll get to the point. Slow things drive us crazy because the fast pace of society has warped our sense of timing. Things that our great-great-grandparents would have found miraculously efficient now drive us around the bend. Patience is a virtue that’s been vanquished in the Twitter age.

Once upon a time, cognitive scientists tell us, patience and impatience had an evolutionary purpose. They constituted a yin-and-yang balance, a finely tuned internal timer that told us when we had waited too long for something and should move on. When that timer went buzz, it was time to stop foraging at an unproductive patch or abandon a failing hunt.

“Why are we impatient? It’s a heritage from our evolution,” says Marc Wittmann, a psychologist at the Institute for Frontier Areas of Psychology and Mental Health in Freiburg, Germany. Impatience made sure we didn’t die from spending too long on a single unrewarding activity. It gave us the impulse to act.

But that good thing is gone. The fast pace of society has thrown our internal timer out of balance. It creates expectations that can’t be rewarded fast enough — or rewarded at all. When things move more slowly than we expect, our internal timer even plays tricks on us, stretching out the wait, summoning anger out of proportion to the delay. [Continue reading…]

Your gut tells your mind, more than you may imagine

Charles Schmidt writes: The notion that the state of our gut governs our state of mind dates back more than 100 years. Many 19th- and early 20th-century scientists believed that accumulating wastes in the colon triggered a state of “auto-intoxication,” whereby poisons emanating from the gut produced infections that were in turn linked with depression, anxiety and psychosis. Patients were treated with colonic purges and even bowel surgeries until these practices were dismissed as quackery.

The ongoing exploration of the human microbiome promises to bring the link between the gut and the brain into clearer focus. Scientists are increasingly convinced that the vast assemblage of microfauna in our intestines may have a major impact on our state of mind. The gut-brain axis seems to be bidirectional — the brain acts on gastrointestinal and immune functions that help to shape the gut’s microbial makeup, and gut microbes make neuroactive compounds, including neurotransmitters and metabolites that also act on the brain. These interactions could occur in various ways: microbial compounds communicate via the vagus nerve, which connects the brain and the digestive tract, and microbially derived metabolites interact with the immune system, which maintains its own communication with the brain. Sven Pettersson, a microbiologist at the Karolinska Institute in Stockholm, has recently shown that gut microbes help to control leakage through both the intestinal lining and the blood-brain barrier, which ordinarily protects the brain from potentially harmful agents.

Microbes may have their own evolutionary reasons for communicating with the brain. They need us to be social, says John Cryan, a neuroscientist at University College Cork in Ireland, so that they can spread through the human population. Cryan’s research shows that when bred in sterile conditions, germ-free mice lacking in intestinal microbes also lack an ability to recognize other mice with whom they interact. In other studies, disruptions of the microbiome induced behavior in mice that mimicked human anxiety, depression and even autism. In some cases, scientists restored more normal behavior by treating their test subjects with certain strains of benign bacteria. Nearly all the data so far are limited to mice, but Cryan believes the findings provide fertile ground for developing analogous compounds, which he calls psychobiotics, for humans. “That dietary treatments could be used as either adjunct or sole therapy for mood disorders is not beyond the realm of possibility,” he says. [Continue reading…]

Neurological conductors that keep the brain in time and tune

Harvard Gazette: Like musical sounds, different states of mind are defined by distinct, characteristic waveforms, recognizable frequencies and rhythms in the brain’s electrical field. When the brain is alert and performing complex computations, the cerebral cortex — the wrinkled outer surface of the brain — thrums with cortical oscillations in the gamma frequency band. In some neurological disorders like schizophrenia, however, these waves are out of tune and the rhythm is out of sync.

New research led by Harvard Medical School (HMS) scientists at the VA Boston Healthcare System (VABHS) has identified a specific class of neurons — basal forebrain GABA parvalbumin neurons, or PV neurons — that trigger these waves, acting as neurological conductors that prompt the cortex to hum rhythmically and in tune. (GABA is gamma-aminobutyric acid, a major neurotransmitter in the brain.)

The results appear this week in the journal Proceedings of the National Academy of Sciences.

“This is a move toward a unified theory of consciousness control,” said co-senior author Robert McCarley, HMS professor of psychiatry and head of the Department of Psychiatry at VA Boston Healthcare. “We’ve known that the basal forebrain is important in turning consciousness on and off in sleep and wake, but now we’ve found that these specific cells also play a key role in triggering the synchronized rhythms that characterize conscious thought, perception, and problem-solving.” [Continue reading…]

Music permeates our brain

Jonathan Berger writes: Neurological research has shown that vivid musical hallucinations are more than metaphorical. They don’t just feel real, they are, from a cognitive perspective, entirely real. In the absence of sound waves, brain activation is strikingly similar to that triggered by actual external sounds. Why should that be?

Music, repetitive and patterned by nature, provides structure within which we find anchors, context, and a basis for organizing time. In the prehistory of civilization, humans likely found comfort in the audible patterns and structures that accompanied their circadian rhythms — from the coo of a mourning dove to the nocturnal chirps of crickets. With the evolution of music, a more malleable framework for segmenting and structuring time developed. Humans generated predictable and replicable temporal patterns by drumming, vocalizing, blowing, and plucking. This metered, temporal framework provides an internal world in which we construct predictions about the future — what will happen next, and when it will happen.

This process spotlights the brain itself. The composer Karlheinz Stockhausen hyphenated the term for his craft to underscore the literal meaning of “com-pose” — to put together elements, from com (“with” or “together”) and pose (“put” or “place”). When we imagine music, we literally compose — sometimes recognizable tunes, other times novel combinations of patterns and musical ideas. Toddlers sing themselves to sleep with vocalizations of musical snippets they are conjuring up in their imagination. Typically, these “spontaneous melodies,” as they are referred to by child psychologists, comprise fragments of salient features of multiple songs that the child is piecing together. In short, we do not merely retrieve music that we store in memory. Rather, a supremely complex web of associations can be stirred and generated as we compose music in our minds.

Today, amid widely disseminated music, we are barraged by a cacophony of disparate musical patterns — more often than not uninvited and unwanted — and likely spend more time than ever obsessing over imagined musical fragments. The brain is a composer whose music orchestrates our lives. And right now the brain is working overtime. [Continue reading…]

Meet Walter Pitts, the homeless genius who revolutionized artificial intelligence

Amanda Gefter writes: Walter Pitts was used to being bullied. He’d been born into a tough family in Prohibition-era Detroit, where his father, a boiler-maker, had no trouble raising his fists to get his way. The neighborhood boys weren’t much better. One afternoon in 1935, they chased him through the streets until he ducked into the local library to hide. The library was familiar ground, where he had taught himself Greek, Latin, logic, and mathematics—better than home, where his father insisted he drop out of school and go to work. Outside, the world was messy. Inside, it all made sense.

Not wanting to risk another run-in that night, Pitts stayed hidden until the library closed for the evening. Alone, he wandered through the stacks of books until he came across Principia Mathematica, a three-volume tome written by Bertrand Russell and Alfred North Whitehead between 1910 and 1913, which attempted to reduce all of mathematics to pure logic. Pitts sat down and began to read. For three days he remained in the library until he had read each volume cover to cover — nearly 2,000 pages in all — and had identified several mistakes. Deciding that Bertrand Russell himself needed to know about these, the boy drafted a letter to Russell detailing the errors. Not only did Russell write back, he was so impressed that he invited Pitts to study with him as a graduate student at Cambridge University in England. Pitts couldn’t oblige him, though — he was only 12 years old. But three years later, when he heard that Russell would be visiting the University of Chicago, the 15-year-old ran away from home and headed for Illinois. He never saw his family again. [Continue reading…]

A universal logic of discernment

Natalie Wolchover writes: When in 2012 a computer learned to recognize cats in YouTube videos and just last month another correctly captioned a photo of “a group of young people playing a game of Frisbee,” artificial intelligence researchers hailed yet more triumphs in “deep learning,” the wildly successful set of algorithms loosely modeled on the way brains grow sensitive to features of the real world simply through exposure.

Using the latest deep-learning protocols, computer models consisting of networks of artificial neurons are becoming increasingly adept at image, speech and pattern recognition — core technologies in robotic personal assistants, complex data analysis and self-driving cars. But for all their progress training computers to pick out salient features from other, irrelevant bits of data, researchers have never fully understood why these algorithms, or biological learning itself, work.

Now, two physicists have shown that one form of deep learning works exactly like one of the most important and ubiquitous mathematical techniques in physics, a procedure for calculating the large-scale behavior of physical systems such as elementary particles, fluids and the cosmos.

The new work, completed by Pankaj Mehta of Boston University and David Schwab of Northwestern University, demonstrates that a statistical technique called “renormalization,” which allows physicists to accurately describe systems without knowing the exact state of all their component parts, also enables the artificial neural networks to categorize data as, say, “a cat” regardless of its color, size or posture in a given video.
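
The coarse-graining at the heart of renormalization can be illustrated with its simplest real-space form, the Kadanoff block-spin rule, in which each 2×2 block of spins is replaced by its majority value. This sketch is a generic illustration of that step, loosely analogous to pooling in a deep network, and not the Mehta-Schwab mapping itself.

```python
def block_spin(lattice):
    """Coarse-grain a 2D lattice of +1/-1 spins by majority rule over
    2x2 blocks -- the 'block spin' step of real-space renormalization."""
    n = len(lattice)
    coarse = []
    for i in range(0, n, 2):
        row = []
        for j in range(0, n, 2):
            s = (lattice[i][j] + lattice[i][j + 1]
                 + lattice[i + 1][j] + lattice[i + 1][j + 1])
            row.append(1 if s >= 0 else -1)   # ties broken toward +1
        coarse.append(row)
    return coarse

spins = [
    [ 1,  1, -1, -1],
    [ 1, -1, -1, -1],
    [-1,  1,  1,  1],
    [ 1,  1,  1, -1],
]
coarse = block_spin(spins)
```

Each application halves the resolution while preserving large-scale structure, which is exactly the sense in which irrelevant microscopic detail is discarded and relevant features are kept.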

“They actually wrote down on paper, with exact proofs, something that people only dreamed existed,” said Ilya Nemenman, a biophysicist at Emory University. “Extracting relevant features in the context of statistical physics and extracting relevant features in the context of deep learning are not just similar words, they are one and the same.”

As for our own remarkable knack for spotting a cat in the bushes, a familiar face in a crowd or indeed any object amid the swirl of color, texture and sound that surrounds us, strong similarities between deep learning and biological learning suggest that the brain may also employ a form of renormalization to make sense of the world. [Continue reading…]

Mirror neurons may reveal more about neurons than they do about people

Jason G. Goldman writes: In his 2011 book, The Tell-Tale Brain, neuroscientist V. S. Ramachandran says that some of the cells in your brain are of a special variety. He calls them the “neurons that built civilization,” but you might know them as mirror neurons. They’ve been implicated in just about everything from the development of empathy in earlier primates, millions of years ago, to the emergence of complex culture in our species.

Ramachandran says that mirror neurons help explain the things that make us so apparently unique: tool use, cooking with fire, using complex language to communicate.

It’s an inherently seductive idea: that one small tweak to a particular set of brain cells could have transformed an early primate into something that was somehow more. Indeed, experimental psychologist Cecilia Heyes wrote in 2010, “[mirror neurons] intrigue both specialists and non-specialists, celebrated as a ‘revolution’ in understanding social behaviour and ‘the driving force’ behind ‘the great leap forward’ in human evolution.”

The story of mirror neurons begins in the 1990s at the University of Parma in Italy. A group of neuroscientists were studying rhesus monkeys by implanting small electrodes in their brains, and they found that some cells exhibited a curious kind of behavior. They fired both when the monkey executed a movement, such as grasping a banana, and also when the monkey watched the experimenter execute that very same movement.

It was immediately an exciting find. These neurons were located in a part of the brain thought solely responsible for sending motor commands out from the brain, through the brainstem to the spine, and out to the nerves that control the body’s muscles. This finding suggested that they’re not just used for executing actions, but are somehow involved in understanding the observed actions of others.

After that came a flood of research connecting mirror neurons to the development of empathy, autism, language, tool use, fire, and more. Psychologist and science writer Christian Jarrett has twice referred to mirror neurons as “the most hyped concept in neuroscience.” Is he right? Where does empirical evidence end and overheated speculation begin? [Continue reading…]

Cognitive disinhibition: the kernel of genius and madness

Dean Keith Simonton writes: When John Forbes Nash, the Nobel Prize-winning mathematician, schizophrenic, and paranoid delusional, was asked how he could believe that space aliens had recruited him to save the world, he gave a simple response. “Because the ideas I had about supernatural beings came to me the same way that my mathematical ideas did. So I took them seriously.”

Nash is hardly the only so-called mad genius in history. Suicide victims like painters Vincent van Gogh and Mark Rothko, novelists Virginia Woolf and Ernest Hemingway, and poets Anne Sexton and Sylvia Plath all offer prime examples. Even ignoring those great creators who did not kill themselves in a fit of deep depression, it remains easy to list persons who endured well-documented psychopathology, including the composer Robert Schumann, the poet Emily Dickinson, and Nash. Creative geniuses who have succumbed to alcoholism or other addictions are also legion.

Instances such as these have led many to suppose that creativity and psychopathology are intimately related. Indeed, the notion that creative genius might have some touch of madness goes back to Plato and Aristotle. But some recent psychologists argue that the whole idea is a pure hoax. After all, it is certainly no problem to come up with the names of creative geniuses who seem to have displayed no signs or symptoms of mental illness.

Opponents of the mad genius idea can also point to two solid facts. First, the number of creative geniuses in the entire history of human civilization is very large. Thus, even if these people were actually less prone to psychopathology than the average person, the number with mental illness could still be extremely large. Second, the permanent inhabitants of mental asylums do not usually produce creative masterworks. The closest exception that anyone might imagine is the notorious Marquis de Sade. Even in his case, his greatest (or rather most sadistic) works were written while he was imprisoned as a criminal rather than institutionalized as a lunatic.

So should we believe that creative genius is connected with madness or not? Modern empirical research suggests that we should because it has pinpointed the connection between madness and creativity clearly. The most important process underlying strokes of creative genius is cognitive disinhibition — the tendency to pay attention to things that normally should be ignored or filtered out by attention because they appear irrelevant. [Continue reading…]

How we use memory to look at the future

Virginia Hughes writes: Over the past few decades, researchers have worked to uncover the details of how the brain organizes memories. Much remains a mystery, but scientists have identified a key event: the formation of an intense brain wave called a “sharp-wave ripple” (SWR). This process is the brain’s version of an instant replay — a sped-up version of the neural activity that occurred during a recent experience. These ripples are a strikingly synchronous neural symphony, the product of tens of thousands of cells firing over just 100 milliseconds. Any more activity than that could trigger a seizure.

Now researchers have begun to realize that SWRs may be involved in much more than memory formation. Recently, a slew of high-profile rodent studies have suggested that the brain uses SWRs to anticipate future events. A recent experiment, for example, finds that SWRs connect to activity in the prefrontal cortex, a region at the front of the brain that is involved in planning for the future.

Studies such as this one have begun to illuminate the complex relationship between memory and the decision-making process. Until a few years ago, most studies on SWRs focused only on their role in creating and consolidating memories, said Loren Frank, a neuroscientist at the University of California, San Francisco. “None of them really dealt with this issue of: How does the animal actually pull [the memory] back up again? How does it actually use this to figure out what to do?” [Continue reading…]

The healing power of silence

Daniel A. Gross writes: One icy night in March 2010, 100 marketing experts piled into the Sea Horse Restaurant in Helsinki, with the modest goal of making a remote and medium-sized country a world-famous tourist destination. The problem was that Finland was known as a rather quiet country, and since 2008, the Country Brand Delegation had been looking for a national brand that would make some noise.

Over drinks at the Sea Horse, the experts puzzled over the various strengths of their nation. Here was a country with exceptional teachers, an abundance of wild berries and mushrooms, and a vibrant cultural capital the size of Nashville, Tennessee. These things fell a bit short of a compelling national identity. Someone jokingly suggested that nudity could be named a national theme — it would emphasize the honesty of Finns. Someone else, less jokingly, proposed that perhaps quiet wasn’t such a bad thing. That got them thinking.

A few months later, the delegation issued a slick “Country Brand Report.” It highlighted a host of marketable themes, including Finland’s renowned educational system and school of functional design. One key theme was brand new: silence. As the report explained, modern society often seems intolerably loud and busy. “Silence is a resource,” it said. It could be marketed just like clean water or wild mushrooms. “In the future, people will be prepared to pay for the experience of silence.”

People already do. In a loud world, silence sells. Noise-canceling headphones retail for hundreds of dollars; the cost of some weeklong silent meditation courses can run into the thousands. Finland saw that it was possible to quite literally make something out of nothing.

In 2011, the Finnish Tourist Board released a series of photographs of lone figures in the wilderness, with the caption “Silence, Please.” An international “country branding” consultant, Simon Anholt, proposed the playful tagline “No talking, but action.” And a Finnish watch company, Rönkkö, launched its own new slogan: “Handmade in Finnish silence.”

“We decided, instead of saying that it’s really empty and really quiet and nobody is talking about anything here, let’s embrace it and make it a good thing,” explains Eva Kiviranta, who manages social media for VisitFinland.com.

Silence is a peculiar starting point for a marketing campaign. After all, you can’t weigh, record, or export it. You can’t eat it, collect it, or give it away. The Finland campaign raises the question of just what the tangible effects of silence really are. Science has begun to pipe up on the subject. In recent years researchers have highlighted the peculiar power of silence to calm our bodies, turn up the volume on our inner thoughts, and attune our connection to the world. Their findings begin where we might expect: with noise.

The word “noise” comes from a Latin root meaning either queasiness or pain. According to the historian Hillel Schwartz, there’s even a Mesopotamian legend in which the gods grow so angry at the clamor of earthly humans that they go on a killing spree. (City-dwellers with loud neighbors may empathize, though hopefully not too closely.)

Dislike of noise has produced some of history’s most eager advocates of silence, as Schwartz explains in his book Making Noise: From Babel to the Big Bang and Beyond. In 1859, the British nurse and social reformer Florence Nightingale wrote, “Unnecessary noise is the most cruel absence of care that can be inflicted on sick or well.” Every careless clatter or banal bit of banter, Nightingale argued, can be a source of alarm, distress, and loss of sleep for recovering patients. She even quoted a lecture that identified “sudden noises” as a cause of death among sick children. [Continue reading…]

What Shakespeare can teach science about language and the limits of the human mind

Jillian Hinchliffe and Seth Frey write: Although [Stephen] Booth is now retired [from the University of California, Berkeley], his work [on Shakespeare] couldn’t be more relevant. In the study of the human mind, old disciplinary boundaries have begun to dissolve and fruitful new relationships between the sciences and humanities have sprung up in their place. When it comes to the cognitive science of language, Booth may be the most prescient literary critic who ever put pen to paper. In his fieldwork in poetic experience, he unwittingly anticipated several language-processing phenomena that cognitive scientists have only recently begun to study. Booth’s work not only provides one of the most original and penetrating looks into the nature of Shakespeare’s genius, it has profound implications for understanding the processes that shape how we think.

Until the early decades of the 20th century, Shakespeare criticism fell primarily into two areas: textual, which grapples with the numerous variants of published works in order to produce an edition as close as possible to the original, and biographical. Scholarship took a more political turn beginning in the 1960s, providing new perspectives from various strains of feminist, Marxist, structuralist, and queer theory. Booth is resolutely dismissive of most of these modes of study. What he cares about is poetics. Specifically, how poetic language operates on and in audiences of a literary work.

Close reading, the school that flourished mid-century and with which Booth’s work is most nearly affiliated, has never gone completely out of style. But Booth’s approach is even more minute—microscopic reading, according to fellow Shakespeare scholar Russ McDonald. And as the microscope opens up new worlds, so does Booth’s critical lens. What makes him radically different from his predecessors is that he doesn’t try to resolve or collapse his readings into any single interpretation. That people are so hung up on interpretation, on meaning, Booth maintains, is “no more than habit.” Instead, he revels in the uncertainty caused by the myriad currents of phonetic, semantic, and ideational patterns at play. [Continue reading…]

Don’t overestimate your untapped brain power

Nathalia Gjersoe writes: Luc Besson’s latest sci-fi romp, Lucy, is based on the premise that the average person only uses 10% of their brain. This brain myth has been fodder for books and movies for decades and is a tantalizing plot device. Alarmingly, however, it seems to be widely accepted as fact: 48% of teachers surveyed in the UK, 65% of Americans and 30% of American psychology students endorsed the myth.

In the movie, Lucy absorbs vast quantities of a nootropic that triggers rampant production of new connections between her neurons. As her brain becomes more and more densely connected, Lucy experiences omniscience, omnipotence and omnipresence. Telepathy, telekinesis and time-travel all become possible.

It’s true that increased connectivity between neurons is associated with greater expertise. Musicians who train for years have greater connectivity and activation of those regions of the brain that control their finger movements and those that bind sensory and motor information. This is the first principle of neural connectivity: cells that fire together wire together.
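The Hebbian principle quoted above can be sketched in a few lines of code. This is a minimal toy model, not a biological simulation: firing probabilities, the learning rate, and the update rule are all illustrative assumptions.

```python
import numpy as np

# Toy sketch of the Hebbian rule ("cells that fire together wire together"):
# a connection strengthens in proportion to how often its two cells are
# co-active. All rates and constants here are made up for illustration.
rng = np.random.default_rng(0)
steps, eta = 5000, 0.01  # number of time steps, learning rate

w_correlated = 0.0    # synapse onto a partner that always fires with the cell
w_uncorrelated = 0.0  # synapse onto a partner that fires independently
for _ in range(steps):
    pre = rng.random() < 0.5           # does the presynaptic cell fire?
    post_corr = pre                    # perfectly correlated partner
    post_unc = rng.random() < 0.5      # independent partner
    w_correlated += eta * pre * post_corr
    w_uncorrelated += eta * pre * post_unc

print(w_correlated > w_uncorrelated)  # the co-active pair wires more strongly
```

Run long enough, the correlated synapse grows roughly twice as fast as the uncorrelated one, which is the sense in which practice strengthens the musician’s finger-control circuits.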

But resources are limited and the brain is incredibly hungry. It takes a huge amount of energy just to keep it electrically ticking over; an excellent TED-Ed animation explains this nicely. The human adult brain makes up only 2% of the body’s mass yet uses 20% of energy intake. Babies’ brains use 60%! Evolution would necessarily cull any redundant parts of such an expensive organ. [Continue reading…]

The orchestration of attention

The New Yorker: Every moment, our brains are bombarded with information, from without and within. The eyes alone convey more than a hundred billion signals to the brain every second. The ears receive another avalanche of sounds. Then there are the fragments of thoughts, conscious and unconscious, that race from one neuron to the next. Much of this data seems random and meaningless. Indeed, for us to function, much of it must be ignored. But clearly not all. How do our brains select the relevant data? How do we decide to pay attention to the turn of a doorknob and ignore the drip of a leaky faucet? How do we become conscious of a certain stimulus, or indeed “conscious” at all?

For decades, philosophers and scientists have debated the process by which we pay attention to things, based on cognitive models of the mind. But, in the view of many modern psychologists and neurobiologists, the “mind” is not some nonmaterial and exotic essence separate from the body. All questions about the mind must ultimately be answered by studies of physical cells, explained in terms of the detailed workings of the more than eighty billion neurons in the brain. At this level, the question is: How do neurons signal to one another and to a cognitive command center that they have something important to say?

“Years ago, we were satisfied to know which areas of the brain light up under various stimuli,” the neuroscientist Robert Desimone told me during a recent visit to his office. “Now we want to know mechanisms.” Desimone directs the McGovern Institute for Brain Research at the Massachusetts Institute of Technology; youthful and trim at the age of sixty-two, he was dressed casually, in a blue pinstripe shirt, and had only the slightest gray in his hair. On the bookshelf of his tidy office were photographs of his two young children; on the wall was a large watercolor titled “Neural Gardens,” depicting a forest of tangled neurons, their spindly axons and dendrites wending downward like roots in rich soil.

Earlier this year, in an article published in the journal Science, Desimone and his colleague Daniel Baldauf reported on an experiment that shed light on the physical mechanism of paying attention. The researchers presented a series of two kinds of images — faces and houses — to their subjects in rapid succession, like passing frames of a movie, and asked them to concentrate on the faces but disregard the houses (or vice versa). The images were “tagged” by being presented at two frequencies — a new face every two-thirds of a second, a new house every half second. By monitoring the frequencies of the electrical activity of the subjects’ brains with magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI), Desimone and Baldauf could determine where in the brain the images were being directed.
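The frequency-tagging trick described above can be illustrated with a small simulation. This is a hedged sketch, not the study’s actual analysis: the sampling rate, amplitudes, and noise level are invented, but the principle is the same — a stimulus stream presented at a fixed rate leaves a spectral peak at that rate, so the spectrum of a region’s activity reveals which stream it is tracking.

```python
import numpy as np

# Simulated recording: faces arrive every 2/3 s (1.5 Hz), houses every
# 1/2 s (2 Hz). A region attending to faces responds more strongly at 1.5 Hz.
fs = 100.0                      # sampling rate in Hz (illustrative)
t = np.arange(0, 60, 1 / fs)    # 60 seconds of simulated signal

f_faces, f_houses = 1.5, 2.0
signal = (1.0 * np.sin(2 * np.pi * f_faces * t)     # strong "face" response
          + 0.4 * np.sin(2 * np.pi * f_houses * t)  # weak "house" response
          + 0.2 * np.random.default_rng(0).standard_normal(t.size))  # noise

# The spectrum exposes the stimulus "tags".
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def power_at(f):
    """Spectral magnitude at the bin nearest frequency f."""
    return spectrum[np.argmin(np.abs(freqs - f))]

print(power_at(f_faces) > power_at(f_houses))  # the face tag dominates
```

Because the two tags live at different frequencies, the two interleaved streams can be separated in the recording even though the images reach the eye almost on top of each other.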

The scientists found that, even though the two sets of images were presented to the eye almost on top of each other, they were processed by different places in the brain — the face images by a particular region on the surface of the temporal lobe that is known to specialize in facial recognition, and the house images by a neighboring but separate group of neurons specializing in place recognition.

Most importantly, the neurons in the two regions behaved differently. When the subjects were told to concentrate on the faces and to disregard the houses, the neurons in the face location fired in synchrony, like a group of people singing in unison, while the neurons in the house location fired like a group of people singing out of synch, each beginning at a random point in the score. When the subjects concentrated instead on houses, the reverse happened. [Continue reading…]
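The singing analogy has a simple quantitative counterpart, sketched below under invented numbers (this is not the study’s analysis): when many oscillators share a phase their signals add coherently, while random phases largely cancel, so a synchronized population produces a far larger summed signal.

```python
import numpy as np

# 1,000 simulated cells oscillating at an example 40 Hz rhythm for one second.
rng = np.random.default_rng(42)
n_neurons, freq = 1000, 40.0
t = np.linspace(0.0, 1.0, 1000)   # 1 kHz sampling (illustrative)

# "Singing in unison": every cell has the same phase.
in_sync = np.sin(2 * np.pi * freq * t[None, :]
                 + np.zeros((n_neurons, 1))).sum(axis=0)

# "Each beginning at a random point in the score": random phases.
phases = rng.uniform(0.0, 2 * np.pi, size=(n_neurons, 1))
out_of_sync = np.sin(2 * np.pi * freq * t[None, :] + phases).sum(axis=0)

# Coherent addition scales with N; random phases scale only with sqrt(N).
print(in_sync.max() / np.abs(out_of_sync).max())
```

The synchronized population’s peak is many times larger, which is one reason synchrony is a plausible physical signature of attention: downstream neurons receive a much stronger combined input from a region firing in unison.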
