Kerouac’s French-Canadian roots hold the key to his literary identity

Deni Ellis Béchard writes: The real-life backstory of Jack Kerouac’s unpublished novel is classic beat generation. It was December 1952, and tensions were running high as Jack and his friend Neal Cassady — the inspiration for the character of Dean Moriarty in On the Road — drove from San Francisco to Mexico City.

Whereas Neal was looking for adventure and a chance to stock up on weed, Jack was in a difficult period. His first novel, The Town and the City, published under the name John Kerouac in 1950, had met with lukewarm reviews and poor sales. In April 1951, he had written On the Road on a (now famous) 120-foot-long scroll, but hadn’t been able to find a publisher. He was thirty and had been laid off by the railroad after a bout of phlebitis in his leg.

Kerouac decided to convalesce in Mexico City with William S. Burroughs, who would later author Naked Lunch. Three months earlier, Burroughs had performed a William Tell act with his wife, Joan, while they were drunk and accidentally shot her in the head, killing her. Shortly after Kerouac’s arrival, Burroughs skipped bail and fled the country. Neal Cassady went home. Alone, living in a rooftop apartment in Mexico City, Jack wrote a short novel over the course of five days.

The first line reads: Dans l’moi d’Octobre, 1935, (dans la nuit de nos vra vie bardasseuze) y’arriva une machine du West, de Denver, sur le chemin pour New York. Written in the language of Kerouac’s childhood — a French-Canadian patois then commonly spoken in parts of New England — the line has an epic, North American ring. Kerouac would later translate it as “In the month of October, 1935, in the night of our real restless lives, a car came from the West, from Denver, on the road for New York.”

The novel’s title is Sur le chemin — “On the Road.” But it is not the On the Road we all know (which would be translated in France as Sur la route). It was the On the Road of Kerouac’s vernacular — chemin being used in the title to mean both “path” and “road.”

Over the course of his literary career, Kerouac redefined the archetype of the American man, and he has since become so integral to American culture that his identity as an immigrant writer is often forgotten. He was born in 1922 as Jean-Louis Lebris de Kérouac to parents from Quebec. He spoke French at home and grew up in the French-Canadian community of Lowell, Massachusetts. In one of his letters, he wrote, “The English language is a tool lately found . . . so late (I never spoke English before I was six or seven). At 21, I was still somewhat awkward and illiterate sounding in my [English] speech and writings.”

In 1954, Kerouac created a list of everything he had written and included Sur le chemin among his “completed novels” — even though it would remain in his archives for more than six decades before publication was finally arranged this year. Sur le chemin and his other French writings provide a key to unlocking his more famous works, revealing a man just as obsessed with the difficulty of living between two languages as he was with his better-known spiritual quests.

In particular, they help explain the path — le chemin — he took as he developed his influential style, which changed the way many writers throughout the world have thought about prose. To this day, Kerouac remains one of the most translated authors, and one whose work is shared across generations. His unpublished French works shine a light on how the voice and ideas of an iconic American figure emerged from the experiences of French-Canadian immigrants — a group whose language and culture remain largely unknown to mainstream America. [Continue reading…]

Why we have globalization to thank for Thanksgiving

By Farok J. Contractor, Rutgers University

As Americans sit down to their Thanksgiving Day feasts, some may recall the story of the “Pilgrim Fathers” who founded one of the first English settlements in North America in 1620, at what is today the town of Plymouth, Massachusetts.

The history we know is one of English settlers seeking religious freedom in a New World but instead finding “a hideous and desolate wilderness, full of wilde beasts and wilde men.”

What many Americans don’t realize, however, is that the story of those early settlers’ struggle, which culminated in what we remember today as the first Thanksgiving feast, is also a tale of globalization, many centuries before the word was even coined.

Trans-Atlantic crossings had begun a century before the Pilgrims’ passage to the New World aboard the Mayflower, and by the 1600s such voyages had become increasingly common. It was because of globalization that those first settlers were able to survive in an inhospitable and unforgiving land. And the turkey on Thanksgiving tables may not be a bird native to the U.S., but more likely a (re)import from Europe.

Two short stories will help me explain. As a professor of international business at Rutgers University, I have been fascinated by the history of trade going back millennia, and how most Americans do not know the background story of Thanksgiving Day.

[Read more…]

Meet the frail, small-brained people who first trekked out of Africa

Science magazine reports: On a promontory high above the sweeping grasslands of the Georgian steppe, a medieval church marks the spot where humans have come and gone along Silk Road trade routes for thousands of years. But 1.77 million years ago, this place was a crossroads for a different set of migrants. Among them were saber-toothed cats, Etruscan wolves, hyenas the size of lions—and early members of the human family.

Here, primitive hominins poked their tiny heads into animal dens to scavenge abandoned kills, fileting meat from the bones of mammoths and wolves with crude stone tools and eating it raw. They stalked deer as the animals drank from an ancient lake and gathered hackberries and nuts from chestnut and walnut trees lining nearby rivers. Sometimes the hominins themselves became the prey, as gnaw marks from big cats or hyenas on their fossilized limb bones now testify.

“Someone rang the dinner bell in gully one,” says geologist Reid Ferring of the University of North Texas in Denton, part of an international team analyzing the site. “Humans and carnivores were eating each other.”

This is the famous site of Dmanisi, Georgia, which offers an unparalleled glimpse into a harsh early chapter in human evolution, when primitive members of our genus Homo struggled to survive in a new land far north of their ancestors’ African home, braving winters without clothes or fire and competing with fierce carnivores for meat. The 4-hectare site has yielded closely packed, beautifully preserved fossils that are the oldest hominins known outside of Africa, including five skulls, about 50 skeletal bones, and an as-yet-unpublished pelvis unearthed 2 years ago. “There’s no other place like it,” says archaeologist Nick Toth of Indiana University in Bloomington. “It’s just this mother lode for one moment in time.”

Until the discovery of the first jawbone at Dmanisi 25 years ago, researchers thought that the first hominins to leave Africa were classic H. erectus (also known as H. ergaster in Africa). These tall, relatively large-brained ancestors of modern humans arose about 1.9 million years ago and soon afterward invented a sophisticated new tool, the hand ax. They were thought to be the first people to migrate out of Africa, making it all the way to Java, at the far end of Asia, as early as 1.6 million years ago. But as the bones and tools from Dmanisi accumulate, a different picture of the earliest migrants is emerging. [Continue reading…]

Huddled mice could change the way we think about evolution

Stuart P Wilson, University of Sheffield and James V Stone, University of Sheffield

Adapt or die. That’s the reality for an animal species when it is faced with a harsh environment. Until now, many scientists have assumed that the more challenging an animal’s environment, the greater the pressure to adapt and the faster its genes evolve. But we have just published new research in Royal Society Open Science that shows that genes might actually evolve faster when the pressure to adapt is reduced.

We built a simple computer model of how evolution may be affected by the way animals interact with each other when they’re in groups. Specifically, we looked at what happens to animals that huddle together to keep warm.

We found that when animals huddle in larger groups, their genes for regulating temperature evolve faster, even though there is less pressure to adapt to the cold environment because of the warmth of the huddle. This shows that an organism’s evolution doesn’t just depend on its environment but also on how it behaves.

When animals such as rats and mice huddle together in groups, they can maintain a high body temperature without using as much energy as they would on their own. We wanted to understand how this kind of group behaviour would affect a species’ evolution.

To do this, we built a computer model simulating how the species’ genes changed and were passed on over multiple generations. When the effects of huddling were built into the computer model, the reduced pressure to adapt was actually found to accelerate evolution of the genes controlling heat production and heat loss.

[Read more…]
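
The published model is more elaborate, but the core idea lends itself to a minimal sketch: simulate a population whose “thermoregulation” gene sits under stabilising selection, and let huddle size weaken that selection because the group shares warmth. Everything below (population size, the fitness function, the divergence measure) is an illustrative assumption, not the authors’ implementation.

```python
import math
import random

# Illustrative parameters only; none of these values come from the paper.
POP_SIZE = 200
GENERATIONS = 500
MUTATION_STD = 0.05
OPTIMUM = 1.0  # gene value best matched to the cold environment

def simulate(huddle_size, seed=0):
    """Evolve a 'thermoregulation' gene under selection that huddling relaxes."""
    rng = random.Random(seed)
    population = [OPTIMUM] * POP_SIZE
    # Bigger huddles share more warmth, so mismatch with the cold matters less.
    selection_strength = 10.0 / huddle_size

    for _ in range(GENERATIONS):
        weights = [math.exp(-selection_strength * (g - OPTIMUM) ** 2)
                   for g in population]
        parents = rng.choices(population, weights=weights, k=POP_SIZE)
        # Offspring inherit the parent's gene value plus a small mutation.
        population = [g + rng.gauss(0.0, MUTATION_STD) for g in parents]

    # Mean genetic divergence from the ancestral value after all generations.
    return sum(abs(g - OPTIMUM) for g in population) / POP_SIZE

print("solitary:", round(simulate(huddle_size=1), 3))
print("huddled: ", round(simulate(huddle_size=8), 3))  # typically larger
```

Run with a huddle of one versus a huddle of eight, the relaxed-selection population usually drifts further from the ancestral gene value, which is the qualitative effect the study describes.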

China is at the forefront of manipulating DNA to create a new class of superhumans

G. Owen Schaefer writes: Would you want to alter your future children’s genes to make them smarter, stronger, or better looking? As the state of science brings prospects like these closer to reality, an international debate has been raging over the ethics of enhancing human capacities with biotechnologies such as so-called smart pills, brain implants, and gene editing. This discussion has only intensified in the past year with the advent of the CRISPR-Cas9 gene-editing tool, which raises the specter of tinkering with our DNA to improve traits like intelligence, athleticism, and even moral reasoning.

So are we on the brink of a brave new world of genetically enhanced humanity? Perhaps. And there’s an interesting wrinkle: It’s reasonable to believe that any seismic shift toward genetic enhancement will not be centered in Western countries like the US or the UK, where many modern technologies are pioneered. Instead, genetic enhancement is more likely to emerge out of China.

Numerous surveys among Western populations have found significant opposition to many forms of human enhancement. For example, a recent Pew study of 4,726 Americans found that most would not want to use a brain chip to improve their memory, and a plurality view such interventions as morally unacceptable. [Continue reading…]

A unified theory of evolution requires input from Darwin and Lamarck

Michael Skinner writes: The unifying theme for much of modern biology is based on Charles Darwin’s theory of evolution, the process of natural selection by which nature selects the fittest, best-adapted organisms to reproduce, multiply and survive. The process is also called adaptation, and traits most likely to help an individual survive are considered adaptive. As organisms change and new variants thrive, species emerge and evolve. In the 1850s, when Darwin described this engine of natural selection, the underlying molecular mechanisms were unknown. But over the past century, advances in genetics and molecular biology have outlined a modern, neo-Darwinian theory of how evolution works: DNA sequences randomly mutate, and organisms with the specific sequences best adapted to the environment multiply and prevail. Those are the species that dominate a niche, until the environment changes and the engine of evolution fires up again.

But this explanation for evolution turns out to be incomplete, suggesting that other molecular mechanisms also play a role in how species evolve. One problem with Darwin’s theory is that, while species do evolve more adaptive traits (called phenotypes by biologists), the rate of random DNA sequence mutation turns out to be too slow to explain many of the changes observed. Scientists, well aware of the issue, have proposed a variety of genetic mechanisms to compensate: genetic drift, in which small groups of individuals undergo dramatic genetic change; or epistasis, in which one set of genes suppresses another, to name just two.

Yet even with such mechanisms in play, genetic mutation rates for complex organisms such as humans are dramatically lower than the frequency of change for a host of traits, from adjustments in metabolism to resistance to disease. The rapid emergence of trait variety is difficult to explain just through classic genetics and neo-Darwinian theory. To quote the prominent evolutionary biologist Jonathan B L Bard, who was paraphrasing T S Eliot: ‘Between the phenotype and genotype falls the shadow.’

And the problems with Darwin’s theory extend out of evolutionary science into other areas of biology and biomedicine. For instance, if genetic inheritance determines our traits, then why do identical twins with the same genes generally develop different types of diseases? And why does only a small percentage (often less than 1 per cent) of those with many specific diseases share a common genetic mutation? If the rate of mutation is random and steady, then why have many diseases increased more than 10-fold in frequency in only a couple of decades? How is it that hundreds of environmental contaminants can alter disease onset, but not DNA sequences? In evolution and biomedicine, the rate of phenotypic trait divergence is far more rapid than the rate of genetic variation and mutation – but why?

Part of the explanation can be found in some concepts that Jean-Baptiste Lamarck proposed 50 years before Darwin published his work. Lamarck’s theory, long relegated to the dustbin of science, held, among other things, ‘that the environment can directly alter traits, which are then inherited by generations to come’. [Continue reading…]

Digging our own graves in deep time

By David Farrier, Aeon, October 31, 2016

Late one summer night in 1949, the British archaeologist Jacquetta Hawkes went out into her small back garden in north London, and lay down. She sensed the bedrock covered by its thin layer of soil, and felt the hard ground pressing her flesh against her bones. Shimmering through the leaves and out beyond the black lines of her neighbours’ chimney pots were the stars, beacons ‘whose light left them long before there were eyes on this planet to receive it’, as she put it in A Land (1951), her classic book of imaginative nature writing.

We are accustomed to the idea of geology and astronomy speaking the secrets of ‘deep time’, the immense arc of non-human history that shaped the world as we perceive it. Hawkes’s lyrical meditation mingles the intimate and the eternal, the biological and the inanimate, the domestic with a sense of deep time that is very much of its time. The state of the topsoil was a matter of genuine concern in a country wearied by wartime rationing, while land itself rises into focus just as Britain is rethinking its place in the world. But in lying down in her garden, Hawkes also lies on the far side of a fundamental boundary. A Land was written at the cusp of the Holocene; we, on the other hand, read it in the Anthropocene.

The Anthropocene, or era of the human, denotes how industrial civilisation has changed the Earth in ways that are comparable with deep-time processes. The planet’s carbon and nitrogen cycles, ocean chemistry and biodiversity – each one the product of millions of years of slow evolution – have been radically and permanently disrupted by human activity. The development of agriculture 10,000 years ago, and the Industrial Revolution in the middle of the 19th century, have both been proposed as start dates for the Anthropocene. But a consensus has gathered around the Great Acceleration – the sudden and dramatic jump in consumption that began around 1950, followed by a huge rise in global population, an explosion in the use of plastics, and the collapse of agricultural diversity.

[Read more…]

To identify risky drivers, insurer will track language use in social media

Financial Times reports: UK-based insurer Admiral has come up with a way to crunch through social media posts to work out who deserves a lower premium. People who seem cautious and deliberate in their choice of words are likely to pay a lot less than those with overconfident remarks. [Continue reading…]
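
The FT report does not describe Admiral’s actual scoring method, but the general mechanism it points to (rating posts for cautious versus overconfident wording and adjusting the premium accordingly) can be sketched in a few lines. The word lists, weighting, and clamping below are invented for illustration; they are not Admiral’s criteria.

```python
# Toy illustration of language-based risk scoring. The word lists and the
# scoring rule are invented assumptions, not Admiral's actual model.
CAUTIOUS_WORDS = {"perhaps", "maybe", "probably", "might", "plan", "careful"}
OVERCONFIDENT_WORDS = {"always", "never", "definitely", "best", "obviously"}

def caution_score(post: str) -> int:
    """Positive = more cautious wording, negative = more overconfident."""
    words = [w.strip(".,!?") for w in post.lower().split()]
    return (sum(w in CAUTIOUS_WORDS for w in words)
            - sum(w in OVERCONFIDENT_WORDS for w in words))

def premium_multiplier(posts):
    """Map the overall caution score to a hypothetical premium multiplier."""
    total = sum(caution_score(p) for p in posts)
    # Clamp so the discount or surcharge stays within +/-10%.
    return 1.0 - max(-0.1, min(0.1, 0.02 * total))

print(premium_multiplier(["I always drive the best, obviously"]))       # > 1.0
print(premium_multiplier(["We might leave early, probably around 7"]))  # < 1.0
```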

Iraqis are world’s most generous to strangers

Reuters reports: Although torn by civil war, Iraq is the world’s most generous country towards strangers in need, according to a new global index of charitable giving.

Eighty-one percent of Iraqis reported helping someone they didn’t know in the previous month, in a global poll commissioned by the Charities Aid Foundation (CAF).

For the first time since CAF began the poll in 2010, more than half of people in 140 countries surveyed said they had helped strangers – with many of the most generous found in countries hit hard by disaster and war. [Continue reading…]

Science shows the richer you get, the less you pay attention to other people

Lila MacLellan writes: No one can pay attention to everything they encounter. We simply do not have enough time or mental capacity for it. Most of us, though, do make an effort to acknowledge our fellow humans. Wealth, it seems, might change that.

There’s a growing body of research showing how having money changes the way people see — or are oblivious to — others and their problems. The latest is a paper published in the journal Psychological Science in which psychologists at New York University show that wealthy people unconsciously pay less attention to passersby on the street.

In the paper, the researchers describe experiments they conducted to measure the effects of social class on what’s called the “motivational relevance” of other human beings. According to some schools of psychological thought, we’re motivated to pay attention to something when we assign more value to it, whether because it threatens us or offers the potential for some kind of reward. [Continue reading…]

Our slow, uncertain brains are still better than computers — here’s why

By Parashkev Nachev, UCL

Automated financial trading machines can make complex decisions in a thousandth of a second. A human being making a choice – however simple – can never be faster than about one-fifth of a second. Our reaction times are not only slow but also remarkably variable, ranging over hundreds of milliseconds.

Is this because our brains are poorly designed, prone to random uncertainty – or “noise” in the electronic jargon? Measured in the laboratory, even the neurons of a fly are both fast and precise in their responses to external events, down to a few milliseconds. The sloppiness of our reaction times looks less like an accident than a built-in feature. The brain deliberately procrastinates, even if we ask it to do otherwise.

Massively parallel wetware

Why should this be? Unlike computers, our brains are massively parallel in their organisation, concurrently running many millions of separate processes. They must do this because they are not designed to perform a specific set of actions but to select from a vast repertoire of alternatives that the fundamental unpredictability of our environment offers us. From an evolutionary perspective, it is best to trust nothing and no one, least of all oneself. So before each action the brain must flip through a vast Rolodex of possibilities. It is amazing it can do this at all, let alone in a fraction of a second.

But why the variability? There is hierarchically nothing higher than the brain, so decisions have to arise through peer-to-peer interactions between different groups of neurons. Since there can be only one winner at any one time – our movements would otherwise be chaotic – the mode of resolution is less negotiation than competition: a winner-takes-all race. To ensure the competition is fair, the race must run for a minimum length of time – hence the delay – and the time it takes will depend on the nature and quality of the field of competitors, hence the variability.

Fanciful though this may sound, the distributions of human reaction times, across different tasks, limbs, and people, have been repeatedly shown to fit the “race” model remarkably well. And one part of the brain – the medial frontal cortex – seems to track reaction time tightly, as an area crucial to procrastination ought to. Disrupting the medial frontal cortex should therefore disrupt the race, bringing it to an early close. Rather than slowing us down, disrupting the brain should here speed us up, accelerating behaviour but at the cost of less considered actions.

[Read more…]
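
One way to see why a winner-takes-all race yields reaction times that are both slow and variable is to simulate it: several competing accumulators gather noisy evidence, and a response is triggered when the first one crosses a threshold. The sketch below is a generic accumulator race with made-up parameters, not the specific model fitted in the research described above.

```python
import random

# Generic winner-takes-all race between noisy evidence accumulators.
# All parameter values are illustrative, not fitted to data.
N_COMPETITORS = 5
THRESHOLD = 100.0
DRIFT = 1.0   # mean evidence gained per millisecond
NOISE = 3.0   # moment-to-moment variability in that gain

def reaction_time(rng):
    """Milliseconds until the first accumulator reaches the threshold."""
    totals = [0.0] * N_COMPETITORS
    t = 0
    while max(totals) < THRESHOLD:
        t += 1
        totals = [x + rng.gauss(DRIFT, NOISE) for x in totals]
    return t

rng = random.Random(42)
times = sorted(reaction_time(rng) for _ in range(1000))
print("median reaction time (ms):", times[len(times) // 2])
print("5th-95th percentile spread (ms):", times[950] - times[50])
```

Even with identical settings on every trial, the noise alone produces a wide spread of finishing times, which is the kind of variability the race model is fitted against.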

Humans aren’t the only primates that can make sharp stone tools

The Guardian reports: Monkeys have been observed producing sharp stone flakes that closely resemble the earliest known tools made by our ancient relatives, proving that this ability is not uniquely human.

Previously, modifying stones to create razor-edged fragments was thought to be an activity confined to hominins, the family including early humans and their more primitive cousins. The latest observations rewrite this view, showing that monkeys unintentionally produce almost identical artefacts simply by smashing stones together.

The findings put archaeologists on alert that they can no longer assume that stone flakes they discover are linked to the deliberate crafting of tools by early humans as their brains became more sophisticated.

Tomos Proffitt, an archaeologist at the University of Oxford and the study’s lead author, said: “At a very fundamental level – if you’re looking at a very simple flake – if you had a capuchin flake and a human flake they would be the same. It raises really important questions about what level of cognitive complexity is required to produce a sophisticated cutting tool.”

Unlike the flakes struck by early humans, those produced by the capuchins were the unintentional byproduct of hammering stones – an activity that the monkeys pursued decisively, but whose purpose was not clear. Originally scientists thought the behaviour was a flamboyant display of aggression in response to an intruder, but after more extensive observations the monkeys appeared to be seeking out the quartz dust produced by smashing the rocks, possibly because it has a nutritional benefit. [Continue reading…]

The vulnerability of monolingual Americans in an English-speaking world

Ivan Krastev writes: In our increasingly Anglophone world, Americans have become nakedly transparent to English speakers everywhere, yet the world remains bafflingly and often frighteningly opaque to monolingual Americans. While the world devours America’s movies and follows its politics closely, Americans know precious little about how non-Americans think and live. Americans have never heard of other countries’ movie stars and have only the vaguest notions of what their political conflicts are about.

This gross epistemic asymmetry is a real weakness. When WikiLeaks revealed the secret cables of the American State Department or leaked the emails of the Clinton campaign, it became a global news sensation and a major embarrassment for American diplomacy. Leaking Chinese diplomatic cables or Russian officials’ emails could never become a worldwide human-interest story, simply because only a relative handful of non-Chinese or non-Russians could read them, let alone make sense of them. [Continue reading…]

Although I’m pessimistic about the prospects of the meek inheriting the earth, the bilingual are in a very promising position. And Anglo-Americans should never forget that this is, after all, a country with a Spanish name. As for where I stand personally, I’m with the bilingual camp in spirit even if my own claim to be bilingual is a bit tenuous — an English speaker who understands American English but speaks British English; does that count?

Where did the first farmers live? Looking for answers in DNA

Carl Zimmer writes: Beneath a rocky slope in central Jordan lie the remains of a 10,000-year-old village called Ain Ghazal, whose inhabitants lived in stone houses with timber roof beams, the walls and floors gleaming with white plaster.

Hundreds of people living there worshiped in circular shrines and made haunting, wide-eyed sculptures that stood three feet high. They buried their cherished dead under the floors of their houses, decapitating the bodies in order to decorate the skulls.

But as fascinating as this culture was, something else about Ain Ghazal intrigues archaeologists more: It was one of the first farming villages to have emerged after the dawn of agriculture.

Around the settlement, Ain Ghazal farmers raised barley, wheat, chickpeas and lentils. Other villagers would leave for months at a time to herd sheep and goats in the surrounding hills.

Sites like Ain Ghazal provide a glimpse of one of the most important transitions in human history: the moment that people domesticated plants and animals, settled down, and began to produce the kind of society in which most of us live today.

But for all that sites like Ain Ghazal have taught archaeologists, they are still grappling with enormous questions. Who exactly were the first farmers? How did agriculture, a cornerstone of civilization itself, spread to other parts of the world?

Some answers are now emerging from a surprising source: DNA extracted from skeletons at Ain Ghazal and other early settlements in the Near East. These findings have already challenged long-held ideas about how agriculture and domestication arose. [Continue reading…]

There may be two trillion other galaxies

Brian Gallagher writes: In 1939, the year Edwin Hubble won the Benjamin Franklin award for his studies of “extra-galactic nebulae,” he paid a visit to an ailing friend. Depressed and confined at Las Encinas Hospital, a mental health facility, the friend, an actor and playwright named John Emerson, asked Hubble what — spiritually, cosmically — he believed in. In Edwin Hubble: Mariner of the Nebulae, Gale E. Christianson writes that Hubble, a Christian-turned-agnostic, “pulled no punches” in his reply. “The whole thing is so much bigger than I am,” he told Emerson, “and I can’t understand it, so I just trust myself to it, and forget about it.”

Even though he was moved by a sense of the universe’s immensity, it’s arresting to recall how small Hubble thought the cosmos was at the time. “The picture suggested by the reconnaissance,” he wrote in his 1937 book, The Observational Approach to Cosmology, “is a sphere, centred on the observer, about 1,000 million light-years in diameter, throughout which are scattered about 100 million nebulae,” or galaxies. “A suitable model,” he went on, “would be furnished by tennis balls, 50 feet apart, scattered through a sphere 5 miles in diameter.” From the instrument later named after him, the Hubble Space Telescope, launched in 1990, we learned from a series of pictures taken, starting five years later, just how unsuitable that model was.

The first is called the Hubble Deep Field, arguably “the most important image ever taken” according to this YouTube video. (I recommend watching it.) The Hubble gazed, for ten days, at an apparently empty spot in the sky, one about the size of a pinhead held up at arm’s length — a fragment one 24-millionth of the whole sky. The resulting picture had 3,000 objects, almost all of them galaxies in various stages of development, and many of them as far away as 12 billion light-years. Robert Williams, the former director of the Space Telescope Science Institute, wrote in the New York Times, “The image is really a core sample of the universe.” Next came the Ultra Deep Field, in 2003 (after a three-month exposure with a new camera, the Hubble image came back with 10,000 galaxies), then the eXtreme Deep Field, in 2012, a refined version of the Ultra that reveals galaxies that formed just 450 million years after the Big Bang. [Continue reading…]
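
The figures quoted above are enough for a back-of-the-envelope extrapolation: a patch covering roughly one 24-millionth of the sky containing about 3,000 galaxies already implies tens of billions of galaxies at that depth, while the two-trillion headline estimate rests on additional modelling of galaxies too faint to register even in these exposures. A quick check of the arithmetic, using only the numbers in the excerpt:

```python
# Back-of-the-envelope scaling from the Hubble Deep Field numbers quoted above.
# The two-trillion figure depends on further modelling of galaxies too faint
# to appear in the image, so this is only the naive estimate at this depth.
galaxies_in_patch = 3_000        # objects counted in the Deep Field image
sky_fraction = 1 / 24_000_000    # the patch is ~1/24-millionth of the sky

naive_total = galaxies_in_patch / sky_fraction
print(f"Naive whole-sky estimate: {naive_total:.1e} galaxies")  # ~7.2e10
```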
