Author Archives: Attention to the Unseen
Music: Milton Nascimento — ‘Cravo E Canela (Clove and Cinnamon)’
Music: Milton Nascimento — ‘Nothing Will Be As It Was (Nada Sera Como Antes)’
Thousands of Einstein documents now accessible online
The New York Times reports: They have been called the Dead Sea Scrolls of physics. Since 1986, the Princeton University Press and the Hebrew University of Jerusalem, to whom Albert Einstein bequeathed his copyright, have been engaged in a mammoth effort to study some 80,000 documents he left behind.
Starting on Friday, when Digital Einstein is introduced, anyone with an Internet connection will be able to share in the letters, papers, postcards, notebooks and diaries that Einstein left scattered in Princeton and in other archives, attics and shoeboxes around the world when he died in 1955.
The Einstein Papers Project, currently edited by Diana Kormos-Buchwald, a professor of physics and the history of science at the California Institute of Technology, has already published 13 volumes in print out of a projected 30. [Continue reading…]
Music: Milton Nascimento — ‘Francisco’
How Darkness Visible shined a light
Peter Fulham writes: Twenty-five years ago, in December, 1989, Darkness Visible, William Styron’s account of his descent into the depths of clinical depression and back, appeared in Vanity Fair. The piece revealed in unsparing detail how Styron’s lifelong melancholy at once gave way to a seductive urge to end his own life. A few months later, he released the essay as a book, augmenting the article with a recollection of when the illness first took hold of him: in Paris, as he was about to accept the 1985 Prix mondial Cino Del Duca, the French literary award. By the author’s own acknowledgement, the response from readers was unprecedented. “This was just overwhelming. It was just by the thousands that the letters came in,” he told Charlie Rose. “I had not really realized that it was going to touch that kind of a nerve.”
Styron may have been startled by the outpouring of mail, but in many ways, it’s easy to understand. The academic research on mental illness at the time was relatively comprehensive, but no one to date had offered the kind of report that Styron gave to the public: a firsthand account of what it’s like to have the monstrous condition overtake you. He also exposed the inadequacy of the word itself, which is still used interchangeably to describe a case of the blues, rather than the tempestuous agony sufferers know too well.
Depression is notoriously hard to describe, but Styron managed to split the atom. “I’d feel the horror, like some poisonous fogbank, roll in upon my mind,” he wrote in one chapter. In another: “It is not an immediately identifiable pain, like that of a broken limb. It may be more accurate to say that despair… comes to resemble the diabolical discomfort of being imprisoned in a fiercely overheated room. And because no breeze stirs this cauldron… it is entirely natural that the victim begins to think ceaselessly of oblivion.”
As someone who has fought intermittently with the same illness since college, I found those sentences cathartic, just as I suspect they were for the many readers who wrote to Styron disclosing unequivocally that he had saved their lives. As brutal as depression can be, one of the main ways a person can restrain it is through solidarity. You are not alone, Styron reminded his readers, and the fog will lift. Patience is paramount. [Continue reading…]
Music: Milton Nascimento — ‘Fairy Tale Song (Cadê)’
A universal logic of discernment
Natalie Wolchover writes: When in 2012 a computer learned to recognize cats in YouTube videos and just last month another correctly captioned a photo of “a group of young people playing a game of Frisbee,” artificial intelligence researchers hailed yet more triumphs in “deep learning,” the wildly successful set of algorithms loosely modeled on the way brains grow sensitive to features of the real world simply through exposure.
Using the latest deep-learning protocols, computer models consisting of networks of artificial neurons are becoming increasingly adept at image, speech and pattern recognition — core technologies in robotic personal assistants, complex data analysis and self-driving cars. But for all their progress in training computers to pick out salient features from other, irrelevant bits of data, researchers have never fully understood why the algorithms, or biological learning for that matter, work.
Now, two physicists have shown that one form of deep learning works exactly like one of the most important and ubiquitous mathematical techniques in physics, a procedure for calculating the large-scale behavior of physical systems such as elementary particles, fluids and the cosmos.
The new work, completed by Pankaj Mehta of Boston University and David Schwab of Northwestern University, demonstrates that a statistical technique called “renormalization,” which allows physicists to accurately describe systems without knowing the exact state of all their component parts, also enables the artificial neural networks to categorize data as, say, “a cat” regardless of its color, size or posture in a given video.
“They actually wrote down on paper, with exact proofs, something that people only dreamed existed,” said Ilya Nemenman, a biophysicist at Emory University. “Extracting relevant features in the context of statistical physics and extracting relevant features in the context of deep learning are not just similar words, they are one and the same.”
As for our own remarkable knack for spotting a cat in the bushes, a familiar face in a crowd or indeed any object amid the swirl of color, texture and sound that surrounds us, strong similarities between deep learning and biological learning suggest that the brain may also employ a form of renormalization to make sense of the world. [Continue reading…]
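To make the analogy a little more concrete, here is a minimal sketch in Python (using only NumPy) of the simplest renormalization-style step: majority-rule “block spin” coarse-graining of a one-dimensional chain of spins. This is only an illustration of the general idea, not the construction in Mehta and Schwab’s paper; it shows how each pass keeps the large-scale pattern and throws away microscopic detail, much as a pooling or feature-extraction layer in a deep network summarizes local structure.

```python
# Illustrative sketch only -- an analogy for renormalization-style coarse-graining,
# not the construction in Mehta and Schwab's paper. Requires only NumPy.
import numpy as np

def block_spin(spins, block_size=3):
    """Coarse-grain a 1D array of +1/-1 spins by majority vote within each block."""
    n_blocks = len(spins) // block_size
    blocks = spins[:n_blocks * block_size].reshape(n_blocks, block_size)
    # Majority rule: the sign of each block's sum becomes the new, coarser spin.
    return np.where(blocks.sum(axis=1) >= 0, 1, -1)

rng = np.random.default_rng(seed=0)
# A mostly "up" chain with random microscopic noise, standing in for raw data.
spins = np.where(rng.random(81) < 0.8, 1, -1)

level1 = block_spin(spins)   # 81 spins -> 27 coarser spins
level2 = block_spin(level1)  # 27 spins -> 9 even coarser spins

# The large-scale magnetization (the "relevant feature") survives each pass,
# while the microscopic noise is progressively discarded.
print(spins.mean(), level1.mean(), level2.mean())
```

In a deep network the analogous coarse-graining is learned from data rather than fixed by hand, but the flow from fine-grained detail to coarse, relevant features is the same picture the physicists formalized.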
Music: Milton Nascimento — ‘Raça’
Music: Ali Farka Toure & Toumani Diabate — ‘Simbo’
There is no language instinct. Chomsky was wrong
Vyvyan Evans writes: Imagine you’re a traveller in a strange land. A local approaches you and starts jabbering away in an unfamiliar language. He seems earnest, and is pointing off somewhere. But you can’t decipher the words, no matter how hard you try.
That’s pretty much the position of a young child when she first encounters language. In fact, she would seem to be in an even more challenging position. Not only is her world full of ceaseless gobbledygook; unlike our hypothetical traveller, she isn’t even aware that these people are attempting to communicate. And yet, by the age of four, every cognitively normal child on the planet has been transformed into a linguistic genius: this before formal schooling, before they can ride bicycles, tie their own shoelaces or do rudimentary addition and subtraction. It seems like a miracle. The task of explaining this miracle has been, arguably, the central concern of the scientific study of language for more than 50 years.
In the 1960s, the US linguist and philosopher Noam Chomsky offered what looked like a solution. He argued that children don’t in fact learn their mother tongue – or at least, not right down to the grammatical building blocks (the whole process was far too quick and painless for that). He concluded that they must be born with a rudimentary body of grammatical knowledge – a ‘Universal Grammar’ – written into the human DNA. With this hard-wired predisposition for language, it should be a relatively trivial matter to pick up the superficial differences between, say, English and French. The process works because infants have an instinct for language: a grammatical toolkit that works on all languages the world over.
At a stroke, this device removes the pain of learning one’s mother tongue, and explains how a child can pick up a native language in such a short time. It’s brilliant. Chomsky’s idea dominated the science of language for four decades. And yet it turns out to be a myth. A welter of new evidence has emerged over the past few years, demonstrating that Chomsky is plain wrong. [Continue reading…]
Music: AfroCubism — ‘Bensema’
Music: AfroCubism — ‘Karamo’
Long before we learned how to make wine, our ancestors acquired a taste for rotten fruit
Live Science: Human ancestors may have begun evolving the knack for consuming alcohol about 10 million years ago, long before modern humans began brewing booze, researchers say.
The ability to break down alcohol likely helped human ancestors make the most out of rotting, fermented fruit that fell onto the forest floor, the researchers said. Therefore, knowing when this ability developed could help researchers figure out when these human ancestors began moving to life on the ground, as opposed to mostly in trees, as earlier human ancestors had lived.
“A lot of aspects about the modern human condition — everything from back pain to ingesting too much salt, sugar and fat — goes back to our evolutionary history,” said lead study author Matthew Carrigan, a paleogeneticist at Santa Fe College in Gainesville, Florida. “We wanted to understand more about the modern human condition with regards to ethanol,” he said, referring to the kind of alcohol found in rotting fruit and that’s also used in liquor and fuel. [Continue reading…]
Milky Way over Devils Tower
Incredible panorama of the Milky Way over Devils Tower in Wyoming (Photo: David Lane) http://t.co/zrGJb1Lu8o pic.twitter.com/sCmADwAPX4
— Meredith Frost (@MeredithFrost) December 1, 2014
Why Devils Tower and not Devil’s Tower? The question might sound trivial when posed next to the expanse of the Milky Way, but for what it’s worth, here’s the answer from the United States Board on Geographic Names:
Since its inception in 1890, the U.S. Board on Geographic Names has discouraged the use of the possessive form—the genitive apostrophe and the “s”. The possessive form using an “s” is allowed, but the apostrophe is almost always removed. The Board’s archives contain no indication of the reason for this policy.
However, there are many names in the GNIS database that do carry the genitive apostrophe, because the Board chooses not to apply its policies to some types of features. Although the legal authority of the Board includes all named entities except Federal Buildings, certain categories—broadly determined to be “administrative”—are best left to the organization that administers them. Examples include schools, churches, cemeteries, hospitals, airports, shopping centers, etc. The Board promulgates the names, but leaves issues such as the use of the genitive or possessive apostrophe to the data owners.
Myths attempting to explain the policy include the idea that the apostrophe looks too much like a rock in water when printed on a map, and is therefore a hazard, or that in the days of “stick-up type” for maps, the apostrophe would become lost and create confusion. The probable explanation is that the Board does not want to show possession for natural features because, “ownership of a feature is not in and of itself a reason to name a feature or change its name.”
Since 1890, only five Board decisions have allowed the genitive apostrophe for natural features. These are: Martha’s Vineyard (1933), after an extensive local campaign; Ike’s Point in New Jersey (1944), because “it would be unrecognizable otherwise”; John E’s Pond in Rhode Island (1963), because otherwise it would be confused as John S Pond (note the lack of a period, which is also discouraged); Carlos Elmer’s Joshua View (1995), at the specific request of the Arizona State Board on Geographic and Historic Names because “otherwise three apparently given names in succession would dilute the meaning” (Joshua refers to a stand of trees); and Clark’s Mountain in Oregon (2002), approved at the request of the Oregon Board to correspond with the personal references of Lewis and Clark.