Dylan Matthews writes: “What would you think about a law that said that blacks couldn’t get a job without the government’s permission, or women couldn’t get a job without the government’s permission, or gays or Christians or anyone else?” George Mason economist Bryan Caplan asks. It’s a pretty easy question. Obviously, such a law is discriminatory on its face, serves no rational purpose, and is unacceptable in a liberal democracy. But Caplan continues: “So why, exactly, is it that people who are born on the wrong side of the border have to get government permission just to get a job?”
This is Caplan’s elevator pitch for open borders, an idea that for years was treated as deeply unserious, as an extreme straw man that nativists could beat up in the course of resisting more modest efforts to help immigrants. It had its defenders — philosopher Joseph Carens chief among them — but they were relatively lonely voices.
But in recent years, a small but devoted group of advocates has succeeded in turning open borders from a dirty word into a real movement with strong arguments backing it up. The team at OpenBorders.info — Vipul Naik, John Lee, Nathan Smith, Paul Crider — has led the charge, as Shaun Raviv wrote in an excellent profile of the group in the Atlantic. The University of Colorado’s Michael Huemer honed Carens’ moral case, while the Center for Global Development’s Michael Clemens has been hugely influential in arguing that we’re leaving trillions in potential economic growth on the table by enforcing border restrictions.
But few have been as prolific and forceful in their advocacy for the idea as Caplan. “The upside of open borders,” he once wrote, “would be the rapid elimination of absolute poverty on earth.” He is relentless at rebutting objections. It would take jobs away from native-born workers? It’d hurt growth in poor countries as more and more people leave? It’d leave us vulnerable to crime? No, no, and no. [Continue reading…]
Mark Oppenheimer writes: Several women told me that women new to the movement were often warned about the intentions of certain older men, especially [Michael] Shermer [the founder of Skeptic magazine]. Two more women agreed to go on the record, by name, with their Shermer stories… These stories help flesh out a man who, whatever his progressive views on science and reason, is decidedly less evolved when it comes to women.
Yet Shermer remains a leader in freethought — arguably the leader. And in his attitudes, he is hardly an exception. Hitchens, the best-selling author of God Is Not Great, who died in 2011, wrote a notorious Vanity Fair article called “Why Women Aren’t Funny.” Richard Dawkins, another author whose books have brought atheism to the masses, has alienated many women — and men — by belittling accusations of sexism in the movement; he seems to go out of his way to antagonize feminists generally, and just this past July 29 he tweeted, “Date rape is bad. Stranger rape at knifepoint is worse. If you think that’s an endorsement of date rape, go away and learn how to think.” And Penn Jillette, the talking half of the Penn and Teller duo, famously revels in using words like “cunt.”
The reality of sexism in freethought is not limited to a few famous leaders; it has implications throughout the small but quickly growing movement. Thanks to the internet, and to popular authors like Dawkins, Hitchens, and Sam Harris, atheism has greater visibility than at any time since the 18th-century Enlightenment. Yet it is now cannibalizing itself. For the past several years, Twitter, Facebook, Reddit, and online forums have become hostile places for women who identify as feminists or express concern about widely circulated tales of sexism in the movement. Some women say they are now harassed or mocked at conventions, and the online attacks — which include Jew-baiting, threats of anal rape, and other pleasantries — are so vicious that two activists I spoke with have been diagnosed with post-traumatic stress disorder. One of these women has been bedridden for two years.
To those outside the community, freethought would seem an unlikely candidate for this sort of internal strife. Aren’t atheists and agnostics supposed to be liberal, forward-thinking types? But from the beginning, there has been a division in freethought between the humanists, who see atheism as one part of a larger progressive vision for society, and the libertarians, for whom the banishment of God sits comfortably with capitalism, gun rights, and free-speech absolutism. One group sees men like Michael Shermer as freethought’s big problem, while the other sees defending them as crucial to freethought’s mission. [Continue reading…]
Jeremy Caradonna writes: The stock narrative of the Industrial Revolution is one of moral and economic progress. Indeed, economic progress is cast as moral progress.
The story tends to go something like this: Inventors, economists, and statesmen in Western Europe dreamed up a new industrialized world. Fueled by the optimism and scientific know-how of the Enlightenment, a series of heroic men — James Watt, Adam Smith, William Huskisson, and so on — fought back against the stultifying effects of regulated economies, irrational laws and customs, and a traditional guild structure that quashed innovation. By the mid-19th century, they had managed to implement a laissez-faire (“free”) economy that ran on new machines and was centered around modern factories and an urban working class. It was a long and difficult process, but this revolution eventually brought Europeans to a new plateau of civilization. In the end, Europeans lived in a new world based on wage labor, easy mobility, and the consumption of sparkling products.
Europe had rescued itself from the pre-industrial misery that had hampered humankind since the dawn of time. Cheap and abundant fossil fuel powered the trains and other steam engines that drove humankind into this brave new future. Later, around the time that Europeans decided that colonial slavery wasn’t such a good idea, they exported this revolution to other parts of the world, so that everyone could participate in freedom and industrialized modernity. They did this, in part, by “opening up markets” in primitive agrarian societies. The net result has been increased human happiness, wealth, and productivity — the attainment of our true potential as a species.
Sadly, this saccharine story still sweetens our societal self-image. Indeed, it is deeply ingrained in the collective identity of the industrialized world. The narrative has gotten more complex but remains, at bottom, a triumphalist story. Consider, for instance, the closing lines of Joel Mokyr’s 2009 The Enlightened Economy: An Economic History of Britain, 1700–1850: “Material life in Britain and in the industrialized world that followed it is far better today than could have been imagined by the most wild-eyed optimistic 18th-century philosophe — and whereas this outcome may have been an unforeseen consequence, most economists, at least, would regard it as an undivided blessing.”
The idea that the Industrial Revolution has made us not only more technologically advanced and materially furnished but also better for it is a powerful narrative and one that’s hard to shake. It makes it difficult to dissent from the idea that new technologies, economic growth, and a consumer society are absolutely necessary. To criticize industrial modernity is somehow to criticize the moral advancement of humankind, since a central theme in this narrative is the idea that industrialization revolutionized our humanity, too. Those who criticize industrial society are often met with defensive snarkiness: “So you’d like us to go back to living in caves, would ya?” or “you can’t stop progress!”
Narratives are inevitably moralistic; they are never created spontaneously from “the facts” but are rather stories imposed upon a range of phenomena that always include implicit ideas about what’s right and what’s wrong. The proponents of the Industrial Revolution inherited from the philosophers of the Enlightenment the narrative of human (read: European) progress over time but placed technological advancement and economic liberalization at the center of their conception of progress. This narrative remains today an ingrained operating principle that propels us in a seemingly unstoppable way toward more growth and more technology, because the assumption is that these things are ultimately beneficial for humanity.
Advocates of sustainability are not opposed to industrialization per se, and don’t seek a return to the Stone Age. But what they do oppose is the dubious narrative of progress caricatured above. Along with Jean-Jacques Rousseau, they acknowledge the objective advancement of technology, but they don’t necessarily think that it has made us more virtuous, and they don’t assume that the key values of the Industrial Revolution are beyond reproach: social inequality for the sake of private wealth; economic growth at the expense of everything, including the integrity of the environment; and the assumption that mechanized newness is always a positive thing. Above all, sustainability-minded thinkers question whether the Industrial Revolution has jeopardized humankind’s ability to live happily and sustainably upon the Earth. Have the fossil-fueled good times put future generations at risk of returning to the same misery that industrialists were in such a rush to leave behind? [Continue reading…]
Quanta Magazine: In his fourth-floor lab at Harvard University, Michael Desai has created hundreds of identical worlds in order to watch evolution at work. Each of his meticulously controlled environments is home to a separate strain of baker’s yeast. Every 12 hours, Desai’s robot assistants pluck out the fastest-growing yeast in each world — selecting the fittest to live on — and discard the rest. Desai then monitors the strains as they evolve over the course of 500 generations. His experiment, which other scientists say is unprecedented in scale, seeks to gain insight into a question that has long bedeviled biologists: If we could start the world over again, would life evolve the same way?
Many biologists argue that it would not, that chance mutations early in the evolutionary journey of a species will profoundly influence its fate. “If you replay the tape of life, you might have one initial mutation that takes you in a totally different direction,” Desai said, paraphrasing an idea first put forth by the biologist Stephen Jay Gould in the 1980s.
Desai’s yeast cells call this belief into question. According to results published in Science in June, all of Desai’s yeast varieties arrived at roughly the same evolutionary endpoint (as measured by their ability to grow under specific lab conditions) regardless of which precise genetic path each strain took. It’s as if 100 New York City taxis agreed to take separate highways in a race to the Pacific Ocean, and 50 hours later they all converged at the Santa Monica pier.
The findings also suggest a disconnect between evolution at the genetic level and at the level of the whole organism. [Continue reading…]
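The convergence Desai observed can be illustrated with a toy simulation — and it is only a toy, not his actual protocol: each replicate “world” accumulates random beneficial mutations whose payoff shrinks as fitness rises (diminishing-returns epistasis, one mechanism his field invokes for convergence), so populations taking entirely different mutational paths still arrive near the same fitness.

```python
import random
random.seed(0)

def evolve(generations=500, mutation_prob=0.1):
    """One replicate 'world': fitness climbs by random mutations whose
    benefit shrinks as fitness approaches a ceiling of 2.0."""
    fitness = 1.0
    path = []  # record which mutations fixed, so genetic routes can differ
    for gen in range(generations):
        if random.random() < mutation_prob:
            effect = random.uniform(0, 0.05) * (2.0 - fitness)
            fitness += effect
            path.append((gen, round(effect, 4)))
    return fitness, path

finals = [evolve()[0] for _ in range(100)]
spread = max(finals) - min(finals)
print(f"mean final fitness: {sum(finals)/len(finals):.3f}, spread: {spread:.3f}")
```

Across 100 replicates, no two mutation histories are alike, yet the final fitnesses land in a narrow band — the qualitative pattern of many roads converging at the same pier.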
Paul Currion writes: I became an aid worker in the 1990s, just as the break-up of Yugoslavia and the genocide in Rwanda cast a long shadow over the humanitarian sector. Those highly visible political failures were a major influence on my decision. I was possessed of a distressingly youthful belief that we could do better in the core humanitarian mission of saving lives, feeding the starving, healing the sick, and sheltering the displaced from natural disasters and armed conflicts.
I worked on co-ordination with the United Nations and non-governmental organisations (NGOs): identifying gaps and overlaps in the delivery of aid, then persuading humanitarian organisations to avoid those overlaps and fill those gaps, a slow and frustrating process of herding cats. Co-ordination had become increasingly important as the humanitarian sector expanded dramatically following the end of the Cold War. In Kosovo, after the NATO bombing campaign of 1999, we registered one NGO for every day of the year. A decade later, after the 2010 earthquake near Port-au-Prince, an estimated 3,000 NGOs descended on Haiti.
It wasn’t just the size of the humanitarian sector that was increasing – the scope of humanitarian work was widening as well. In the post-Cold War world, humanitarian organisations were increasingly enlisted as government sub-contractors in a larger political project: the post-conflict reconstruction of entire countries. After Kosovo I found myself in Afghanistan, where Secretary of State Colin Powell referred to NGOs as a ‘force multiplier’ for the US military; then Iraq, where Andrew Natsios, then head of US overseas aid, asserted, without apparent irony, that NGOs were ‘an arm of the US government’. [Continue reading…]
Quanta Magazine: The Western Ghats in India rise like a wall between the Arabian Sea and the heart of the subcontinent to the east. The 1,000-mile-long chain of coastal mountains is dense with lush rainforest and grasslands, and each year, clouds bearing monsoon rains blow in from the southwest and break against the mountains’ flanks, unloading water that helps make them hospitable to numerous spectacular and endangered species. The Western Ghats are one of the most biodiverse places on the planet. They were also the first testing ground of an unusual new theory in ecology that applies insights from physics to the study of the environment.
John Harte, a professor of ecology at the University of California, Berkeley, has a wry, wizened face and green eyes that light up when he describes his latest work. He has developed what he calls the maximum entropy (MaxEnt) theory of ecology, which may offer a solution to a long-standing problem in ecology: how to calculate the total number of species in an ecosystem, as well as other important numbers, based on extremely limited information — which is all that ecologists, no matter how many years they spend in the field, ever have. Five years ago, the Ghats convinced him that what he thought was possible from back-of-the-envelope calculations could work in the real world. He and his colleagues will soon publish the results of a study that estimates the number of insect and tree species living in a tropical forest in Panama. The paper will also suggest how MaxEnt could give species estimates in the Amazon, a swath of more than 2 million square miles of land that is notoriously difficult to survey.
John Harte thinks it is possible to predict the behavior of ecosystems using just a few key attributes. His method ignores nature’s small-grained complexities, which makes many ecologists skeptical of the project.
If the MaxEnt theory of ecology can give good estimates in a wide variety of scenarios, it could help answer the many questions that revolve around how species are spread across the landscape, such as how many would be lost if a forest were cleared, how to design wildlife preserves that keep species intact, or how many rarely seen species might be hiding in a given area. Perhaps more importantly, the theory hints at a unified way of thinking about ecology — as a system that can be described with just a few variables, with all the complexity of life built on top. [Continue reading…]
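Harte’s full theory (METE) imposes several constraints at once and predicts a log-series abundance distribution; as a minimal sketch of the MaxEnt recipe itself, one can maximize Shannon entropy over species abundances subject only to one constraint, a fixed mean abundance N/S, which yields an exponential distribution p(n) ∝ exp(−λn). The function below (names invented for illustration) solves for λ by bisection:

```python
import math

def maxent_abundance(S, N, n_max=None):
    """Maximum-entropy abundance distribution over n = 1..n_max, subject
    to a fixed mean abundance N/S. The solution is p(n) ∝ exp(-lam*n);
    lam is found by bisection so the distribution's mean equals N/S."""
    if n_max is None:
        n_max = N  # no species can exceed the total number of individuals
    target = N / S

    def mean_for(lam):
        weights = [math.exp(-lam * n) for n in range(1, n_max + 1)]
        Z = sum(weights)
        return sum(n * w for n, w in zip(range(1, n_max + 1), weights)) / Z

    lo, hi = 1e-9, 10.0       # mean_for is decreasing in lam on this bracket
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mean_for(mid) > target:
            lo = mid          # mean too large: need stronger decay
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    weights = [math.exp(-lam * n) for n in range(1, n_max + 1)]
    Z = sum(weights)
    return [w / Z for w in weights], lam

# e.g. a survey plot with 50 species and 2,000 individuals
p, lam = maxent_abundance(S=50, N=2000)
```

The point of the exercise is Harte’s: two coarse numbers (S and N) pin down an entire distribution, from which quantities like the expected number of rare species can then be read off.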
Quinn Norton: “It’s called ‘the crackpot realism of the present,’” someone said to me, and handed me a note. I folded up the note, and stuffed it in my purse. This was a phrase used to explain, much more clearly than I was doing at the time, the bias of thinking that now is right, forgetting that the future will look back on our ideas with the same curious and horrified amusement with which we watch the human past. It’s believing, without any good reason, that right now makes sense.
The present I was in right then didn’t make a lot of sense.
I was sitting in a cleared facility near Tysons Corner in Virginia, the beating heart of the industrial-military-intelligence-policing complex, the Office of the Director of National Intelligence. I was there to help the government. Of the places I did not expect to ever go, at least not of my free will, the ODNI would be up there.
A few weeks ago, a friend from the Institute for the Future [IFTF] asked me if I would fly to DC for a one day workshop on the future of identity with the Office of the Director of National Intelligence. “What?” I sputtered, “Did they google me?” and then, mentally: Duh. The ODNI can do a lot more than google me.
I knew IFTF had intel clients, with whom I have occasionally chatted at events in the past. My policy when confronted with spooks asking questions about how the world works is to give them as much information as I can — one of my biggest problems with how security services work is their lack of wisdom. If I can reach people in positions of power and persuade them to critically examine that power, I consider that a win. I also consider it a long shot.
An invite from the ODNI is a strange thing. I’ve been publicly critical of them, sometimes viciously so. A few days earlier I tweeted that their director should be publicly tried for lying to Congress. I’ve written about the toxicity of the NSA spying (under ODNI direction), the corrupt fictions of Anonymous staged by the FBI (FBI/NSB is within ODNI’s area) and spoken out countless times in the last eight years against warrantless spying. I have even less love for the FBI and DOJ.
I turned the offer over in my head. I was influenced by a few things: yes, it was paid, but not well paid. It was what I normally get from IFTF for a day of my time, and given the travel commitment, a bit low. I weighed the official imprimatur of involvement, and that was a factor. I am afraid of being pursued and harassed by my government. This has never happened to me in relation to my work, though I have been turned down for housing by people who feared I might bring police attention. It has happened to my friends, sources, and associates. I know what it feels like, what they do when you’re a target, because I have been subject to terrorizing tactics and harassment because of whom I chose to love. I have publicly acknowledged that I self-censor because of this fear. I have a child to raise, and you can’t do that while you fight for your life and freedom in court. Raising my profile with the government as an expert probably makes me harder to harass.
I told my IFTF contact I don’t sign NDAs (which he already knew) and that I’d have to be public about my attendance and write about it. He told me they were publicly publishing their work for the ODNI too. “Huh,” I said to my screen. The organizers were on board with all of it. They wanted me in particular.
Finally, I thought about the hell I would get from the internet — like government harassment, internet harassment is part of the difficult and hated process of self-censorship for me.
In the end, I said yes, because you only get so far talking to your friends. [Continue reading…]
Noah Berlatsky writes: Chance is an uncomfortable thing. So Curtis Johnson argues in Darwin’s Dice: The Idea of Chance in the Thought of Charles Darwin, and he makes a compelling case. The central controversy, and the central innovation, in Darwin’s work is not the theory of natural selection itself, according to Johnson, but Darwin’s more basic, and more innovative, turn to randomness as a way to explain natural phenomena. This application of randomness was so controversial, Johnson argues, that Darwin tried to cover it up, replacing words like “accident” and “chance” with terms like “spontaneous variation” in later editions of his work. Nonetheless, the terminological shift was cosmetic: Randomness remained, and still remains, the disturbing center of Darwin’s theories.
Johnson, a political theorist at Lewis & Clark College, explains that there are two basic kinds of chance in Darwin’s thought. The first—most familiar and least disconcerting—is chance as probability. According to the theory of natural selection, individuals with advantageous adaptations are most likely to survive. A giraffe with a longer neck has a better shot of reaching those lofty leaves and living to munch another day; a polar bear blessed with a warmer coat has a higher probability of surviving a frigid winter than one with less hair. The long-necked giraffe may not always win—it may, for example, be pulverized by a meteor before it can pass on its long-necked genes. But over time, the odds will go its way. There is randomness here, but it is controlled and predictable: It works in accordance with a rule. Natural selection makes sense.
The second kind of chance in Darwin’s work, though, is more mysterious. For natural selection to work, you need to have a range of traits to select among. That range is provided by individual variation, the fact that two different animals (whether giraffe or bear) are different from each other. Some giraffes have longer necks than others. Some bears have thicker fur than others. Why should this be? Darwin’s answer was chance. [Continue reading…]
Natalie Wolchover writes: Though galaxies look larger than atoms and elephants appear to outweigh ants, some physicists have begun to suspect that size differences are illusory. Perhaps the fundamental description of the universe does not include the concepts of “mass” and “length,” implying that at its core, nature lacks a sense of scale.
This little-explored idea, known as scale symmetry, constitutes a radical departure from long-standing assumptions about how elementary particles acquire their properties. But it has recently emerged as a common theme of numerous talks and papers by respected particle physicists. With their field stuck at a nasty impasse, the researchers have returned to the master equations that describe the known particles and their interactions, and are asking: What happens when you erase the terms in the equations having to do with mass and length?
Nature, at the deepest level, may not differentiate between scales. With scale symmetry, physicists start with a basic equation that sets forth a massless collection of particles, each a unique confluence of characteristics such as whether it is matter or antimatter and has positive or negative electric charge. As these particles attract and repel one another and the effects of their interactions cascade like dominoes through the calculations, scale symmetry “breaks,” and masses and lengths spontaneously arise.
Similar dynamical effects generate 99 percent of the mass in the visible universe. Protons and neutrons are amalgams — each one a trio of lightweight elementary particles called quarks. The energy used to hold these quarks together gives them a combined mass that is around 100 times more than the sum of the parts. “Most of the mass that we see is generated in this way, so we are interested in seeing if it’s possible to generate all mass in this way,” said Alberto Salvio, a particle physicist at the Autonomous University of Madrid and the co-author of a recent paper on a scale-symmetric theory of nature. [Continue reading…]
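The “around 100 times” figure is easy to check against rough Particle Data Group current-quark masses (the values below are approximate and carry real uncertainty):

```python
# Approximate current-quark masses in MeV (PDG central values, roughly)
m_up, m_down = 2.2, 4.7
m_proton = 938.3   # measured proton mass, MeV

quark_sum = 2 * m_up + m_down   # proton = uud
ratio = m_proton / quark_sum
print(f"quark masses sum to {quark_sum:.1f} MeV; "
      f"the proton is ~{ratio:.0f}x heavier")
# → quark masses sum to 9.1 MeV; the proton is ~103x heavier
```

The missing ~99 percent is the dynamical mass generated by the strong interaction binding the quarks, which is exactly the kind of effect the scale-symmetry theorists hope accounts for all mass.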
Michael Graziano writes: About four thousand years ago, somewhere in the Middle East — we don’t know where or when, exactly — a scribe drew a picture of an ox head. The picture was rather simple: just a face with two horns on top. It was used as part of an abjad, a set of characters that represent the consonants in a language. Over thousands of years, that ox-head icon gradually changed as it found its way into many different abjads and alphabets. It became more angular, then rotated to its side. Finally it turned upside down entirely, so that it was resting on its horns. Today it no longer represents an ox head or even a consonant. We know it as the capital letter A.
The moral of this story is that symbols evolve.
Long before written symbols, even before spoken language, our ancestors communicated by gesture. Even now, a lot of what we communicate to each other is non-verbal, partly hidden beneath the surface of awareness. We smile, laugh, cry, cringe, stand tall, shrug. These behaviours are natural, but they are also symbolic. Some of them, indeed, are pretty bizarre when you think about them. Why do we expose our teeth to express friendliness? Why do we leak lubricant from our eyes to communicate a need for help? Why do we laugh?
One of the first scientists to think about these questions was Charles Darwin. In his 1872 book, The Expression of the Emotions in Man and Animals, Darwin observed that all people express their feelings in more or less the same ways. He argued that we probably evolved these gestures from precursor actions in ancestral animals. A modern champion of the same idea is Paul Ekman, the American psychologist. Ekman categorised a basic set of human facial expressions — happy, frightened, disgusted, and so on — and found that they were the same across widely different cultures. People from tribal Papua New Guinea make the same smiles and frowns as people from the industrialised USA.
Our emotional expressions seem to be inborn, in other words: they are part of our evolutionary heritage. And yet their etymology, if I can put it that way, remains a mystery. Can we trace these social signals back to their evolutionary root, to some original behaviour of our ancestors? To explain them fully, we would have to follow the trail back until we left the symbolic realm altogether, until we came face to face with something that had nothing to do with communication. We would have to find the ox head in the letter A.
I think we can do that. [Continue reading…]
For Common Dreams, Lance Tapley reports: Like many other news websites, Common Dreams has been plagued by inflammatory anti-Semitic comments following its stories. But on Common Dreams these posts have been so frequent and intense they have driven away donors from a nonprofit dependent on reader generosity.
A Common Dreams investigation has discovered that more than a thousand of these damaging comments over the past two years were written with a deceptive purpose by a Jewish Harvard graduate in his thirties who was irritated by the website’s discussion of issues involving Israel.
His intricate campaign, which he has admitted to Common Dreams, included posting comments by a screen name, “JewishProgressive,” whose purpose was to draw attention to and denounce the anti-Semitic comments that he had written under many other screen names.
The deception was many-layered. At one point he had one of his characters charge that the anti-Semitic comments and the criticism of the anti-Semitic comments must be written by “internet trolls who have been known to impersonate anti-Semites in order to then double-back and accuse others of supporting anti-Semitism” — exactly what he was doing. (Trolls are posters who foment discord.)
The impersonation, this character wrote, must be part of an “elaborate Hasbara setup,” referring to an Israeli international public-relations campaign. When Common Dreams finally confronted the man behind the deceptive posting, he denied that he himself was involved with Hasbara.
His posting on Common Dreams illustrates the susceptibility of website comment threads to massive manipulation. [Continue reading…]
Ali Khedery assesses Iraq’s newly designated prime minister, Haider al-Abadi: Like Mr. Maliki, Mr. Abadi is a Shiite Islamist Arab and a longtime leader in the Dawa Party, an entity that was founded to combat Iraq’s pre-2003 secular state and create a Shiite theocracy. Fueled by generous support from Iran’s intelligence services, Dawa was motivated to bring about change by any means necessary in the 1980s. Its members staged terrorist attacks across Iraq and elsewhere in the Middle East in a bid to weaken Hussein and his Western backers. The American and French embassies in Kuwait were bombed; a housing compound of the defense contractor Raytheon was overrun; and there were countless assassination attempts against Hussein and his senior deputies. Sensing an existential threat, the regime declared membership in Dawa to be a capital offense and thousands of suspected members were rounded up, tortured and executed.
Those events still resonate in every Iraqi leader’s mind — on both sides of the sectarian divide. The secular Sunnis and Shiites who were sympathetic to Hussein’s Baath Party rule view Dawa members and other Shiite Islamists as puppets of Iran. Likewise, they see Sunni Islamist parties like Speaker Salim al-Jubouri’s Iraqi Islamic Party as mere extensions of the fanatical Muslim Brotherhood. The Islamists see the secularists as drinking, smoking, whoring agents of Western intelligence services on an unholy crusade to separate mosque and state. Their visions of life, religion and politics are fundamentally incompatible, and that’s the heart of Iraqis’ violent struggle to define themselves and their future.
Increasing Iranian influence has only made matters worse. America sat back and watched in 2010 as Mr. Maliki’s cabinet was formed by Iranian generals in Tehran, thereby assuring its strategic defeat in Iraq. ISIS is a direct outgrowth of that defeat. Sensing an American vacuum, both Mr. Maliki and his Iranian patrons sought to consolidate their gains by economically, politically and physically crushing their Sunni and Kurdish rivals. Consequently, today’s “Iraqi security forces” are almost exclusively Shiite, reinforced by militias financed, trained, armed and directed by Iran. Given Mr. Maliki’s blatant sectarianism and his complicity in Bashar al-Assad’s campaign of genocide against Syria’s Sunnis, Sunni radicalization and the spread of ISIS across the region were predictable.
But if anyone has the potential to unite Iraq and hold it together in the face of ISIS terrorism and Iranian meddling, it is Mr. Abadi. In a society where name and upbringing count for a lot, he comes from a respected Baghdad family and was raised in an upscale neighborhood. He studied at one of the capital’s best high schools, earned a degree from one of its top universities and later received a doctorate in engineering in Britain.
While Mr. Maliki spent his years in exile in Iran and Syria and earned degrees in Islamic studies and Arabic literature, Mr. Abadi, a fluent English speaker, worked his own way through his long and costly studies abroad. In meetings over the past decade, Mr. Abadi always impressed me and other American diplomats with his self-effacing humor, humility, willingness to listen and ability to compromise — extremely rare traits among Iraq’s political elite, and precisely the characteristics that are needed to help heal the wounds Iraqis sustained under Hussein and Mr. Maliki. [Continue reading…]
Newsweek reports: In a grim government compound 40 km from Vienna, five young Syrian men are huddled together examining the screen of a battered mobile phone. Beside them is a rickety plastic chair with a glass of sweet, amber-coloured tea perched on top, a vestige of Arab domesticity. This day is like any other: the young men pore over family photographs and talk incessantly of home as they wait for the residence permits that will allow them to start their lives here in Austria.
“Internet and talk,” says one of them, gesturing around the bare dormitory. “There is nothing else.” This compound could be anywhere; as it happens, it borders a quiet village with manicured gardens, picket fences and residents who keep to themselves – a far cry from the war-ravaged Syrian towns these men have abandoned. For the past few weeks, the village of Muthmannsdorf has been a place of surreal limbo, where they wait for the life of freedom they believe Europe holds. It has been hard won.
Murat is an ethnic Turkmen from Damascus, a 28-year-old with striking green eyes and prematurely white-flecked hair. The photo everyone is admiring is of his daughter, three-year-old Aya. Murat fled from Syria with his parents, wife and daughter in August 2012, when Bashar al-Assad’s army started dropping barrel bombs around their home in the southern suburbs of Damascus. Murat knew that even if they survived, he would be forced to join the army and might never see his family again. They drove to Tripoli in Lebanon, where they boarded a boat to the port of Mersin on the southeastern coast of Turkey, and then travelled on to Istanbul. There, with no official refugee status, no passport and no right to work, Murat left his pregnant wife and child in the care of his elder brother and set out for the more promising cities of Europe. Crossing to Greece one night in a rubber dinghy, he began a seven-month odyssey during which he entrusted himself to a mafia of people smugglers, risked clandestine border crossings and Balkan police patrols and now, finally, confronts the stony face of Austrian bureaucracy. After weeks on the road, it’s time to wait.
Around 2.8 million Syrians have fled their homeland since conflict broke out in their country three years ago, and, while most are living in camps in Turkey, Jordan and Lebanon, those who can afford the journey are headed to Europe. I am in Austria to meet Murat and his friends, who made their way here overland from Greece, having traced their route, with the luxury of an EU passport, from the Turkish-Syrian border to Istanbul, then Athens and finally Vienna. At every stop I have encountered young Syrian men armed with their families’ savings and a few contacts in their mobile phones, relatively undaunted by the dangers of capsizing boats, impenetrable asylum procedures and the lack of any common language with the officials and smugglers who control their fate. Many of these men left Syria to avoid joining either the Islamic State rebels or Assad’s army, escaping without the passports that they could only claim by alerting the authorities to their presence – and subsequent absence. Many of them have left families behind. “The journey is too difficult for women and children,” says Khaled, a small, hoarse man in his late thirties. “We barely made it ourselves.” [Continue reading…]
In “The most wanted man in the world,” his feature article for Wired on Edward Snowden, James Bamford writes: The massive surveillance effort was bad enough, but Snowden was even more disturbed to discover a new, Strangelovian cyberwarfare program in the works, codenamed MonsterMind. The program, disclosed here for the first time, would automate the process of hunting for the beginnings of a foreign cyberattack. Software would constantly be on the lookout for traffic patterns indicating known or suspected attacks. When it detected an attack, MonsterMind would automatically block it from entering the country — a “kill” in cyber terminology.
Programs like this had existed for decades, but MonsterMind software would add a unique new capability: Instead of simply detecting and killing the malware at the point of entry, MonsterMind would automatically fire back, with no human involvement. That’s a problem, Snowden says, because the initial attacks are often routed through computers in innocent third countries. “These attacks can be spoofed,” he says. “You could have someone sitting in China, for example, making it appear that one of these attacks is originating in Russia. And then we end up shooting back at a Russian hospital. What happens next?”
In addition to the possibility of accidentally starting a war, Snowden views MonsterMind as the ultimate threat to privacy because, in order for the system to work, the NSA first would have to secretly get access to virtually all private communications coming in from overseas to people in the US. “The argument is that the only way we can identify these malicious traffic flows and respond to them is if we’re analyzing all traffic flows,” he says. “And if we’re analyzing all traffic flows, that means we have to be intercepting all traffic flows. That means violating the Fourth Amendment, seizing private communications without a warrant, without probable cause or even a suspicion of wrongdoing. For everyone, all the time.”
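Snowden’s spoofing worry can be made concrete with a toy sketch. MonsterMind’s actual design has never been published, so everything below is hypothetical, including the signature, the addresses, and the function names; the sketch only illustrates why an automated system that retaliates at a packet’s self-reported source address can end up striking an innocent third party.

```python
# Illustrative sketch only: this is NOT the NSA's implementation, whose design
# is not public. It shows the attribution problem Snowden describes: a packet's
# "source" address is set by the sender and is trivially forged.

from dataclasses import dataclass

@dataclass
class Packet:
    src_ip: str      # claimed origin, written by the sender, spoofable
    payload: bytes

# Hypothetical signature of a known attack.
KNOWN_ATTACK_SIGNATURE = b"\xde\xad\xbe\xef"

def looks_like_attack(pkt: Packet) -> bool:
    """Pattern-match traffic against a known attack signature."""
    return KNOWN_ATTACK_SIGNATURE in pkt.payload

def automated_response(pkt: Packet) -> str:
    """A fully automated detect-block-retaliate loop with no human involved.
    Note that retaliation is aimed at whatever address the packet *claims*
    to come from, not at the actual attacker."""
    if looks_like_attack(pkt):
        return f"blocked; counterattack launched at {pkt.src_ip}"
    return "allowed"

# An attacker forges the source field so the attack appears to come from
# an uninvolved machine, which then absorbs the automated counterattack.
spoofed = Packet(src_ip="198.51.100.7", payload=KNOWN_ATTACK_SIGNATURE)
print(automated_response(spoofed))
```

The point of the sketch is structural rather than cryptographic: because the retaliation step keys off a forgeable field and no human reviews the decision, the system converts a spoofed header into real harm at a third party, which is exactly the “Russian hospital” scenario Snowden raises.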
Gregory D. Johnsen writes: Muhammad al-Tuhayf was relaxing at his house late in the afternoon on Dec. 12, 2013, when his iPhone rang. A boxy, tired-looking Yemeni shaykh with large hands and a slow voice, Tuhayf heard the news: A few miles from where he was sitting, along a rutted-out dirt track that snaked through the mountains and wadis of central Yemen, U.S. drones had fired four missiles at a convoy of vehicles. Drone strikes were nothing new in Yemen — there had been one four days earlier, another one a couple weeks before that, and a burst of eight strikes in 12 days in late July and August that had set the country on edge. But this one was different: This time the Americans had hit a wedding party. And now the government needed Tuhayf’s help.
The corpses had already started to arrive in the provincial capital of Radaa, and by the next morning angry tribesmen were lining the dead up in the street. Laid out side by side on bright blue tarps and wrapped in cheap blankets, what was left of the men looked distorted by death. Heads were thrown back at awkward angles, splattered with blood that had caked and dried in the hours since the strike. Faces that had been whole were now in pieces, missing chunks of skin and bone, and off to one side, as if he didn’t quite belong, lay a bearded man with no visible wounds.
Clustered around them in a sweaty, jostling circle, dozens of men bumped up against one another as they struggled for position and a peek at the remains. Above the crowd, swaying out over the row of bodies as he hung onto what appeared to be the back of a truck with one hand, a leathery old Yemeni screamed into the crowd. “This is a massacre,” he shouted, his arm slicing through the air. “They were a wedding party.” Dressed in a gray jacket and a dusty beige robe with prayer beads draped over his dagger, the man was shaking with fury as his voice faltered under the strain. “An American drone killed them,” he croaked with another wild gesture from his one free hand. “Look at them.”
A few miles outside of town, Tuhayf already knew what he had to do. This had happened in his backyard; he was one of the shaykhs on the ground. Only three hours south of the capital, the central government held little sway in Radaa. Like a rural sheriff in a disaster zone, he was a local authority, someone who was known and respected. And on Dec. 12, that meant acting as a first responder. Tuhayf needed to assess the situation and deal with the fallout. Every few minutes his phone went off again, the marimba ringtone sounding with yet another update. Already he was hearing reports that angry tribesmen had cut the road north. Frightened municipal employees, worried that they might be targeted, kept calling, begging for his help. So did the governor, who was three hours away at his compound in Sanaa.
It didn’t take Tuhayf long to reach a conclusion. The Americans had made a mess, and to clean it up he was going to need money and guns.
This is the other side of America’s drone program: the part that comes after the missiles fly and the cars explode, when the smoke clears and the bodies are sorted. Because it is here, at desert strike sites across the Middle East, where unsettling questions emerge about culpability and responsibility — about the value of a human life and assessing the true costs of a surgical war.
For much of the past century, the United States has gone to war with lawyers, men and women who follow the fighting, adjudicating claims of civilian casualties and dispensing cash for errors. They write reports and interview survivors. But what happens when there are no boots on the ground? When the lawyers are thousands of miles away and dependent on aerial footage that is as ambiguous as it is inconclusive? How do you determine innocence or guilt from a pre-strike video? When everyone has beards and guns, like they do in rural Yemen, can you tell the good guys from the bad? Is it even possible? And when the U.S. gets it wrong, when it kills the wrong man: What happens then? Who is accountable when a drone does the killing? [Continue reading…]
The Wall Street Journal reports: On Sept. 7, 2000, in the waning days of the Clinton administration, a U.S. Predator drone flew over Afghanistan for the first time. The unmanned, unarmed plane buzzed over Tarnak Farms, a major al Qaeda camp. When U.S. analysts later pored over video footage from this maiden voyage, they were struck by the image of a commandingly tall man clad in white robes. CIA analysts later concluded that he was Osama bin Laden.
From that first mission, the drone program has grown into perhaps the most prominent instrument of U.S. counterterrorism policy — and, for many in the Muslim world, a synonym for American callousness and arrogance. The U.S. has used drones to support ground troops in Iraq and Afghanistan and, particularly under President Barack Obama, to hammer the high command of al Qaeda. A recent study by the Stimson Center, a think tank in Washington, D.C., estimates that U.S. drone strikes in Pakistan have killed 2,000 to 4,000 people. Other countries are trying to get into the act, including Iran, which U.S. officials say has flown drones over Iraq during the current crisis there.
Drones seem to be everywhere these days, buzzing into civilian life and even pop culture. French players complained before the World Cup that a mysterious drone-borne camera had spied on their training sessions. Amazon owner Jeff Bezos hopes to use drones for faster home delivery. Tom Cruise starred last summer as a futuristic drone repairman in the sci-fi thriller “Oblivion,” and Captain America himself faced down lethal super-drones in this spring’s “The Winter Soldier.” Hollywood is even using drones in real life, helping to film such tricky scenes as the chase early in the 2012 James Bond caper “Skyfall,” when Daniel Craig as 007 races across the rooftops of Istanbul.
But as ubiquitous as Predators, Reapers, Global Hawks and their ilk may now seem, the U.S. actually stumbled into the drone era. Washington got into the business of using drones for counterterrorism well before 9/11 — not out of any steely strategic design or master plan but out of bureaucratic frustration, bickering and a series of only half-intentional decisions. [Continue reading…]
Dwyer Gunn writes: Bethesda in the state of Maryland is the kind of safe, upscale Washington DC suburb that well-educated, high-earning professionals retreat to when it’s time to raise a family. Some 80 per cent of the city’s adult residents have college degrees. Bethesda’s posh Bradley Manor-Longwood neighbourhood was recently ranked the second richest in the country. And yet, on 11 March 2011, a young woman was brutally murdered by a fellow employee at a local Lululemon store (where yoga pants retail for about $100 each). Two employees of the Apple store next door heard the murder as it occurred, debated, and ultimately decided not to call the police.
If the attack had occurred in poor, crowded, crime-ridden Rio de Janeiro, the outcome might have been different: in one series of experiments, researchers found bystanders in the Brazilian city to be extraordinarily helpful, stepping in to offer a hand to a blind person and aiding a stranger who dropped a pen nearly 100 per cent of the time. This apparent paradox reflects a nuanced understanding of ‘bystander apathy’, the term coined by the US psychologists John Darley and Bibb Latané in the 1960s to describe the puzzling, and often horrifying, failure of witnesses to intervene in violent crimes or other tragedies.
The phenomenon first received widespread attention in 1964, when the New York bar manager Kitty Genovese was sexually assaulted and murdered outside her apartment building in the borough of Queens. Media coverage focused on the alleged inaction of her neighbours – The New York Times’s defining story opened with the chilling assertion that: ‘For more than half an hour, 38 respectable, law-abiding citizens in Queens watched a killer stalk and stab a woman in three separate attacks.’ Over the years, that media account has been largely debunked, but the incident served to establish a narrative that persists today: society has changed irrevocably for the worse, and the days of neighbour helping neighbour are a nicety of the past. True or not, the Genovese story became a cultural meme for callousness and man’s inhumanity to man, a trend said to signify our modern age.
It also launched a whole new field of study. [Continue reading…]
Jill Neimark writes: In 1962, physicist and historian Thomas Kuhn proposed that science makes progress not just through the gradual accumulation and analysis of knowledge, but also through periodic revolutions in perspective. Anomalies and incongruities that may have been initially ignored drive a field into crisis, he argued, and eventually force a new scientific framework. Copernicus, Darwin, Newton, Galileo, Pasteur — all spearheaded what Kuhn called a “paradigm shift.”
Thomas Kuhn is Claudia Miller’s hero. An immunologist and environmental health expert at the University of Texas School of Medicine in San Antonio, and a visiting senior scientist at Harvard University, Miller lives by Kuhn’s maxim that “the scientist who embraces a new paradigm is like the man wearing inverting lenses…[he] has undergone a revolutionary transformation of vision.”
Miller has spent 30 years hammering out a theory to explain the contemporary surge in perplexing, multi-symptom illnesses — from autism to Gulf War Syndrome — a theory that, she believes, represents a Kuhnian shift in medicine. She calls it “TILT,” short for Toxicant Induced Loss of Tolerance.
TILT posits that a surprising range of today’s most common chronic conditions are linked to daily exposure to very low doses of synthetic chemicals that have been in mass production since World War II. These include organophosphate pesticides, flame-retardants, formaldehyde, benzene, and tens of thousands of other chemicals.
TILT, says Miller, is a two-step process. Genetically susceptible individuals get sick after a toxic exposure or series of exposures. Instead of recovering, their neurological and immune systems become “tilted.” Then, they lose tolerance to a wide range of chemicals commonly found at low doses in everyday life and develop ongoing illnesses. [Continue reading…]