Pope says welcoming refugees keeps us safe from terrorism

Catholic News Agency reports: Pope Francis has encouraged Europeans to welcome refugees, calling authentic hospitality “our greatest security against hateful acts of terrorism.”

Francis spoke Saturday to alumni of Jesuit schools in Europe who were in Rome for a conference on refugees.

The pope said: “I encourage you to welcome refugees into your homes and communities, so that their first experience of Europe is not the traumatic experience of sleeping cold on the streets, but one of warm welcome.”

He said each refugee “has a name, a face and a story, as well as an inalienable right to live in peace and to aspire to a better future” for their children.

“At this place and time in history, there is great need for men and women who hear the cry of the poor and respond with mercy and generosity,” the pope told a group of Jesuit alumni Sept. 17.

He noted how there are “tragically more than 65 million” forcibly displaced persons around the globe, calling the number “unprecedented” and “beyond all imagination.” [Continue reading…]


Ethical shifts come with thinking in a different language

Julie Sedivy writes: What defines who we are? Our habits? Our aesthetic tastes? Our memories? If pressed, I would answer that if there is any part of me that sits at my core, that is an essential part of who I am, then surely it must be my moral center, my deep-seated sense of right and wrong.

And yet, like many other people who speak more than one language, I often have the sense that I’m a slightly different person in each of my languages — more assertive in English, more relaxed in French, more sentimental in Czech. Is it possible that, along with these differences, my moral compass also points in somewhat different directions depending on the language I’m using at the time?

Psychologists who study moral judgments have become very interested in this question. Several recent studies have focused on how people think about ethics in a non-native language — as might take place, for example, among a group of delegates at the United Nations using a lingua franca to hash out a resolution. The findings suggest that when people are confronted with moral dilemmas, they do indeed respond differently when considering them in a foreign language than when using their native tongue.

In a 2014 paper led by Albert Costa, volunteers were presented with a moral dilemma known as the “trolley problem”: imagine that a runaway trolley is careening toward a group of five people standing on the tracks, unable to move. You are next to a switch that can shift the trolley to a different set of tracks, thereby sparing the five people, but resulting in the death of one who is standing on the side tracks. Do you pull the switch?

Most people agree that they would. But what if the only way to stop the trolley is by pushing a large stranger off a footbridge into its path? People tend to be very reluctant to say they would do this, even though in both scenarios, one person is sacrificed to save five. But Costa and his colleagues found that posing the dilemma in a language that volunteers had learned as a foreign tongue dramatically increased their stated willingness to shove the sacrificial person off the footbridge, from fewer than 20% of respondents working in their native language to about 50% of those using the foreign one. [Continue reading…]


What to do about Liberia’s island colony of abandoned lab chimps?

By Ben Garrod, Anglia Ruskin University

The story of Liberia’s former research chimpanzees is both well-known and contentious. A non-profit blood bank, the New York Blood Centre (NYBC), set up a virus-testing laboratory in the country in 1974, and wild chimpanzees were trapped from their forests and housed within the “Vilab II” facility. They were subjected to medical experiments and were intentionally infected with hepatitis and other pathogens to help develop a range of vaccines.

In 2005, the director of Vilab II, Alfred M Prince, announced that all research had been terminated and that the NYBC had started to make “lifetime care” arrangements for the chimpanzees through an endowment. Over the next ten years, the chimps were “retired” to a series of small islands in a river estuary, receiving food, water and necessary captive care (at a cost of around US$20,000 a month).

Then, in March 2015, the NYBC withdrew its help and financial support and disowned Prince’s commitments. The move left about 85 chimps to fend for themselves. Escape is impossible, as chimpanzees cannot swim well, and many are thought to have died from a lack of food and water.

The Liberian government owns the chimps as a legal technicality, but the day-to-day management of the animals and of the experiments was carried out by the NYBC, and the technicality in no way absolves it of ultimate responsibility. The NYBC has nonetheless used it to distance itself from calls to continue funding the chimps’ care. In a statement last year it said it had had “unproductive discussions” with the Liberian government and that it “never had any obligation for care for the chimps, contractual or otherwise”. It has also said that it can “no longer sustain diverting millions of dollars away from our lifesaving mission”.

Understandably, animal rights groups are vocally opposing the blood bank’s actions.

[Read more…]


Torturing animals injures humanity


John P. Gluck writes: Five years ago, the National Institutes of Health all but ended biomedical and behavioral research on chimpanzees, concluding that, as the closest human relative, they deserved “special consideration and respect.”

But chimpanzees were far from the only nonhuman primates used in research then, or now. About 70,000 other primates are still living their lives as research subjects in labs across the United States.

On Wednesday, the N.I.H. will hold a workshop on “continued responsible research” with these animals. This sounds like a positive development. But as someone who spent decades working almost daily with macaque monkeys in primate research laboratories, I know firsthand that “responsible” research is not enough. What we really need to examine is the very moral ground of animal research itself.

Like many researchers, I once believed that intermittent scientific gains justified methods that almost always did harm. As a graduate student in the late 1960s, I came to see that my natural recoil from intentionally harming animals was a hindrance to how I understood scientific progress. I told myself that we were being responsible by providing good nutrition, safe cages, skilled and caring caretakers and veterinarians for the animals — and, crucially, that what we stood to learn outweighed any momentary or prolonged anguish these animals might experience. The potential for a medical breakthrough, the excitement of research and discovering whether my hypotheses were correct — and let’s not leave out smoldering ambition — made my transition to a more “rigorous” stance easier than I could have imagined.

One of my areas of study focused on the effects of early social deprivation on the intellectual abilities of rhesus monkeys. We kept young, intelligent monkeys separated from their families and others of their kind for many months in soundproof cages that remained lit 24 hours a day, then measured how their potential for complex social and intellectual lives unraveled. All the while, I comforted myself with the idea that these monkeys were my research partners, and that by creating developmental disorders in monkeys born in a lab, we could better understand these disorders in humans.

But it was impossible to fully quell my repugnance at all that I continued to witness and to inflict. At the same time, in the classroom, I began to face questions from students who had become increasingly concerned about the predicament of lab animals. [Continue reading…]


It is not what you believe, but what you do that matters

Steven Nadler writes: In July 1656, the 23-year-old Bento de Spinoza was excommunicated from the Portuguese-Jewish congregation of Amsterdam. It was the harshest punishment of herem (ban) ever issued by that community. The extant document, a lengthy and vitriolic diatribe, refers to the young man’s ‘abominable heresies’ and ‘monstrous deeds’. The leaders of the community, having consulted with the rabbis and using Spinoza’s Hebrew name, proclaim that they hereby ‘expel, excommunicate, curse, and damn Baruch de Spinoza’. He is to be ‘cast out from all the tribes of Israel’ and his name is to be ‘blotted out from under heaven’.

Over the centuries, there have been periodic calls for the herem against Spinoza to be lifted. Even David Ben-Gurion, when he was prime minister of Israel, issued a public plea for ‘amending the injustice’ done to Spinoza by the Amsterdam Portuguese community. It was not until early 2012, however, that the Amsterdam congregation, at the insistence of one of its members, formally took up the question of whether it was time to rehabilitate Spinoza and welcome him back into the congregation that had expelled him with such prejudice. There was, though, one thing that they needed to know: should we still regard Spinoza as a heretic?

Unfortunately, the herem document fails to mention specifically what Spinoza’s offences were – at the time he had not yet written anything – and so there is a mystery surrounding this seminal event in the future philosopher’s life. And yet, for anyone who is familiar with Spinoza’s mature philosophical ideas, which he began putting in writing a few years after the excommunication, there really is no such mystery. By the standards of early modern rabbinic Judaism – and especially among the Sephardic Jews of Amsterdam, many of whom were descendants of converso refugees from the Iberian Inquisitions and who were still struggling to build a proper Jewish community on the banks of the Amstel River – Spinoza was a heretic, and a dangerous one at that.

What is remarkable is how popular this heretic remains nearly three and a half centuries after his death, and not just among scholars. Spinoza’s contemporaries, René Descartes and Gottfried Leibniz, made enormously important and influential contributions to the rise of modern philosophy and science, but you won’t find many committed Cartesians or Leibnizians around today. The Spinozists, however, walk among us. They are non-academic devotees who form Spinoza societies and study groups, who gather to read him in public libraries and in synagogues and Jewish community centres. Hundreds of people, of various political and religious persuasions, will turn out for a day of lectures on Spinoza, whether or not they have ever read him. There have been novels, poems, sculptures, paintings, even plays and operas devoted to Spinoza. This is all a very good thing.

It is also a very curious thing. Why should a 17th-century Portuguese-Jewish philosopher whose dense and opaque writings are notoriously difficult to understand incite such passionate devotion, even obsession, among a lay audience in the 21st century? Part of the answer is the drama and mystery at the centre of his life: why exactly was Spinoza so harshly punished by the community that raised and nurtured him? Just as significant, I suspect, is that everyone loves an iconoclast – especially a radical and fearless one who suffered persecution in his lifetime for ideas and values that are still so important to us today. Spinoza is a model of intellectual courage. Like a prophet, he took on the powers-that-be with an unflinching honesty that revealed ugly truths about his fellow citizens and their society. [Continue reading…]


A week from hell

Charles Blow writes: This was yet another week that tore at the very fiber of our nation.

After two videos emerged showing the gruesome killings of two black men by police officers, one in Baton Rouge, La., and the other in Falcon Heights, Minn., a black man shot and killed five officers in a cowardly ambush at an otherwise peaceful protest and wounded nine more people. The Dallas police chief, David O. Brown, said, “He was upset about Black Lives Matter” and “about the recent police shootings” and “was upset at white people” and “wanted to kill white people, especially white officers.”

We seem caught in a cycle of escalating atrocities without an easy way out, without enough clear voices of calm, without tools for reduction, without resolutions that will satisfy.

There is so much loss and pain. There are so many families whose hearts hurt for a loved one needlessly taken, never to be embraced again.

There is so much disintegrating trust, so much animosity stirring.

So many — too many — Americans now seem to be living with an ambient terror that someone is somehow targeting them. [Continue reading…]


Trump’s unchristian spirit

Peter Wehner writes: Since Donald Trump assures us that the Bible is his favorite book, it’s worth asking: Just what is his theology?

After Mr. Trump met with hundreds of evangelical Christians a couple of weeks ago, James Dobson, who is among the most influential leaders in the evangelical world and serves on Mr. Trump’s evangelical executive advisory board, declared that “Trump appears to be tender to things of the Spirit,” by which Dr. Dobson meant the Holy Spirit.

Of all the descriptions of Mr. Trump we’ve heard this election season, this may be the most farcical. As described by St. Paul, the “fruit of the Spirit” includes forbearance, kindness, goodness, faithfulness, gentleness and self-control, hardly qualities one associates with Mr. Trump. It shows you the lengths Mr. Trump’s supporters will go to in order to rationalize their enthusiastic support of him.

Dr. Dobson is not alone. Jerry Falwell Jr., the president of Liberty University, has praised Mr. Trump’s life as in many ways exemplary and said that he believes that “Donald Trump is God’s man to lead our nation.” Eric Metaxas, who has written popular biographies of William Wilberforce and Dietrich Bonhoeffer, has rhapsodized about Mr. Trump and argued that Christians “must” vote for him because he is “the last best hope of keeping America from sliding into oblivion.” [Continue reading…]


‘Gene drives’ that tinker with evolution are an unknown risk, researchers say

MIT Technology Review reports: With great power — in this case, a technology that can alter the rules of evolution — comes great responsibility. And since there are “considerable gaps in knowledge” about the possible consequences of releasing this technology, called a gene drive, into natural environments, it is not yet responsible to do so. That’s the major conclusion of a report published today by the National Academies of Sciences, Engineering, and Medicine.

Gene drives hold immense promise for controlling or eradicating vector-borne diseases like Zika virus and malaria, or in managing agricultural pests or invasive species. But the 200-page report, written by a committee of 16 experts, highlights how ill-equipped we are to assess the environmental and ecological risks of using gene drives. And it provides a glimpse at the challenges they will create for policymakers.

The technology is inspired by natural phenomena through which particular “selfish” genes are passed to offspring at a higher rate than is normally allowed by nature in sexually reproducing organisms. There are multiple ways to make gene drives in the lab, but scientists are now using the gene-editing tool known as CRISPR to very rapidly and effectively do the trick. Evidence in mosquitoes, fruit flies, and yeast suggests that this could be used to spread a gene through nearly 100 percent of a population.

The possible ecological effects, intended or not, are far from clear, though. How long will gene drives persist in the environment? What is the chance that an engineered organism could pass the gene drive to an unintended recipient? How might these things affect the whole ecosystem? How much does all this vary depending on the particular organism and ecosystem?

Research on the molecular biology of gene drives has outpaced ecological research on how genes move through populations and between species, the report says, making it impossible to adequately answer these and other thorny questions. Substantially more laboratory research and confined field testing is needed to better grasp the risks. [Continue reading…]

Jim Thomas writes: If there is a prize for the fastest-emerging tech controversy of the century, the ‘gene drive’ may have just won it. In under eighteen months the sci-fi concept of a ‘mutagenic chain reaction’ that can drive a genetic trait through an entire species (and maybe eradicate that species too) has gone from theory to published proof of principle to massively-shared TED talk (apparently an important step these days) to the subject of a high-profile US National Academy of Sciences study – complete with committees, hearings, public inputs and a glossy 216-page report release. Previous technology controversies have taken anywhere from a decade to over a century to reach that level of policy attention. So why were gene drives put on the turbo track to science academy report status? One word: leverage.

What a gene drive does is simple: it ensures that a chosen genetic trait will reliably be passed on to the next generation and every generation thereafter. This overcomes normal Mendelian genetics where a trait may be diluted or lost through the generations. The effect is that the engineered trait is driven through an entire population, re-engineering not just single organisms but enforcing the change in every descendant – re-shaping entire species and ecosystems at will.
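That overriding of Mendelian odds is easy to put in numbers. Under random mating, an allele that heterozygotes pass on with the usual probability 1/2 stays at whatever frequency it starts at; a drive allele transmitted with probability (1 + c)/2, where c is the homing (conversion) rate, grows by a factor of (1 + qc) each generation. A minimal sketch in Python — the function names and the illustrative 95 percent homing rate are assumptions for the example, not figures from the report:

```python
def next_freq(p, c):
    """Drive-allele frequency after one generation of random mating.

    p: current frequency of the drive allele
    c: homing (conversion) rate in heterozygotes;
       c = 0 is ordinary Mendelian inheritance, c near 1 a CRISPR-style drive.
    Heterozygotes transmit the drive allele with probability (1 + c) / 2,
    which works out to p' = p * (1 + (1 - p) * c).
    """
    return p * (1.0 + (1.0 - p) * c)

def generations_until(threshold, p0, c, max_gen=1000):
    """Generations until the drive-allele frequency reaches `threshold`."""
    p, gen = p0, 0
    while p < threshold and gen < max_gen:
        p = next_freq(p, c)
        gen += 1
    return gen

# A 1 percent release of the engineered allele:
mendelian = generations_until(0.99, 0.01, c=0.0)   # never spreads (hits max_gen)
drive = generations_until(0.99, 0.01, c=0.95)      # sweeps in roughly ten generations
```

With c = 0 the release frequency never moves, which is why an ordinary transgene stays rare; with c = 0.95 the same 1 percent release passes 99 percent within about ten generations — the “nearly 100 percent of a population” spread the excerpts describe. Real drives also face fitness costs and resistance alleles, which this sketch ignores.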

It’s a perfect case of a very high-leverage technology. Archimedes famously said “Give me a lever long enough and a fulcrum on which to place it, and I shall move the world.” Gene drive developers are in effect saying “Give me a gene drive and an organism to put it in and I can wipe out species, alter ecosystems and cause large-scale modifications.” Gene drive pioneer Kevin Esvelt calls gene drives “an experiment where if you screw up, it affects the whole world”. [Continue reading…]


Should chimps be considered people under the law?

Jay Schwartz writes: In late 2013, the Nonhuman Rights Project (NhRP) filed a first-ever lawsuit to free a pet chimpanzee named Tommy from the inadequate conditions provided by his owner. The NhRP, a legal group focused on animal protection, argued that Tommy is an autonomous being who is held against his will and that he is entitled to a common-law writ of habeas corpus, a legal means of determining the legality of imprisonment. Granting habeas corpus to a chimpanzee would mean viewing chimpanzees as legal persons with rights, rather than as mere things, so this case was rather controversial.

The Tommy case came to a close on December 4, 2014, when a five-judge panel of the Appellate Division of the New York State Supreme Court ruled against the NhRP. (The state’s highest court is the Court of Appeals.) Justice Karen K. Peters, the presiding judge, wrote: “Needless to say, unlike human beings, chimpanzees cannot bear any legal duties, submit to societal responsibilities or be held legally accountable for their actions. In our view, it is this incapability to bear any legal responsibilities and societal duties that renders it inappropriate to confer upon chimpanzees the legal rights … that have been afforded to human beings.” [Continue reading…]

A lot of people will regard the effort to confer legal rights on non-humans as being driven by anthropomorphism. But consider the court’s argument. Could not the exact same line of reasoning be used to argue that small children or adults with developmental disabilities should be deprived of legal rights? Of course, such an argument would rightly be decried as inhuman and barbaric.


The citizen soldier

Phil Klay writes: I can’t say that I joined the military because of 9/11. Not exactly. By the time I got around to it the main U.S. military effort had shifted to Iraq, a war I’d supported though one which I never associated with al-Qaida or Osama bin Laden. But without 9/11, we might not have been at war there, and if we hadn’t been at war, I wouldn’t have joined.

It was a strange time to make the decision, or at least, it seemed strange to many of my classmates and professors. I raised my hand and swore my oath of office on May 11, 2005. It was a year and a half after Saddam Hussein’s capture. The weapons of mass destruction had not been found. The insurgency was growing. It wasn’t just the wisdom of the invasion that was in doubt, but also the competence of the policymakers. Then-Secretary of Defense Donald Rumsfeld had been proven wrong about almost every major post-invasion decision, from troop levels to post-war reconstruction funds. Anybody paying close attention could tell that Iraq was spiraling into chaos, and the once jubilant public mood about our involvement in the war, with over 70 percent of Americans in 2003 nodding along in approval, was souring. But the potential for failure, and the horrific cost in terms of human lives that failure would entail, only underscored for me why I should do my part. This was my grand cause, my test of citizenship.

The highly professional all-volunteer force I joined, though, wouldn’t have fit with the Founding Fathers’ conception of citizen-soldiers. They distrusted standing armies: Alexander Hamilton thought Congress should vote every two years “upon the propriety of keeping a military force on foot”; James Madison claimed “armies kept up under the pretext of defending, have enslaved the people”; and Thomas Jefferson suggested the Greeks and Romans were wise “to put into the hands of their rulers no such engine of oppression as a standing army.”

They wanted to rely on “the people,” not on professionals. According to the historian Thomas Flexner, at the outset of the Revolutionary War George Washington had grounded his military thinking on the notion that “his virtuous citizen-soldiers would prove in combat superior, or at least equal, to the hireling invaders.” This was an understandably attractive belief for a group of rebellious colonists with little military experience. The historian David McCullough tells us that the average American Continental soldier viewed the British troops as “hardened, battle-scarred veterans, the sweepings of the London and Liverpool slums, debtors, drunks, common criminals and the like, who had been bullied and beaten into mindless obedience.” [Continue reading…]


There’s no such thing as free will


Stephen Cave writes: For centuries, philosophers and theologians have almost unanimously held that civilization as we know it depends on a widespread belief in free will — and that losing this belief could be calamitous. Our codes of ethics, for example, assume that we can freely choose between right and wrong. In the Christian tradition, this is known as “moral liberty” — the capacity to discern and pursue the good, instead of merely being compelled by appetites and desires. The great Enlightenment philosopher Immanuel Kant reaffirmed this link between freedom and goodness. If we are not free to choose, he argued, then it would make no sense to say we ought to choose the path of righteousness.

Today, the assumption of free will runs through every aspect of American politics, from welfare provision to criminal law. It permeates the popular culture and underpins the American dream — the belief that anyone can make something of themselves no matter what their start in life. As Barack Obama wrote in The Audacity of Hope, American “values are rooted in a basic optimism about life and a faith in free will.”

So what happens if this faith erodes?

The sciences have grown steadily bolder in their claim that all human behavior can be explained through the clockwork laws of cause and effect. This shift in perception is the continuation of an intellectual revolution that began about 150 years ago, when Charles Darwin first published On the Origin of Species. Shortly after Darwin put forth his theory of evolution, his cousin Sir Francis Galton began to draw out the implications: If we have evolved, then mental faculties like intelligence must be hereditary. But we use those faculties — which some people have to a greater degree than others — to make decisions. So our ability to choose our fate is not free, but depends on our biological inheritance.

Galton launched a debate that raged throughout the 20th century over nature versus nurture. Are our actions the unfolding effect of our genetics? Or the outcome of what has been imprinted on us by the environment? Impressive evidence accumulated for the importance of each factor. Whether scientists supported one, the other, or a mix of both, they increasingly assumed that our deeds must be determined by something. [Continue reading…]


The importance of bearing witness to Syria’s war

Caitlin L Chandler writes: The wound in the middle of the man’s arm is a large circle of ribbed red blood and tendon, the edges of the flesh curving up and out like petals. Metal prongs inserted into the skin on either side of the wound hold his arm in place; his chest rises and falls with each breath.

To observe surgery up close is at first disorienting and surreal, like watching a perfectly shot film in the cinema and then walking through the screen to realize there is no such thing – only machines and people creating the images that were streaming as reality before you. I do not normally see bodies in disrepair that are being painstakingly stitched back together; I feel like I’m trespassing on something sacred.

The Médecins sans Frontières (Doctors Without Borders) trauma hospital in Ramtha, Jordan, is located 3 miles (5km) from the Syrian border. From the roof of MSF’s house, where its international staff live, you can see the blue-green hills that demarcate the border, stretching between the two countries like a gentle wave. Once you could have strolled in those hills, or laid down in the grass and daydreamed.

At night on the roof the lights of nearby Irbid glitter, some of them marking the apartments where thousands of urban refugees dwell, waiting for the war to end, for resettlement or for the chance to try to make their way across the ocean to Europe. Everyone is waiting – to see whether the fragile cease-fire matters and whether the E.U. will continue to turn its back on the men, women and children drowning in its seas. In Ramtha, they are waiting for a medical outcome.

As a humanitarian affairs officer with MSF, part of my job is to understand and construct here in Jordan how MSF practices témoignage – the act of witnessing. Since MSF’s inception, witnessing has been intertwined with the organization’s medical activities, which often occur in contexts where MSF is one of the only organizations to see at first hand the effects of conflict and disaster. It is what always set MSF apart for me from other organizations; although I know witnessing can be an imperfect offering, it at least implies responsibility.

To witness the effects of war is to witness what you cannot change; it is to observe mutilated bodies and sense the dislocation that comes with forced exile. It is to reflect on what meaning you can make in life when you are here, alive, and so many others are dead. For years I did not want to do an MSF mission because I thought it was not the real world, that it would change me in ways I did not want, but now I know that was fear. On the other side of fear is the world; it is still there, whether you choose to look at it or not. [Continue reading…]


Why it’s impossible to actually be a vegetarian

By Andrew Smith, Drexel University

In case you’ve forgotten the section on the food web from high school biology, here’s a quick refresher.

Plants form the base of every food chain in the food web (also called the food cycle). Plants use available sunlight to convert water from the soil and carbon dioxide from the air into glucose, which gives them the energy they need to live. Unlike plants, animals can’t synthesize their own food. They survive by eating plants or other animals.
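That refresher can be summed up in the standard net photosynthesis equation (textbook chemistry, not a formula from the article itself):

```latex
% Net photosynthesis: six molecules each of carbon dioxide and water,
% plus light energy, yield one glucose molecule and six of oxygen.
\[
  6\,\mathrm{CO_2} + 6\,\mathrm{H_2O}
  \xrightarrow{\ \text{light}\ }
  \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}
\]
```

Respiration runs the same reaction in reverse, which is how animals — and plants themselves — recover the stored energy.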

Clearly, animals eat plants. What’s not so clear from this picture is that plants also eat animals. They thrive on them, in fact (just Google “fish emulsion”). In my new book, “A Critique of the Moral Defense of Vegetarianism,” I call it the transitivity of eating. And I argue that this means one can’t be a vegetarian.

[Read more…]


American inquisition: Training teachers to extract confessions from their students


Douglas Starr writes: About a year and a half ago, Jessica Schneider was handed a flyer by one of her colleagues in the child-advocacy community. It advertised a training session, offered under the auspices of the Illinois Principals Association (I.P.A.), in how to interrogate students. Specifically, teachers and school administrators would be taught an abbreviated version of the Reid Technique, which is used across the country by police officers, private-security personnel, insurance-fraud investigators, and other people for whom getting at the truth is part of the job. Schneider, who is a staff attorney at the Chicago Lawyers’ Committee for Civil Rights Under Law, was alarmed. She knew that some psychologists and jurists have characterized the technique as coercive and liable to produce false confessions — especially when used with juveniles, who are highly suggestible. When she expressed her concerns to Brian Schwartz, the I.P.A.’s general counsel, he said that the association had been offering Reid training for many years and found it both popular and benign. To prove it, he invited Schneider to attend a session in January of 2015.

The training was led by Joseph Buckley, the president of John E. Reid and Associates, which is based in Chicago. Like the adult version of the Reid Technique, the school version involves three basic parts: an investigative component, in which you gather evidence; a behavioral analysis, in which you interview a suspect to determine whether he or she is lying; and a nine-step interrogation, a nonviolent but psychologically rigorous process that is designed, according to Reid’s workbook, “to obtain an admission of guilt.” Most of the I.P.A. session, Schneider told me, focussed on behavioral analysis. Buckley described to trainees how patterns of body language — including slumping, failing to look directly at the interviewer, offering “evasive” responses, and showing generally “guarded” behaviors — could supposedly reveal whether a suspect was lying. (Some of the cues were downright mythological — like, for instance, the idea that individuals look left when recalling the truth and right when trying to fabricate.) Several times during the session, Buckley showed videos of interrogations involving serious crimes, such as murder, theft, and rape. None of the videos portrayed young people being questioned for typical school misbehavior, nor did any of the Reid teaching materials refer to “students” or “kids.” They were always “suspects” or “subjects.”

Laura Nirider, a professor of law at Northwestern University and the project director of the Center on Wrongful Convictions of Youth, attended the same session as Schneider. She told me that about sixty people were there. “Everybody was on the edge of their seat: ‘So this is how we can learn to get the drop on little Billy for writing graffiti on the underside of the lunchroom table,’” she said. One vice-principal told Nirider that the first thing he does when he interrogates students is take away their cell phones, “so they can’t call their mothers.” [Continue reading…]


How much does it matter whether God exists?

Nathan Schneider writes: Two rooms, in two different cities, but pretty much the same scene: one man stands before a few dozen supporters, many of them middle-aged white males, plus a smaller, precocious cohort in early adulthood. As the man speaks, they interrupt him with good, earnest, detailed questions, which he ably answers more or less to their satisfaction. These crowds crave the intricacies of arguments and the upshots of science. The only thing that seems beyond their ken is how their counterparts in the other room could be convinced of something so wrong.

One of those rooms was in New York City, high in an office building overlooking the ruins that then still remained of the World Trade Center; the man was Richard Dawkins, the Oxford zoologist and ‘New Atheist’ polemicist. The man in the other room was his arch-rival, the evangelical Christian philosopher and debater William Lane Craig, speaking in a classroom on the sprawling campus of his megachurch in Marietta, Georgia. If one were to attend both events without understanding English, it would be hard to know the difference.

Whether such a thing as God exists is one of those questions that we use to mark our identities, choose our friends, and divide our families. But there are also moments when the question starts to seem suspect, or only partly useful. Once, backstage before a sold-out debate at the University of Notre Dame between Craig and Sam Harris, Dawkins’s fellow New Atheist, I heard an elderly Catholic theologian approach Harris and spit out: ‘I agree with you more than I do with that guy!’

During the heyday of the New Atheist movement, a few years after the terrorist attacks of 11 September 2001, I was in the wake of a teenage conversion to Catholicism. One might think that my convert’s zeal would pit me squarely against the New Atheist camp. But it didn’t. Really, neither side of the does-God-exist debates seemed to represent me, and the arguments in question had little to do with my embrace of my new-found faith. I had been drawn by the loosey-goosey proposition that love can conquer hate and death, expressed concretely in the lives of monks I had briefly lived among and members of the Catholic Worker Movement who shared their homes with the homeless and abandoned. I actually agreed with most of what the New Atheists wrote about science and free enquiry; what I disagreed most sorely with them about was their hawkish support for military invasions in Muslim-majority countries. [Continue reading…]


The people whose lives are controlled by machines


Kao Kalia Yang writes: My life in America has been a series of days spent within the confines of factories. For the last twenty-two years, I have worked with machines. Since we came to this country I have worked for three different companies. I was an assembler in a company that made coolant systems for cars. I was a general machinist for a second company that made wooden plaques and metal awards. With the most recent company, I was a second-shift polisher for different components that are used in industries such as canning and oil drilling. There have been moments in each of these jobs when my supervisors said in different ways, ‘Bee, you are not here to talk to me. You are here to talk to machines.’

In America, my voice is only powerful within our home. The moment I exit our front door and enter the paved roads, my deep voice loses its volume and its strength. When I speak English, I become like a leaf in the wind. I cannot control the direction my words will fly in the ear of the other person. I try to soften my landing in the language by leaving pauses between each word. I wrestle with my accent until it is a line of breath in the tightness of my throat. I greet people. I ask for directions. I say thank you. I say goodbye. I only speak English at work when it is necessary. I don’t like the weakness of my voice in English, but what I struggle with most is the weakness of my words.

In Hmong, my children hear so much of my words that sometimes I know they become heavy with the meaning I want to impart. I tell my children that my work in America is not important, but I work hard so that one day their work will be. I tell them that my big dream is for one of them to become an international human rights lawyer and bring justice to stories and lives like ours. I want one son or daughter to cross over the petty barriers erected by nations and states and stand firm for those who do not belong to these definitions. [Continue reading…]


How psychology can help us solve climate change

This piece has been taken down at the request of The Conversation.


To what extent might Stephen Hawking and Elon Musk be right about the dangers of artificial intelligence?


Suzanne Sadedin, an evolutionary biologist, writes: I think they are right that AI is dangerous, and they are dangerously wrong about why. I see two fairly likely futures.

Future 1: AI destroys itself, humanity and most or all life on earth, probably a lot sooner than in 1000 years.

Future 2: Humanity radically restructures its institutions to empower individuals, probably via transhumanist modification that effectively merges us with AI. We go to the stars.

Right now, we are headed for Future 1, but we could change this. Much as I admire Elon Musk, his plan to democratise AI actually makes Future 1 more, not less, likely.

Here’s why:

There’s a sense in which humans are already building a specific kind of AI; indeed, we’ve been gradually building it for centuries. This kind of AI consists of systems that we construct and endow with legal, real-world power. These systems create their own internal structures of rules and traditions, while humans perform fuzzy brain-based tasks specified by the system. The system as a whole can act with an appearance of purpose, intelligence and values entirely distinct from anything exhibited by its human components.

All nations, corporations and organisations can be considered as this kind of AI. I realise at this point it may seem like I’m bending the definition of AI. To be clear, I’m not suggesting organisations are sentient, self-aware or conscious, but simply that they show emergent, purpose-driven behaviour equivalent to that of autonomous intelligent agents. For example, we talk very naturally about how “the US did X”, and that means something entirely different from “the people of the US did X” or “the president of the US did X”, or even “the US government did X”.

These systems can be entirely ruthless toward individuals (just check the answers to “What are some horrifying examples of corporate evil/greed?” and “What are the best examples of actions that are moral, even uplifting, but illegal?” if you don’t believe me). Such ruthlessness is often advantageous — even necessary, because these systems exist in a competitive environment. They compete for human effort, involvement and commitment. Money and power. That’s how they survive and grow. New organisations, and less successful ones, copy the features of dominant organisations in order to compete. This places them under Darwinian selection, as Milton Friedman noted long ago.

Until recently, however, organisations have always relied upon human consent and participation; human brains always ultimately made the decisions, whether it was a decision to manufacture 600 rubber duckies or drop a nuclear bomb. So their competitive success has been somewhat constrained by human values and morals; there are not enough Martin Shkrelis to go around.

With the advent of machine learning, this changes. We now have algorithms that can make complex decisions better and faster than any human, about practically any specific domain. They are being applied to big data problems far beyond human comprehension. Yet these algorithms are still stupid in some ways. They are designed to optimise specific parameters for specific datasets, but they’re oblivious to the complexity of the real-world, long-term ramifications of their choices. [Continue reading…]
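Sadedin’s point about optimisers being blind to anything outside their objective can be made concrete with a toy sketch (my illustration, not hers — the policy names and numbers are invented for the example): an optimizer handed a single metric will happily pick the option with the worst side effects, because those side effects are simply not in its objective.

```python
# Toy illustration: an optimizer that maximizes one measured parameter
# and is structurally blind to a harm that the parameter does not capture.

# Each candidate policy: (name, measured_profit, unmeasured_harm).
# The harm column exists in the world, but not in the objective function.
policies = [
    ("cautious",   1.0,  0.0),
    ("aggressive", 3.0,  5.0),
    ("reckless",   4.0, 50.0),
]

# The optimizer sees only the parameter it was told to optimize.
best = max(policies, key=lambda p: p[1])

print(best[0])  # prints "reckless": highest profit; the harm is invisible
```

However sophisticated the search procedure, nothing in it can object to the unmeasured column — which is the essay’s point about real-world, long-term ramifications.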
