Turkish Prime Minister Recep Tayyip Erdogan, in a speech on Wednesday at the UN’s Alliance of Civilizations forum, said:
Unfortunately the modern world has not passed the test when it comes to Syria. In the last two years, we have seen close to 70,000 people lose their lives, and every single day we see innocent children, women, civilians, killed. And the fact that the world has not reacted to this situation seriously injures the sense of justice. In the same way, rising racism in Europe is a serious, problematic area, vis-à-vis the Alliance of Civilizations project.
In addition to indifference vis-à-vis the Muslim countries, we also see harsh, offending, insulting behavior towards Muslims who live in countries other than their own, and this continues to be an unconscionable act that has been ongoing around the world. We should be striving to better understand the beliefs of others but instead we see that people act based on prejudice and exclude others and despise them. And that is why it is necessary that we must consider — just like Zionism or anti-Semitism or fascism — Islamophobia as a crime against humanity.
Not surprisingly, the only part of that statement to have drawn attention is its equation of Zionism with fascism. That’s a claim that never goes down well.
Does condemning Zionism make you anti-Semitic? Not in Erdogan’s mind, it seems. That’s not a question for me to parse, but it’s worth noting that a lot of people seem to perceive any condemnation of Zionism as a condemnation of, if not all Jews, then certainly the ones living in Israel.
Not a question to parse? Meaning, that’s not territory into which a humble blogger at the Washington Post wants to venture. But let’s be clear: this really isn’t such a perilous issue that it can’t be clarified with a few facts.
Firstly, it might come as news to Fisher and some others, but a significant proportion of Jewish Israelis are not Zionists. Neither of course are the 20% of Israel’s population who are not Jewish.
But what is Zionism? The neatest definition I’ve heard came from an American rabbi at J Street: Zionism means having a country where Jews are “in charge.” (The rabbi describing Zionism this way seemed to think it perfectly reasonable that many Jews would want to live in a country run by Jews.) Liberal Zionists like to characterize this as a form of self-determination, a desire for Jews not to be ruled over by non-Jews. But this skirts around the utterly obvious and inevitable consequence of Jewish rule: that it involves non-Jews being ruled over by Jews. In other words, it is the practice of Jewish supremacy.
Zionists never tire of warning about “demographic threats.” In the simplest terms, the demographic threat would become insurmountable if Jews became a minority in the territory controlled by the state of Israel. Still, the demographic threat looms large even before that point is reached.
What this concern with a demographic threat makes clear is that Zionism is untenable in a state where Jews and non-Jews are treated as equals.
Demography hinges on numbers. How large does the Jewish majority need to be to sustain a Jewish state, and how many of them need to be Zionists?
Does Zionism’s intrinsic refusal to treat human beings as equals constitute a crime against humanity? I’m not sure, because that’s a technical term with a legal application. What should be beyond debate is that Zionism is a form of racism.
Yet Zionism enjoys a unique status: anyone who criticizes it gets swiftly vilified by the Western political and media establishment and few people even have the courage to question its meaning.
Google co-founder Sergey Brin recently showed off Google Glass, the company’s hands-free, voice-activated augmented-reality headset.
TED Blog: To take a picture while you’re wearing Glass, say “take picture.” Done.
“When we started Google 15 years ago,” Brin says, “my vision was that information would come to you as you need it. You wouldn’t have to have a search query at all.” But for now, we get information by disconnecting from other people, looking down into our smartphone. Brin asks: “Is this the way you’re meant to interact with other people?” Is the future of connection just people walking around hunched up, looking down, rubbing a featureless piece of glass? In an intimate moment, he says, “It’s kind of emasculating. Is this what you’re meant to do with your body?”
Working on this project, Brin says, was revealing: “I have a nervous tic. The cell phone is a nervous habit. If I smoked, I’d probably smoke instead. It’d look cooler. But I whip this out and look as if I have something important to do. It really opened my eyes to how much of my life I spent secluding myself away in email.”
Emasculating? That’s a telling choice of word coming from one of the leaders of the boys’ world of Silicon Valley.
Are people meant to interact with each other through handheld devices, Brin asks. That sounds like a reasonable question — but not if the answer is that we’d be better off using devices attached to our heads.
Brin might be able to strike a manly cyborg pose, but just as I prefer not to talk to people who hide behind dark glasses, I would find it hard to fully engage with someone whose attention is simultaneously being drawn to passing digital interlopers.
“He doesn’t seem to be all here,” used to be a way of describing loss of sanity. Now it’s the new normal, thanks to the ever-expanding means through which we can conjure illusions of connection.
Presumably Google hopes that in its pursuit of world domination, Google Glass will be the tool that allows the search giant to leap ahead of Apple, while its arch-rival remains trapped in the emasculating world of handheld devices.
No doubt smartphones separate individuals from the people around them by providing access to a social world untethered from space and time. But the separation results from being disembodied — not because the vehicle of disembodiment is attached to the hand instead of the head.
At the same time, behind this innovation — like so many other purported advances — there is a false governing premise: easier is better.
The smarter Google gets, the dumber we become.
This isn’t just a Luddite sentiment; it’s what Google itself has discovered. Google learns how to improve its search algorithms by studying the way people use its software, but it turns out that this creates a kind of negative feedback loop. The better Google becomes in figuring out what people are looking for, the clumsier people become in framing their questions.
Tim Adams talked to Amit Singhal, head of Google Search, who described how Google can learn from user behavior and infer the quality of search results based on the speed and frequency with which a user returns to a search results page.
I imagine, I say, that along the way he has been assisted in this work by the human component. Presumably we have got more precise in our search terms the more we have used Google?
He sighs, somewhat wearily. “Actually,” he says, “it works the other way. The more accurate the machine gets, the lazier the questions become.”
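The behavioral signal Singhal describes, judging a search result by how quickly a user bounces back to the results page, can be sketched in a few lines. This is a toy illustration only: the event format, the 30-second threshold, and the scoring rule are invented for the example and bear no relation to Google’s actual ranking signals.

```python
def score_results(click_events, bounce_threshold=30.0):
    """Estimate per-result satisfaction from return-to-results timing.

    click_events: list of (result_id, seconds_until_return) pairs, where
    seconds_until_return is None if the user never came back to the
    results page. A click counts as "satisfied" if the user stayed away
    longer than bounce_threshold seconds, or never returned at all.
    Returns a satisfaction score per result in [0, 1].
    """
    scores = {}
    for result_id, seconds in click_events:
        satisfied = seconds is None or seconds > bounce_threshold
        hits, total = scores.get(result_id, (0, 0))
        scores[result_id] = (hits + (1 if satisfied else 0), total + 1)
    return {rid: hits / total for rid, (hits, total) in scores.items()}

sessions = [
    ("page_a", 4.0),    # quick bounce back: likely a poor result
    ("page_a", None),   # never returned: likely satisfied
    ("page_b", 120.0),  # long absence: likely satisfied
    ("page_b", None),
]
print(score_results(sessions))  # {'page_a': 0.5, 'page_b': 1.0}
```

The point of the sketch is that the machine learns entirely from user behavior, which is exactly why lazier questions degrade the signal: sloppy queries produce noisy bounce patterns for the algorithm to learn from.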
Is it possible to connect two brains in such a way that the thoughts in one could control those in the other?
That sounds like a question whose answer would have interested Hitler and Stalin. Wouldn’t this be every dictator’s dream: not only the capacity to control how others behave but also how they think?
On the other hand, if you’re a research neuroscientist and you want to conduct experiments in mind control, all you need do to allay public fears is to say that what you hope to discover is going to help paralyzed people walk again. Whenever there’s a hint of Dr Frankenstein, it’s always good to invoke Jesus.
The New York Times reports: In an experiment that sounds straight out of a science fiction movie, a Duke neuroscientist has connected the brains of two rats in such a way that when one moves to press a lever, the other one does, too — most of the time.
The neuroscientist, Miguel Nicolelis, known for successfully demonstrating brain-machine connections, like the one in which a monkey controlled a robotic arm with its thoughts, said this was the first time one animal’s brain had been linked to another.
The question, he said, was: “Could we fool the brain? Could we make the brain process signals from another body?” The answer, he said, was yes.
He and other scientists at Duke, and in Brazil, published the results of the experiment in the journal Scientific Reports. The work received mixed reviews from other scientists, ranging from “amazing” to “very simplistic.”
Much of Dr. Nicolelis’s work is directed toward creating a full exoskeleton that a paralyzed person could operate with brain signals. Although this experiment is not directly related, he said, it helps refine the ability to read and translate brain signals, an important part of all prosthetic devices connected to the brain, and an area in which brain science is making great advances.
He also speculated about the future possibility of a biological computer, in which numerous brains are connected, and views this as a small step in that direction.
Stuxnet command-and-control servers were camouflaged behind a website for a nonexistent advertising agency called Media Suffix in 2005.
The discovery that an early version of Stuxnet was in development in 2005 suggests that work on the computer worm may have begun soon after the U.S. received Libya’s P-1 centrifuges in January 2004.
In September 2005, Dennis Ruddy, a general manager at the Department of Energy’s Oak Ridge nuclear facilities, said: “There’s a lot of interest in the things that we brought back from Libya because a lot of them, looking at them, measuring the tolerances, setting them up and operating them, to a certain extent tells us how close people are to be able to get a system that can work all the way to bomb-grade material.”
Within two weeks of Ruddy’s statement appearing in the Knoxville News Sentinel, he had been relieved of his duties and lost his security clearance.
Ars Technica: Researchers have uncovered a never-before-seen version of Stuxnet. The discovery sheds new light on the evolution of the powerful cyberweapon that made history when it successfully sabotaged an Iranian uranium-enrichment facility in 2009.
Stuxnet 0.5 is the oldest known version of the computer worm and was in development no later than November of 2005, almost two years earlier than previously known, according to researchers from security firm Symantec. The earlier iteration, which was in the wild no later than November 2007, wielded an alternate attack strategy that disrupted Iran’s nuclear program by surreptitiously closing valves in that country’s Natanz uranium enrichment facility. Later versions scrapped that attack in favor of one that caused centrifuges to spin erratically. The timing and additional attack method are a testament to the technical sophistication and dedication of its developers, who reportedly developed Stuxnet under a covert operation sponsored by the US and Israeli governments. It was reportedly personally authorized by Presidents Bush and Obama.
Also significant, version 0.5 shows that its creators were some of the same developers who built Flame, the highly advanced espionage malware also known as Flamer that targeted sensitive Iranian computers. Although researchers from competing antivirus provider Kaspersky Lab previously discovered a small chunk of the Flame code in a later version of Stuxnet, the release unearthed by Symantec shows that the code sharing was once so broad that the two covert projects were inextricably linked.
“What we can conclude from this is that Stuxnet coders had access to Flamer source code, and they were originally using the Flamer source code for the Stuxnet project,” said Liam O’Murchu, manager of operations for Symantec Security Response. “With version 0.5 of Stuxnet, we can say that the developers had access to the exact same code. They were not just using shared components. They were using the exact same code to build the projects. And then, at some point, the development [of Stuxnet and Flame] went in two different directions.” [Continue reading…]
Ten years after Colin Powell lied to the UN Security Council to help start the war on Iraq, Joe Wilson and Valerie Plame recount some of the events that led to war, but the final line of their commentary is perhaps all they needed to say:
We did not do nearly enough to prevent this tragedy perpetrated on Iraq, on the world, and on ourselves.
On January 28, 2003, President Bush said: “The British government has learned that Saddam Hussein recently sought significant quantities of uranium from Africa.”
Joe Wilson knew at that time that Bush was lying, but he waited until July 6, 2003, before speaking out.
When Valerie Plame heard Powell lying to the UN Security Council, she kept quiet. She didn’t want to lose her job at the CIA.
How many other careerists are there around Washington who, when their consciences told them to speak out, decided to put their material and professional interests first and remain silent, even when as a consequence hundreds of thousands of people ended up losing their lives?
Even before Chuck Hagel had been confirmed as the new United States Secretary of Defense, opponents of his appointment had declared victory, because in their minds victory consisted of much more than preventing him from taking office.
A few days ago at the neoconservative Commentary, Jonathan Tobin wrote:
The pressure put upon Hagel during the lead-up to his confirmation hearing as well as the difficulty he found himself in when questioned by the Senate Armed Services Committee wasn’t merely the usual grind nominees are subjected to. The process reaffirmed a basic truth about the strength of the pro-Israel consensus that was placed in doubt by the president’s choice: support for the alliance with the Jewish state isn’t merely mainstream politics, it is the baseline against which all nominees for high office are measured. [My emphasis.]
That’s an extraordinary statement and all the evidence suggests that it’s true.
For anyone to be considered for high political office in the United States of America, they must first demonstrate their alliance with Israel.
And this isn’t coming from some wild-eyed conspiracy theorist warning about the unfettered power of the Israel lobby. This is coming from the Israel lobby itself, or the “pro-Israel community” as they prefer to be known.
Alliance with Israel isn’t merely mainstream American politics — and the key word here is “mainstream”, which the dictionary defines as “a prevailing current or direction of activity or influence.”
The strength of the Christian Zionist movement notwithstanding, to identify alliance with Israel as mainstream in American politics says much less about the concerns of most Americans than it says about the way Washington works. In other words, the degree to which alliance with Israel is mainstream says far more about the influence of the Israel lobby than anything else.
And to say that alliance with Israel is “the baseline against which all nominees for high office are measured” is to say that Washington has gatekeepers and their overriding concern is not what is good for America but what is good for Israel.
The Hagel opponents who even now are declaring victory see success in the fact that they made their nemesis demean himself and that they have made him weaker.
What they fail to appreciate is that the more transparent they make their agenda, the more resentment they will breed.
Power which was once more effectively exercised in the shadows is now out on open display. And more than anything, this is the power of loudmouths — it is power that can and will be punctured.
Like most Iranians, I didn’t watch the Oscars and I haven’t seen the winner of Best Picture, Argo. And like the attendees of a recent conference in Tehran on “Hollywoodism”, I share the view that the American film industry exerts political influence — it is not just part of the entertainment business.
A New York Times report on the conference quoted Nader Talebzadeh, an Iranian-American filmmaker:
To Mr. Talebzadeh, it was clear that “Argo” was part of a larger plan by the American entertainment industry to remind a younger generation of the 1979 Iranian hostage crisis. “It’s the only example of aggression they have against Iran,” he said. “ ‘Argo’ just tears open the wounds in order to prepare the minds. This movie is no coincidence. Timing matters.”
Ben Affleck probably didn’t set out to demonize Iran and I don’t think Hollywood is quite as ideologically organized as Talebzadeh suggests. Even so, Argo’s producers could hardly have been oblivious to the fact that at a time when Iran is being demonized, it would not be hard to find support for a thriller in which Iranian revolutionaries threaten American lives. And it would not be unreasonable to expect that such support would come from, among others, Zionists. And yet there remain strong taboos around raising the topic of Jews and Hollywood as this year’s Academy Awards ceremony host, Seth MacFarlane, found out.
Seth MacFarlane found himself at the centre of more scandal on Monday in the wake of his controversial hosting of the Oscars.
The Family Guy comedian caused outrage among viewers when his Ted alter-ego took to the stage at Sunday night’s ceremony with Mark Wahlberg, and told his co-star that if he ‘wants to work in this town’ he’s got to be Jewish.
MacFarlane’s Ted then added to Wahlberg: ‘I was born Theodore Shapiro and I would like to donate to Israel and continue to work in Hollywood forever.’
But the gags, which came as the pair presented the award for Best Sound Mixing and Best Sound Editing, weren’t received well by many Jewish rights groups, with the comedian labelled ‘offensive, unfunny and inappropriate’.
Abraham Foxman, National Director of the Anti-Defamation League, said in a statement: ‘While we have come to expect inappropriate “Jews control Hollywood” jokes from Seth MacFarlane, what he did at the Oscars was offensive and not remotely funny.
‘It only reinforces stereotypes which legitimize anti-Semitism. It is sad and disheartening that the Oscars awards show sought to use anti-Jewish stereotypes for laughs.’
Rabbi Marvin Hier, founder and dean of the Simon Wiesenthal Center, added: ‘The Oscars are transmitted to every corner of the globe, even to such places where such hateful myths are believed as fact.
‘Every comedian is entitled to wide latitude, but no one should get a free pass for helping to promote anti-Semitism.’
The old anti-Semitic canards about Jews controlling Hollywood, cavorting in secret cabals and beset by dual loyalties are so shopworn as to no longer be funny. And the jokes are all the more risky coming from someone who isn’t himself part of the given community…
[O]bjecting to the myth that Jews control Hollywood raises serious questions of definition. If anybody can genuinely be said to control Tinseltown, it’s probably the 25 people who run the 12 main film studios — that is, the chairman (in one case, two co-chairmen) and president of each. Of those 25, 21 are Jewish, or 84%. That’s simple math. You could define “control” differently — throw in the top agents and producers, leading directors, most bankable stars and so on — and the proportion of Jews would drop, but it probably wouldn’t get down anywhere near the 50% mark.
The issue in my mind is whether we’re all grownup enough to talk about these things without having pogroms, and I think we are. I’ve written here before that Jewish kinship networks are important professionally; most of my work in journalism has come from Jews with whom I share culture and language (very much the way Jodi Kantor got her job at the New York Times). People have a right to discuss these matters in a critical manner: in the ’60s sociologist E. Digby Baltzell, himself a WASP, helped break down Protestant discrimination against Jews in board rooms and back rooms with a book bewailing discrimination called The Protestant Establishment: Aristocracy and Caste in America. Nick Lemann also ascribed a religious character to that former establishment when he called it “the Episcopacy” in his book on the meritocracy. So — what’s good for the goose… Lately Ron Unz, a Jewish meritocrat himself, published a study, The Myth of the American Meritocracy, saying that the Ivy Leagues, which he calls “the funnel” for the ruling elite, have student bodies that are 25 percent Jewish in some large part because Jews in the college admissions are looking for people like themselves. When he spoke at Yale in January, and a Southern Baptist in the audience questioned him, Unz established that there were two Southern Baptists in the audience, and said they ought to be better represented in the Ivy’s. He believes Jews are empowered and secure enough in a diverse liberal society to have this conversation. So do I.
Did MacFarlane stoke controversy just for alluding to the fact that Jews control Hollywood, or was the line he crossed one that is laid down specifically for gentiles? If as Weiss says, Jews are ready to have this conversation, is this supposed to be a conversation among Jews or can anyone join in?
Ironically, if people like Abe Foxman had a little more humor and sophistication and a lot less appetite to gag their critics, they would have seized on the fact that MacFarlane was free to make his joke — proof, arguably, that Jews don’t control Hollywood.
The Telegraph reports: The apes, which are our closest relatives in the animal kingdom, seem to get the same level of satisfaction out of solving brain teasers as their human evolutionary cousins.
A study published by the Zoological Society of London shows that six chimpanzees who were given a game which involved moving red dice or Brazil nuts through a maze of pipes enjoyed solving the puzzle whether they got a reward or not.
The researchers claim this suggests they got the same kind of psychological reward as humans get when solving problems.
Most problem solving witnessed in the animal kingdom, where animals use tools or navigate mazes, is aimed at reaching food. Hyenas, octopuses and birds such as crows all show the ability to solve problems.
Chimpanzees have also been witnessed in the wild using tools such as a stick to forage for insects or honey in hard to reach places like tree stumps.
But ZSL researcher Fay Clark said their research showed the chimps could be motivated by more than just food.
She said: “We noticed that the chimps were keen to complete the puzzle regardless of whether or not they received a food reward.
“This strongly suggests they get similar feelings of satisfaction to humans who often complete brain games for a feel-good reward.”
Research repeatedly demonstrates that we share more similarities with other primates than we previously recognized, and as I’ve suggested before, this says as much about our preconceptions about human uniqueness as it does about the human-like qualities of our close relatives. Moreover, just as this research indicates that chimps experience a human-like satisfaction in problem-solving, I suspect that in both species the trait relates to something shared by all animate creatures: an interest in discerning order.
Chaos is immobilizing, and the ability to turn in one direction rather than another rests in part on the ability to see patterns and repetition. In pure, pristine perception, every moment would be unique, but in reality the ground of perception is not blank: present is mapped onto past.
Wired: What might dolphins be saying with all those clicks and squeaks? Each other’s names, suggests a new study of the so-called signature whistles that dolphins use to identify themselves.
Whether the vocalizations should truly be considered names, and whether dolphins call to compatriots in a human-like manner, is contested among scientists, but the results reinforce the possibility. After all, to borrow the argot of animal behavior studies, people often greet friends by copying their individually distinctive vocal signatures.
“They use these when they want to reunite with a specific individual,” said biologist Stephanie King of Scotland’s University of St. Andrews. “It’s a friendly, affiliative sign.”
In their new study, published Feb. 19 in Proceedings of the Royal Society B, King and fellow St. Andrews biologist Vincent Janik investigate a phenomenon they first described in 2006: bottlenose dolphins recognizing the signature whistles of other dolphins they know.
Signature whistles are taught to dolphins by their mothers, and the results were soon popularized as evidence of dolphin names. Many questions remained, though, about the whistles’ function, and in particular about the tendency of dolphins to copy each others’ signatures.
Were they simply challenging each other, like birds matching each other’s songs in displays of territorial aggression? Or using the copied signals deceptively, perhaps allowing males to court females guarded by other males? Or was a more information-rich exchange occurring, a back-and-forth between animals who knew each other and were engaging in something like a dialog?
To investigate these possibilities, King and Janik’s team analyzed recordings made over several decades by the Sarasota Dolphin Research Program, a Florida-based monitoring project in which pairs of dolphins are captured and held in separate nets for a few hours as researchers photograph and study them.
During the captures, the dolphins can’t see each other, but can hear each other and continue to communicate. In their analysis, King and Janik showed that some of the communications are copies of captured compatriots’ signature whistles — and, crucially, that the dolphins most likely to make these were mothers and calves or closely allied males.
They seemed to be using the whistles to keep in touch with the dolphins they knew best, just as two friends might if suddenly and unexpectedly separated while walking down a street. Moreover, copying wasn’t exact, but involved modulations at the beginning and end of each call, perhaps allowing dolphins to communicate additional information, such as the copier’s own identity.
That possibility hints at what linguists call referential communication with learned signals, or the use of learned rather than instinctively understood sounds to mentally represent other objects and individuals. As of now, only humans are known to do this naturally. [Continue reading…]
The reluctance to posit human traits in animals — for fear that one might be anthropomorphizing what are intrinsically non-human behaviors — is itself the expression of a prevailing anthropocentric superstition: that human beings are fundamentally different from all other animals.
When it comes to discerning human-like communication in non-human species there is an additional bias: scientific researchers tend to over-emphasize the function language has as a system of symbolic representation and understate its importance as a means for engaging in emotional exchanges.
Even though our understanding of dolphin communication is very rudimentary, I’d be inclined to believe not only that dolphins do call each other by name, but that they are also keenly attuned to the combination of name and tone.
After all, the utterance of an individual’s name generally signifies much less than the way the name is called — unless, that is, one is sitting in a waiting room being hailed by a nameless official. Luckily for dolphins, their exchanges never need to be straitjacketed like that.
Evolution, viewed as a process of progression (which it isn’t), leads to the notion that, as the possessors of the most complex brains, we sit proudly at the top of the evolutionary pyramid. Even if one doesn’t question that it makes sense to assign ourselves this position, there’s no disputing that our tenure has thus far been very brief.
Flies, on the other hand, have been around for about 200 million years, and as Michael Dickinson explains, this has a lot to do with the sophistication of their neurology — which is to say, their ability to do much more with much less.
There are currently about 7,000 languages spoken around the world. It is estimated that by the end of this century as many as 90% of them will have become extinct.
Some people might think that the fewer languages there are spoken, the more readily people will understand each other and that ideally we should all speak the same language. The divisions of Babel would be gone. But as rational as this perspective might sound, it overlooks the degree to which humanity is further impoverished each time a language is lost — each time a unique way of seeing the world vanishes.
To understand the value of language diversity it’s necessary to recognize the ways in which each language serves as a radically different prism through which its speakers engage with life.
Let me give you three of my favorite examples on how speakers of different languages think differently in important ways. I’m going to give you an example from space; how people navigate in space. That ties into how we think about time as well. Second, I’m going to give you an example on color; how we are able to discriminate colors. Lastly, I’m going to give you an example on grammatical gender; how we’re able to discriminate objects. And I might throw in an extra example on causality.
Let’s start with one of my favorite examples, this comes from the work of Steve Levinson and John Haviland, who first started describing languages that have the following amazing property: there are some languages that don’t use words like “left” and “right.” Instead, everything in the language is laid out in absolute space. That means you have to say things like, “There is an ant on your northwest leg,” Or “can you move the cup to the south southeast a little bit?” Now to speak a language like this, you have to stay oriented. You have to always know which way you’re facing. And it’s not just that you have to stay oriented in the moment, all your memories of your past have to be oriented as well, so that you can say things like “Oh, I must have left my glasses to the southwest of the telephone.” That is a memory that you have to be able to generate. You have to have represented your experience in absolute space with cardinal directions.
What Steve Levinson and John Haviland found is that folks who speak languages like this indeed stay oriented remarkably well. There are languages like this around the world; they’re in Australia, they’re in China, they’re in South America. Folks who speak these languages, even young kids, are able to orient really well.
I had the opportunity to work with a group like this in Australia in collaboration with Alice Gaby. This was an Aboriginal group, the Kuuk Thaayorre. One of my first experiences there was standing next to a five year old girl. I asked her the same question that I’ve asked many eminent scientists and professors, rooms full of scholars in America. I ask everyone, “Close your eyes, and now point southeast.” When I ask people to do this, usually they laugh because they think, “well, that’s a silly question. How am I supposed to know that?” Often a lot of people refuse to point. They don’t know which way it is. When people do point, it takes a while, and they point in every possible direction. I usually don’t know which way southeast is myself, but that doesn’t preclude me from knowing that not all of the possible given answers are correct, because people point in every possible direction.
But here I am standing next to a five year old girl in Pormpuraaw, in this Aboriginal community, and I ask for her to point southeast, and she’s able to do it without hesitation, and she’s able to do it correctly. That’s the case for people who live in this community generally. That’s just a normal thing to be able to do. I had to take out a compass to make sure that she was correct, because I couldn’t remember. [Continue reading…]
YaleNews: Good mental health and clear thinking depend upon our ability to store and manipulate thoughts on a sort of “mental sketch pad.” In a new study, Yale School of Medicine researchers describe the molecular basis of this ability — the hallmark of human cognition — and describe how a breakdown of the system contributes to diseases such as schizophrenia and Alzheimer’s disease.
“Insults to these highly evolved cortical circuits impair the ability to create and maintain our mental representations of the world, which is the basis of higher cognition,” said Amy Arnsten, professor of neurobiology and senior author of the paper published in the Feb. 20 issue of the journal Neuron.
High-order thinking depends upon our ability to generate mental representations in our brains without any sensory stimulation from the environment. These cognitive abilities arise from highly evolved circuits in the prefrontal cortex. Mathematical models by former Yale neurobiologist Xiao-Jing Wang, now of New York University, predicted that in order to maintain these visual representations the prefrontal cortex must rely on a family of receptors that allow for slow, steady firing of neurons. The Yale scientists show that NMDA-NR2B receptors involved in glutamate signaling regulate this neuronal firing. These receptors, studied at Yale for more than a decade, are responsible for activity of highly evolved brain circuits found especially in primates.
The disease-bias of medical research dictates that research funding is invariably going to hinge on the promise of treatment for one or more major disorders, in this case schizophrenia and Alzheimer’s. Still, in research such as that described above, it might be just as interesting and fruitful to investigate the neurological impact of what have become ubiquitous forms of behavior such as text-messaging.
In non-neurological language, these slow-firing neurons seem to support a constellation of cognitive activities, including deliberation, reflection, analysis, and problem-solving.
Do handheld devices and the distractions they cause impair our ability to create and maintain sound mental representations of the world?
I don’t imagine Apple, or anyone else with a vested interest in proving that hyper-connectivity is harmless, would welcome this line of inquiry, but still, it seems like a question worth asking.
Galloway is not alone in holding such sentiments – but as a tactic in support of Palestinians, it’s a dead end. Primarily, that’s because the Boycott, Divestment and Sanctions (BDS) movement doesn’t call for the avoidance of people purely on the basis of nationality. Thanks to Galloway, its national committee has just issued a statement, to clear up this particular fallacy.
Whatever your views on BDS – and there are many – Galloway’s move is plainly an own goal (assuming his goal is to support Palestinians, rather than generate publicity for himself). One reason that many left-leaning Jews don’t join the BDS movement is precisely because the boycott is perceived to be about rage against people, rather than an effective political tool. What’s the best way to cement that belief? Announce you’re avoiding Israelis as part of your commitment to BDS. Cue a flood of “told you sos” from those who say it’s all about punishing Israelis just for being who they are. [Continue reading…]
There’s a big difference between concluding that talking is fruitless, and refusing to talk.
Refusing to talk prejudges the outcome, and it attaches more significance to the act of communication than to its content.
What talking can do is open a door into a creative space. It opens the possibility of arriving somewhere new.
Talking engages the plasticity of the human mind.
Rigid minds are always in conflict with the world because the world is always changing. So, even if we find ourselves up against the rigidity of others, it at least serves our own interests to keep our own minds flexible and explore the malleability of our own thought.
The Washington Post‘s Jennifer Rubin quotes the Free Beacon, which quotes an email which says that in a speech delivered at Rutgers University in 2010, Chuck Hagel “basically said that Israel … was risking becoming an apartheid state if it didn’t allow the Palestinians to form a state.”
“Does this fundamentally shift the playing field?” asks Rubin.
No. It just means Hagel was echoing several former Israeli prime ministers and many other Israelis.
“If the day comes when the two-state solution collapses, and we face a South African-style struggle for equal voting rights (also for the Palestinians in the territories), then, as soon as that happens, the State of Israel is finished.” Israel’s Prime Minister Ehud Olmert speaking to Haaretz, November, 2007.
“The simple truth is, if there is one state” including Israel, the West Bank and Gaza, “it will have to be either binational or undemocratic. … if this bloc of millions of Palestinians cannot vote, that will be an apartheid state.” Defense Minister and former Prime Minister Ehud Barak, April, 2010.
“Jewish self-righteousness is taken for granted among ourselves to such an extent that we fail to see what’s right in front of our eyes. It’s simply inconceivable that the ultimate victims, the Jews, can carry out evil deeds. Nevertheless, the state of Israel practises its own, quite violent, form of Apartheid with the native Palestinian population.” Shulamit Aloni, Minister for Education under Yitzhak Rabin, January, 2007.
“[In 1967] We enthusiastically chose to become a colonial society, ignoring international treaties, expropriating lands, transferring settlers from Israel to the occupied territories, engaging in theft and finding justification for all these activities. Passionately desiring to keep the occupied territories, we developed two judicial systems: one – progressive, liberal – in Israel; and the other – cruel, injurious – in the occupied territories. In effect, we established an apartheid regime in the occupied territories immediately following their capture. That oppressive regime exists to this day.” Michael Ben-Yair, Israel’s attorney general from 1993-96, March, 2002.
“Israel must decide quickly what sort of environment it wants to live in because the current model, which has some apartheid characteristics, is not compatible with Jewish principles.” Ami Ayalon, former Israeli admiral and former Labour member of Israeli Knesset, December, 2000.
“These dots are growing evidence of the lack of the spirit of freedom and the emergence of apartheid and fascism. If you look at each dot separately you might miss the bigger picture. Like a child watching a military brigade march, and after seeing the battalions, the batteries and the companies, asking: ‘And when is the brigade finally coming?’ the answer is that while he watched the marching of the battalions, batteries and companies, he was actually watching the brigade. So is the situation in Israel. You do not have to ask where the apartheid is. These events, which are accepted with silence and indifference, together create a picture of a terrible reality.” Yediot’s legal affairs editor, Judge (ret.) Boaz Okon, June, 2010.
“The historical background of the Israeli apartheid state-in-the-making that is emerging before our eyes should be sought in 1967. It is part of a process that has been going on for about 44 years: What started as rule over another people has gradually ripened – especially since the latter part of the 1970s – into a colonialism that is nurturing a regime of oppression and discrimination with regard to the Palestinian population. It is robbing that population of its land and of its basic civil rights, and is encouraging a minority group (the settlers ) to develop a crude, violent attitude toward the Arabs in the territories. This was exactly the reality that, after many years, led to the establishment of the apartheid state in South Africa.” Prof. Daniel Blatman, Holocaust researcher and head of the Institute for Contemporary Jewry, Hebrew University of Jerusalem, April, 2011.
“As it is today, it is an Apartheid state, a full apartheid in the occupied territories and a growing apartheid in Israel – and if this goes on, it will be full apartheid throughout the country, incontestably.” Uri Avnery, Israeli peace activist, November, 2012.
“The spokesmen of the dovish camp [in Israel] tell us horror stories about a future binational state. But the binational state is already here. It has a rigid apartheid legal system, as the High Court of Justice fades away.
“The system preserving this apartheid is more ruthless than that seen in South Africa, where the black were a labor force and could therefore also make a living. It is equipped with the lie of being ‘temporary’.” Yitzhak Laor, November 2009.
“Israel’s apartheid movement is coming out of the woodwork and is taking on a formal, legal shape. It is moving from voluntary apartheid, which hides its ugliness through justifications of ‘cultural differences’ and ‘historic neglect’ which only requires a little funding and a couple of more sewage pipes to make everything right – to a purposeful, open, obligatory apartheid, which no longer requires any justification.” Zvi Bar’el, October, 2010.
Adam Etinson writes: In August of 1563, Michel de Montaigne, the famous French essayist, was introduced to three Brazilian cannibals who were visiting Rouen, France, at the invitation of King Charles the Ninth. The three men had never before left Brazil, had just been subjected to a long interrogation by the king (who was 13 years old at the time), and if they had not already contracted some dangerous European illness, they were surely undergoing a rather severe case of culture shock. Despite this, they still had enough poise to lucidly respond to Montaigne’s questions about what they thought of their new surroundings.
The observations shared by the native Brazilians have a certain comical quality. Because they looked on French society with such fresh eyes, their observations make the familiar seem absurd. But they are also morally revealing. First, the Brazilians expressed surprise that “so many tall, bearded men, all strong and well armed” (i.e., the king’s guard) were willing to take orders from a small child: something that would have been unthinkable in their own society. And second, the Brazilians were shocked by the severe inequality of French citizens, commenting on how some men “were gorged to the full with things of every sort” while others “were beggars at their doors, emaciated with hunger and poverty.” Since the Brazilians saw all human beings “as halves of one another… they found it strange that these poverty-stricken halves should suffer such injustice, and that they did not take the others by the throat or set fire to their houses.”
Montaigne records these observations in an essay entitled, “Des Cannibales.” Well ahead of its time, the essay challenges the haughty denigration of cannibals that was so common among Montaigne’s contemporaries, but not by arguing that cannibalism itself is a morally acceptable practice. Instead, Montaigne makes the more provocative claim that, as barbaric as these Brazilian cannibals may be, they are not nearly as barbaric as 16th-century Europeans themselves. To make his case, Montaigne cites various evidence: the wholesome simplicity and basic nobility of native Brazilian life; the fact that some European forms of punishment — which involved feeding people to dogs and pigs while they were still alive — were decidedly more horrendous than the native Brazilian practice of eating one’s enemies after they are dead; and the humane, egalitarian character of the Brazilians’ moral sensibility, which was on display in their recorded observations.
The fact that, despite all this, 16th-century Western Europeans remained so deeply convinced of their own moral and intellectual superiority was, to Montaigne, evidence of a more general phenomenon. He writes:
We all call barbarous anything that is contrary to our own habits. Indeed we seem to have no other criterion of truth and reason than the type and kind of opinions and customs current in the land where we live. There we always see the perfect religion, the perfect political system, the perfect and most accomplished way of doing everything.
Montaigne most certainly wasn’t the first to make note of our tendency to automatically assume the superiority of local beliefs and practices; Herodotus, the Greek historian of the fifth century B.C., made very similar observations in his Histories, noting how all peoples are “accustomed to regard their own customs as by far the best.” And in his famous Letter 93, which presents an early argument against religious toleration, the medieval Catholic theologian Saint Augustine laments the way in which old customs produce a closed-minded resistance to alternative beliefs and practices that, he argues, is best broken by the threat of punishment. When the 19th-century sociologist William Graham Sumner later named this tendency “ethnocentrism,” the term, and the allegation, became a mantra of 20th-century cultural anthropology.
Since some people might be reluctant to profit from cultural insights provided by cannibals, it’s worth adding some ethnographic detail to Montaigne’s account.
The people here referred to as “Brazilian cannibals” (of course there was no such country as Brazil at that time) would have been Tupinamba, whose population numbered an estimated one million when the Portuguese first arrived. They practiced ritual exocannibalism.
(Christians might note that those who receive the Eucharist are participating in a form of symbolic cannibalism. This ritualized consumption of human blood and flesh takes place in the context of a religion whose central motif is a sacred act of human execution following the vilification and torture of the emissary and embodiment of the deity. Whatever theological “yes, but…”s one might want to insert, there’s no escaping the fact that Christian iconography and belief can from the perspective of many other cultures look just as problematic as cannibalism.)
Lending new meaning to the expression “getting a name for oneself,” Meg Pickard explains:
[I]t is the taking of names which is the key to unlocking the reasons behind Tupinamba anthropophagy. The taking of captives was not to provide a source of slave labour, but rather to provide a fresh source of names for the community. The acquisition of names was extremely important in Tupinamba culture.
A man got a new name after killing a captive (or an enemy in warfare). Sometimes, the new name was that of the slain person. Furthermore, those involved in the ritual handling of the captive also gained new names – the women who dressed the captive, and who bit their arms in a taunting manner following his capture, and preceding his execution, and the men who prepared the arrows to be used, captured the prisoner, or who actually executed him all received new names. Other people surrounding the ritual might also have acquired new names – including, for example, the wife of the executioner. And it was the acquisition of names through warfare and the ritual execution of captives that led to inclusion in such activities as marriage, beer drinking or speaking in public. Without obtaining a name, or a lip-plug, or body scarification in this way, a man could not participate in any of these activities. It was believed that only the brave – and by definition, this meant those who had accumulated many names – would go on to the afterlife.
Furthermore, it was believed by both captives and the Tupinamba that it was actually preferable (more honourable and noble) to be killed and eaten than to die a natural death and be buried in the ground (and perhaps be eaten by animals), and indeed that “to be killed ceremonially and then eaten was the fate for which any brave longed once he had lost his liberty”.
So, having established that Montaigne’s Brazilian cannibals didn’t belong to a culture that prized the taste or nutritional value of human flesh, let’s return to the larger issue at hand: ethnocentrism.
While Etinson notes that ethnocentrism is universal, he explores neither what gives rise to this tendency nor why ethnocentrism expresses itself to differing degrees among differing cultures.
As social animals, human beings attach immense value to social acceptance and social status. What facilitates social organization more than anything else is our capacity to mimic one another. We are like herding parrots.
In the chatter of human discourse we prefer to borrow the thoughts of others rather than conjure our own and will gladly mimic whichever thoughts are most popular. Ethnocentrism is a form of cultural group-think in which every participant’s status is elevated through mutual reinforcement of the same ideas of superiority.
Some people — particularly those who pride themselves on being independent-minded — may balk at the assertion that we are herd animals, always inclined to think each other’s thoughts.
Still, think about it: what is language itself if not the accumulation and sharing of borrowed thoughts? Without the borrowed thoughts out of which language is constructed we would have no thoughts at all!
If ethnocentrism is a form of cultural group-think, a number of factors, if combined, will drive this view to an extreme:
where ethnocentrism provides the basis for an ethnocracy;
within this ethnocracy there is a governing ethnic ideology;
a people already bound together by their own sense of uniqueness speaks a language spoken nowhere else;
this culture sees itself as surrounded by an alien, hostile, and inferior culture;
and within this embattled mindset an existential divide seems to separate the homeland from the rest of the world.
In such a context — yes, I’m talking about Israel — the need for cultural bridge-building has never been greater, nor has such an endeavor ever been more difficult.
It might not be Saturday Night Live, but when the well-known and popular host of an HBO talk show, Bill Maher, can say, “the Israelis are controlling our government,” and win a round of applause, it’s clear that truth-telling on the dysfunctional relationship between Israel and the United States is going mainstream. Maher wasn’t taking a wild shot in the dark. He knew his comment would be well received.
For others who still hold back, it’s not that they are unaware of what kinds of views would resonate with ordinary left-leaning younger Americans. It’s a question of whether media figures in the spotlight are willing to attract negative attention in corporate boardrooms. Unlike the networks, HBO isn’t beholden to advertisers, so that obviously gives Maher some extra latitude.
The responses of Maher’s panelists were telling. The Daily Caller‘s Jamie Weinstein, clearly aware that he faced an unfriendly audience, seemed to have put himself in a curious position. He was faulting Hagel for saying that the State Department is Israeli-controlled — by implication, other branches of government (e.g. Congress) could more reasonably be described that way. The Democratic strategist Donna Brazile took it upon herself to change the subject, while Jon Meacham turned his back on Maher to make it clear there was no way he would get drawn into a discussion on Israel.
Long before the term cyborg had been coined, Henry David Thoreau — who saw few if any advances in our inexorable movement away from our natural condition — declared: “men have become the tool of their tools.”
Creating the means for someone with total color blindness to hear color seems like an amazing idea, but we glimpse the dystopian potential of such technology when the beneficiary says the sounds coming from the strident colors of cleaning products in a supermarket aisle are more enjoyable than the sound-color of the ocean.
Singularity Hub: What would your world be like if you couldn’t see color? For artist Neil Harbisson, a rare condition known as achromatopsia that made him completely color blind rendered that question meaningless. Not being able to see color at all meant that there was no blue in the sky or green in grass, and these descriptions were merely something to be taken on faith or memorized to get the correct answers in school.
But Neil’s life changed drastically when he met computer scientist Adam Montandon; with help from a few others, they developed the eyeborg, an electronic eye that transforms colors into sounds. Colors became meaningful for Neil in an experiential way, but one that was fundamentally different from how others described them.
This augmentation device wasn’t like a set of headphones that he could put on when he wanted to “listen” to the world around him, but became a permanent part of who he was. Though he had to memorize how the sounds corresponded to certain colors, in time the sounds became part of his perception and the way he “sees” the world. He even started to expand the range of what he could “see”, so that wavelengths of light outside of the visible range could be perceived.
In other words, he became cybernetic…
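The article doesn’t spell out how the eyeborg actually maps color onto sound, but the underlying idea — assigning each hue its own audible pitch — can be sketched in a few lines. This is a toy illustration only: the linear mapping and the 120–1400 Hz band are assumptions for the example, not Harbisson’s actual scale.

```python
def hue_to_frequency(hue_degrees, f_min=120.0, f_max=1400.0):
    """Map a hue angle (0-360 degrees) linearly onto an audible frequency band.

    A hypothetical stand-in for the eyeborg's color-to-sound mapping;
    the real device's scale is not described in the article.
    """
    hue = hue_degrees % 360  # wrap around the color wheel
    return f_min + (hue / 360.0) * (f_max - f_min)

# Red (0 degrees) sits at the bottom of the band; cyan (180 degrees)
# lands in the middle; the wheel wraps, so 360 degrees equals 0.
print(hue_to_frequency(0))    # 120.0
print(hue_to_frequency(180))  # 760.0
```

Extending the perceivable range, as Neil did with infrared and ultraviolet, would amount to widening the input domain of such a function beyond the visible spectrum.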
Neil boldly paints a picture of a future in which augmentation devices will alter how we experience the world. Whether for corrective or elective motives, people will someday adopt these technologies routinely, perhaps choosing artificial synesthesia as a means of seeing the world in a broader or deeper way.
In color theory, color is described in terms of three attributes: hue, saturation, and value. Hue is what we generally refer to with color terms — red, green, purple etc. Saturation is the intensity of a color — pale yellow, for instance, has less saturation than lemon yellow. And value refers to the lightness or darkness of a color. In moonlight, all we can perceive is color value, without any ability to register hues or saturation.
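These three attributes correspond directly to the HSV color model, and the distinctions are easy to see programmatically. A minimal sketch using Python’s standard colorsys module — the RGB triples for “pale yellow” and “lemon yellow” are illustrative approximations, not canonical values:

```python
import colorsys

def describe(rgb):
    """Return (hue in degrees, saturation, value) for an RGB triple in 0..1."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    return round(h * 360), round(s, 2), round(v, 2)

# Same hue family (yellow, ~60 degrees), different saturation:
pale_yellow = (1.0, 1.0, 0.6)   # low saturation
lemon_yellow = (1.0, 1.0, 0.0)  # full saturation
print(describe(pale_yellow))    # hue 60, saturation 0.4, value 1.0
print(describe(lemon_yellow))   # hue 60, saturation 1.0, value 1.0

def value_only(rgb):
    """'Moonlight vision': discard hue and saturation, keep only value."""
    _, _, v = colorsys.rgb_to_hsv(*rgb)
    return v
```

The `value_only` function mimics what the passage describes: in moonlight, all hues and saturations collapse, leaving only lightness and darkness.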
When color is understood in these terms, achromatopsia does not have to be viewed as a lack of color vision since, at least in some cases, it can actually lead to the experience of a refinement of sight.
As much as Neil Harbisson might feel that technology has enhanced his perception of the world, I find it depressing that anyone would fail to see that valuing this kind of nominal extension of the senses first requires underestimating the subtlety of human perception.
In An Anthropologist On Mars, Oliver Sacks describes the experience of Jonathan I., a 65-year-old artist who suddenly lost his color vision as a result of a concussion sustained in a car accident.
As an artist, the loss was devastating.
He knew the colors of everything, with an extraordinary exactness (he could give not only the names but the numbers of colors as these were listed in a Pantone chart of hues he had used for many years). He could identify the green of van Gogh’s billiard table in this way unhesitatingly. He knew all the colors in his favorite paintings, but could no longer see them, either when he looked or in his mind’s eye…
As the months went by, he particularly missed the brilliant colors of spring — he had always loved flowers, but now he could only distinguish them by shape or smell. The blue jays were brilliant no longer; their blue, curiously, was now seen as pale grey. He could no longer see the clouds in the sky, their whiteness, or off-whiteness as he saw them, being scarcely distinguishable from the azure, which seemed bleached to a pale grey…
His initial sense of helplessness started to give way to a sense of resolution — he would paint in black and white, if he could not paint in color; he would try to live in a black-and-white world as fully as he could. This resolution was strengthened by a singular experience, about five weeks after his accident, as he was driving to the studio one morning. He saw the sunrise over the highway, the blazing reds all turned into black: “The sun rose like a bomb, like some enormous nuclear explosion”, he said later. “Had anyone ever seen a sunrise in this way before?”
Inspired by the sunrise, he started painting again—he started, indeed, with a black-and-white painting that he called Nuclear Sunrise, and then went on to the abstracts he favored, but now painting in black and white only. The fear of blindness continued to haunt him but, creatively transmuted, shaped the first “real” paintings he did after his color experiments. Black-and-white paintings he now found he could do, and do very well. He found his only solace working in the studio, and he worked fifteen, even eighteen, hours a day. This meant for him a kind of artistic survival: “I felt if I couldn’t go on painting”, he said later, “I wouldn’t want to go on at all.”…
Color perception had been an essential part not only of Mr. I.’s visual sense, but his aesthetic sense, his sensibility, his creative identity, an essential part of the way he constructed his world — and now color was gone, not only in perception, but in imagination and memory as well. The resonances of this were very deep. At first he was intensely, furiously conscious of what he had lost (though “conscious”, so to speak, in the manner of an amnesiac). He would glare at an orange in a state of rage, trying to force it to resume its true color. He would sit for hours before his (to him) dark grey lawn, trying to see it, to imagine it, to remember it, as green. He found himself now not only in an impoverished world, but in an alien, incoherent, and almost nightmarish one. He expressed this soon after his injury, better than he could in words, in some of his early, desperate paintings.
But then, with the “apocalyptic” sunrise, and his painting of this, came the first hint of a change, an impulse to construct the world anew, to construct his own sensibility and identity anew. Some of this was conscious and deliberate: retraining his eyes (and hands) to operate, as he had in his first days as an artist. But much occurred below this level, at a level of neural processing not directly accessible to consciousness or control. In this sense, he started to be redefined by what had happened to him — redefined physiologically, psychologically, aesthetically — and with this there came a transformation of values, so that the total otherness, the alienness of his V1 world, which at first had such a quality of horror and nightmare, came to take on, for him, a strange fascination and beauty…
At once forgetting and turning away from color, turning away from the chromatic orientation and habits and strategies of his previous life, Mr. I., in the second year after his injury, found that he saw best in subdued light or twilight, and not in the full glare of day. Very bright light tended to dazzle and temporarily blind him — another sign of damage to his visual systems—but he found the night and nightlife peculiarly congenial, for they seemed to be “designed”, as he once said, “in terms of black and white.”
He started becoming a “night person”, in his own words, and took to exploring other cities, other places, but only at night. He would drive, at random, to Boston or Baltimore, or to small towns and villages, arriving at dusk, and then wandering about the streets for half the night, occasionally talking to a fellow walker, occasionally going into little diners: “Everything in diners is different at night, at least if it has windows. The darkness comes into the place, and no amount of light can change it. They are transformed into night places. I love the nighttime”, Mr. I. said. “Gradually I am becoming a night person. It’s a different world: there’s a lot of space — you’re not hemmed in by streets, by people — It’s a whole new world.”…
Most interesting of all, the sense of profound loss, and the sense of unpleasantness and abnormality, so severe in the first months following his head injury, seemed to disappear, or even reverse. Although Mr. I. does not deny his loss, and at some level still mourns it, he has come to feel that his vision has become “highly refined”, “privileged”, that he sees a world of pure form, uncluttered by color. Subtle textures and patterns, normally obscured for the rest of us because of their embedding in color, now stand out for him…
He feels he has been given “a whole new world”, which the rest of us, distracted by color, are insensitive to. He no longer thinks of color, pines for it, grieves its loss. He has almost come to see his achromatopsia as a strange gift, one that has ushered him into a new state of sensibility and being.
Perhaps the greatest ability we are endowed with by nature resides in none of our individual senses but in our surprising powers of adaptation.
In the current technology-worshiping milieu we are indeed becoming the tools of our tools, but in a more literal sense than Thoreau might have imagined. Unwitting slaves, chained to machines — through devices that supposedly form indispensable connections to the world, we are gradually becoming disconnected from what it means to be human.
Following Bulgaria’s announcement that suspects in the Burgas bombing, which killed five Israeli tourists last July, have been linked to Hezbollah, Tom Donilon, national security adviser to President Obama, has written an op-ed for the New York Times. Predictably, he calls on the EU to add the Lebanese organization to its terrorist list. No doubt this will help the White House score points with Israel and also win favor with Iran hawks. However, a larger story is being obscured.
Let’s assume that the bombers were indeed linked to Hezbollah. That doesn’t necessarily make this a Hezbollah operation or an operation serving Hezbollah’s interests. A more likely scenario is that the attack was conducted on behalf of Iran in retaliation for a string of terrorist attacks launched by Israel against civilian targets in Iran.
If the EU goes ahead and lists Hezbollah as a terrorist organization and wants to be even-handed, it should also list Mossad — especially at a time when there is rather compelling evidence that Mossad is even willing to murder Israelis.