WSJ — Noonan & Hentoff: What We Lose if We Give Up Privacy — A civil libertarian reflects on the dangers of the surveillance state

Excerpts from the Wall Street Journal — Updated August 16, 2013, 7:05 p.m. ET

Read more at http://online.wsj.com/article/SB10001424127887323639704579015101857760922.html

What is privacy? Why should we want to hold onto it? Why is it important, necessary, precious?  Is it just some prissy relic of the pretechnological past?

We talk about this now because of Edward Snowden, the National Security Agency revelations, and new fears that we are operating, all of us, within what has become or is becoming a massive surveillance state. They log your calls here, they can listen in, they can read your emails. They keep the data in mammoth machines that contain a huge collection of information about you and yours. This of course is in pursuit of a laudable goal, security in the age of terror.

Is it excessive? It certainly appears to be. Does that matter? Yes. Among other reasons: The end of the expectation that citizens’ communications are and will remain private will probably change us as a people, and a country.

***

Among the pertinent definitions of privacy from the Oxford English Dictionary: “freedom from disturbance or intrusion,” “intended only for the use of a particular person or persons,” belonging to “the property of a particular person.” Also: “confidential, not to be disclosed to others.” Among others, the OED quotes the playwright Arthur Miller, describing the McCarthy era: “Conscience was no longer a private matter but one of state administration.”

Privacy is connected to personhood. It has to do with intimate things—the innards of your head and heart, the workings of your mind—and the boundary between those things and the world outside.

[Illustration: Martin Kozlowski]

A loss of the expectation of privacy in communications is a loss of something personal and intimate, and it will have broader implications. That is the view of Nat Hentoff, the great journalist and civil libertarian. He is 88 now and on fire on the issue of privacy. “The media has awakened,” he told me. “Congress has awakened, to some extent.” Both are beginning to realize “that there are particular constitutional liberty rights that [Americans] have that distinguish them from all other people, and one of them is privacy.”

Mr. Hentoff sees excessive government surveillance as violative of the Fourth Amendment, which protects “the right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures” and requires that warrants be issued only “upon probable cause . . . particularly describing the place to be searched, and the persons or things to be seized.”

But Mr. Hentoff sees the surveillance state as a threat to free speech, too. About a year ago he went up to Harvard to speak to a class. He asked, he recalled: “How many of you realize the connection between what’s happening with the Fourth Amendment with the First Amendment?” He told the students that if citizens don’t have basic privacies—firm protections against the search and seizure of their private communications, for instance—they will be left feeling “threatened.” This will make citizens increasingly concerned “about what they say, and they do, and they think.” It will have the effect of constricting freedom of expression. Americans will become careful about what they say that can be misunderstood or misinterpreted, and then too careful about what they say that can be understood. The inevitable end of surveillance is self-censorship.

All of a sudden, the room became quiet. “These were bright kids, interested, concerned, but they hadn’t made an obvious connection about who we are as a people.” We are “free citizens in a self-governing republic.”

Mr. Hentoff once asked Justice William Brennan “a schoolboy’s question”: What is the most important amendment to the Constitution? “Brennan said the First Amendment, because all the other ones come from that. If you don’t have free speech you have to be afraid, you lack a vital part of what it is to be a human being who is free to be who you want to be.” Your own growth as a person will in time be constricted, because we come to know ourselves by our thoughts.

He wonders if Americans know who they are compared to what the Constitution says they are.

Mr. Hentoff’s second point: An entrenched surveillance state will change and distort the balance that allows free government to function successfully. Broad and intrusive surveillance will, definitively, put government in charge. But a republic only works, Mr. Hentoff notes, if public officials know that they—and the government itself—answer to the citizens. It doesn’t work, and is distorted, if the citizens must answer to the government. And that will happen more and more if the government knows—and you know—that the government has something, or some things, on you. “The bad thing is you no longer have the one thing we’re supposed to have as Americans living in a self-governing republic,” Mr. Hentoff said. “The people we elect are not your bosses, they are responsible to us.” They must answer to us. But if they increasingly control our privacy, “suddenly they’re in charge if they know what you’re thinking.”

This is a shift in the democratic dynamic. “If we don’t have free speech then what can we do if the people who govern us have no respect for us, may indeed make life difficult for us, and in fact belittle us?”  If massive surveillance continues and grows, could it change the national character? “Yes, because it will change free speech.”

***

A version of this article appeared August 16, 2013, on page A13 in the U.S. edition of The Wall Street Journal, with the headline: What We Lose if We Give Up Privacy.


The Delete Squad: Free Speech on the Internet (New Republic)

The Delete Squad: Google, Twitter, Facebook and the new global battle over the future of free speech

BY JEFFREY ROSEN  — New Republic — April 29, 2013

A year ago this month, Stanford Law School hosted a little-noticed meeting that may help decide the future of free speech online. It took place in the faculty lounge, where participants were sustained in their deliberations by bagels and fruit platters. Among the roughly two dozen attendees, the most important were a group of fresh-faced tech executives, some of them in t-shirts and unusual footwear, who are in charge of their companies’ content policies. Their positions give these young people more power over who gets heard around the globe than any politician or bureaucrat—more power, in fact, than any president or judge.

Collectively, the tech leaders assembled that day in Palo Alto might be called “the Deciders,” in a tribute to Nicole Wong, the legal director of Twitter, whose former colleagues affectionately bestowed on her the singular version of that nickname while she was deputy general counsel at Google. At the dawn of the Internet age, some of the nascent industry’s biggest players staked out an ardently hands-off position on hate speech; Wong was part of the generation that discovered firsthand how untenable this extreme libertarian position was. In one representative incident, she clashed with the Turkish government over its demands that YouTube take down videos posted by Greek soccer fans claiming that Kemal Ataturk was gay. Wong and her colleagues at Google agreed to block access to the clips in Turkey, where insulting the country’s founder is illegal, but Turkish authorities—who insisted on a worldwide ban—responded by denying their citizens access to the whole site for two years. “I’m taking my best guess at what will allow our products to move forward in a country,” she told me in 2008. The other Deciders, who don’t always have Wong’s legal training, have had to make their own guesses, each with ramifications for their company’s bottom line.

The session at Stanford concluded with the attendees passing a resolution for the formation of an “Anti-Cyberhate Working Group,” then heading over to Facebook’s headquarters to drink white wine out of plastic cups at a festive reception. But despite the generally laid-back vibe, the meeting, part of a series of discussions dating back more than a year, had a serious agenda. Because of my work on the First Amendment, I was asked to join the conversations, along with other academics, civil libertarians, and policymakers from the United States and abroad. Although I can’t identify all the participants by name, I am at liberty, according to the ground rules of our meetings, to describe the general thrust of the discussions, which are bringing together the Deciders at a pivotal time.

As online communication proliferates—and the ethical and financial costs of misjudgments rise—the Internet giants are grappling with the challenge of enforcing their community guidelines for free speech. Some Deciders see a solution in limiting the nuance involved in their protocols, so that only truly dangerous content is removed from circulation. But other parties have very different ideas about what’s best for the Web. Increasingly, some of the Deciders have become convinced that the greatest threats to free speech during the next decade will come not just from authoritarian countries like China, Russia, and Iran, which practice political censorship and have been pushing the United Nations to empower more of it, but also from a less obvious place: European democracies contemplating broad new laws that would require Internet companies to remove posts that offend the dignity of an individual, group, or religion. The Deciders are right to be concerned about the balkanization of the Internet. There is, moreover, a bold way to respond to that threat. The urgent question is whether the Deciders will embrace it.

At Facebook, the Deciders are led by Dave Willner, the head of the company’s content policy team. His career provides a kind of case study in how the Deciders’ thinking has evolved. Now 28, Willner joined Facebook five years ago, working night shifts in the help center, where he answered e-mails from users about how to use the photo uploader. Within a year, he had been promoted to work on content policy. Today, he manages a crew of six employees who work around shared desks at Facebook’s headquarters in Menlo Park; rather than a global hub for content control, their space, festooned with colorful posters, more closely resembles a neater-than-usual college dorm. Toiling under Willner’s team are a few hundred “first responders” who review complaints about nudity, porn, violence, and hate speech from offices in Menlo Park, Austin, Dublin, and Hyderabad, India. (Willner is also married to a fellow Facebook employee who now leads the User Safety team, responsible, among other things, for child protection and suicide prevention; one imagines rather heady dinner chatter.) Facebook had only 100 million users when Willner was hired, compared with the billion-plus it has now. Each day, those users upload more than 300 million photos alone; every week, Facebook receives more than two million requests to remove material. (The New Republic’s owner was a Facebook co-founder.)

When I first met Willner at the Stanford meeting, he wore an orange T-shirt, a gray striped sweater, blue corduroy trousers, round glasses, and a bookish beard—looking very much like the former anthropology and archeology major that he was before starting at Facebook. He took a class about Islam in his senior year, which he says comes in handy in his current job. At the time Willner joined Facebook’s content policy team, the company had no rules on the books for what speech violated its terms of service. So Willner decided to write them himself. He chose as his model university anti-harassment codes, since he himself had just graduated from college. But he soon found that vague standards prohibiting speech that creates a “hostile environment” weren’t practical. The Facebook screeners scattered across three continents brought vastly different cultural backgrounds to their roles and had to rule on thousands of pieces of content daily. The sheer range and complexity of the judgment calls that had to be made compounded the challenge: Is this person naked? Is a photo of Hitler racism, or political commentary? Is it bullying to post a photo of someone distorted through Photoshop? Is posting a photo of a gun a credible threat of violence? What if the gun is from the cover of a rap album?

Willner had read John Stuart Mill in college and understood the crowning achievement of the American First Amendment tradition, which allows speech to be banned only when it is intended—and likely—to incite imminent violence or lawless action. By contrast, as Willner was learning, European law draws a tighter line, prohibiting so-called group libel, or speech that offends the dignity of members of a protected class and lowers their standing in society. Willner decided that neither method would do: Both the U.S. focus on the speaker’s intent and the European focus on the social consequences of their speech would be too subjective for a 22-year-old content reviewer in Dublin or Hyderabad to apply in 20 seconds. What Facebook needed, he came to believe, was a hate-speech policy that focused on concrete, easily categorized actions, so that the decision to remove controversial content, or to escalate the dispute to Willner and his colleagues in Silicon Valley, could be based on nothing more than the information contained within the form that Facebook users file to complain about offensive posts and applied like an algorithm. He sought an engineer’s response to a thorny historical and legal problem—a very Silicon Valley approach.

At first, it didn’t go well. To try to spell out what qualified as a hateful post, Facebook hired an outside firm to write an “Operations Manual for Live Content Moderators,” which was subsequently leaked. Some of the distinctions made by the document were ridiculed by the blogosphere for being jesuitical: “Blatant (obvious) depictions of camel toes and moose knuckles” were banned in the “sex and nudity category,” while the graphic content category held that “bodily fluids (except semen) are ok to show unless a human being is captured in the process.” Furthermore, the draft standards seemed to ban all “Holocaust denial which focuses on hate speech” and “all attacks on Ataturk (visual and text)” around the world, even though Holocaust denial is illegal only in certain countries, including France and Germany, and attacking Ataturk is outlawed only in Turkey. In response to the uproar, Facebook fired the consulting company, and Willner redoubled his efforts to minimize the opportunities for subjective verdicts by his first responders.

Eventually, the project led to Facebook’s most important free-speech decision: to ban attacks on groups, but not on institutions. The current community standards declare: “We do not permit individuals … to attack others based on their race, ethnicity, national origin, religion, sex, gender, sexual orientation, disability or medical condition.” But Facebook allows caricatures that depict members of protected groups doing unflattering things, as well as attacks on their faith or leaders. It’s only when a user categorically reviles a protected group that he crosses the line: “I hate Islam” or “I hate the Pope” is fine; “I hate Muslims” or “I hate Catholics” is not. The distinctions might be seen as a triumph of reductionism. But they have empowered the company to resist growing calls for the wholesale deletion of speech that foreign governments and their citizens consider blasphemous.
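To make the reductionism concrete, here is a minimal sketch, in Python, of the rule as the passage describes it. It is purely illustrative: the word lists and the naive string matching are my assumptions, not Facebook’s actual tooling.

```python
# Toy sketch of the "groups, not institutions" line drawn above.
# The word lists and naive phrase matching are assumptions made for
# illustration; they are not Facebook's actual moderation code.

PROTECTED_GROUPS = {"muslims", "catholics"}   # classes of people
INSTITUTIONS = {"islam", "the pope"}          # faiths and leaders

def crosses_the_line(post: str) -> bool:
    """Flag only categorical attacks on a protected class of people;
    reviling an institution, faith, or leader is left alone."""
    text = post.lower()
    if any(f"i hate {i}" in text for i in INSTITUTIONS):
        return False  # attack on an idea or leader, not on people
    return any(f"i hate {g}" in text for g in PROTECTED_GROUPS)

assert not crosses_the_line("I hate Islam")      # institution: allowed
assert not crosses_the_line("I hate the Pope")   # leader: allowed
assert crosses_the_line("I hate Muslims")        # group: removed
assert crosses_the_line("I hate Catholics")      # group: removed
```

The toy example preserves the point: the decision needs only the words of the post, not a judgment about the speaker’s intent or the social consequences of the speech.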

Facebook’s new policy was dramatically tested last September, when the company refused to remove Innocence of the Muslims, the anti-Muhammad video that was initially blamed for causing the Benghazi riots that led to the death of the American ambassador to Libya. After watching the video, Willner and his colleagues concluded that, because nobody said anything explicitly denigrating of Muslims, there was nothing to ban.

As the world watched footage of the body of Christopher Stevens being dragged through the streets, YouTube reached a similar decision. Despite allegations that the riots had been caused by an Arabic-language version of the video posted on the site, it turned out that an English version of Innocence of the Muslims had been in circulation since July. YouTube had determined that the clip didn’t violate its terms of service, which by then were similar to Facebook’s: “Sometimes there is a fine line between what is and what is not considered hate speech. For instance, it is generally okay to criticize a nation, but not okay to make insulting generalizations about people of a particular nationality.” As the violence spread from Libya to Egypt, YouTube temporarily restricted access in those two nations, because of the confusion on the ground. But the company refused to delete the video around the world, even as Egyptian leader Mohamed Morsi, speaking at the United Nations, called on YouTube to do just that.

In a separate U.N. speech, invoking the American free-speech tradition, President Barack Obama rejected Morsi’s idea that the video could be banned simply because it was blasphemous: The First Amendment, he suggested, prohibits the government from taking sides in religious disputes. Instead, in the name of protecting U.S. foreign policy interests, the Obama administration asked YouTube to reconsider its conclusion that the video didn’t violate the company’s terms of service. By exerting this subtle pressure, Obama came close to a version of the heckler’s veto, urging for the film’s removal because of its potential to provoke riots. U.S. courts, despite Obama’s demands, discourage the government from suppressing speech because of its likely effect on an angry mob; judges generally require the authorities to control the audience, not muzzle the speaker. In this case, of course, the mobs fell well outside of U.S. jurisdiction, and the link between the video and potential violence also wasn’t clear. In fact, subsequent investigation called into question the claims of causality that had seemed obvious early on.

Like Facebook, Google and YouTube were right to focus on the content of the film, and right to conclude that, unless the incitement to violence was obvious—say, in the form of a tagline reading, “RISE UP IN VIOLENCE AGAINST MUSLIMS”—the Innocence video should remain as widely available as possible. Had YouTube made a different decision, links to the video from the many news stories that mentioned it would have been disabled, denying millions of viewers across the globe access to a newsworthy story and the chance to form their own opinions. In the heat of the moment, both the White House and the content teams at Facebook and YouTube had to make judgments about the same inflammatory material. From a free-speech perspective, the young Deciders made better decisions than the president of the United States.

The meetings that the Deciders have been holding at Stanford and elsewhere trace their origins to an earlier gathering half a world away. It was convened in 2011 by the Task Force on Internet Hate of the Inter-parliamentary Coalition for Combating Antisemitism, an initiative with an unwieldy name but a crucial mission: to try to get European parliamentarians and law-enforcement officials to work together with American civil libertarians, the Anti-Defamation League, and the leading Internet companies in shaping standards for online expression. The venue was the Houses of Parliament in London, in a paneled room near the top of the Big Ben clock tower.

After some spirited discussion, the group trooped down a winding stone staircase to the visitors’ gallery overlooking the House of Commons, from which the task force watched our chairman, Member of Parliament John Mann, deliver a blistering summary of his position on the regulation of online speech. “Freedom of expression is not always a good thing,” he told his colleagues in the House. “The Internet is now the place where anti-Semitic filth is spread.”

Because of its historical experience with fascism and communism, Europe sees the suppression of hate speech as a way of promoting democracy. Paradoxically, it has increasingly begun to pursue this goal by legislative and judicial fiat. More than 20 European countries have signed a protocol on cyber-crime that calls on member nations to expand the existing criminal penalties for “acts of a racist and xenophobic nature committed through computer systems.” The Council of Europe has also pushed for increased hate-speech regulation. It’s because of moves like those that some Deciders are worried, as one of them put it, that “we may end up in a situation where Europe slides into a situation currently occupied by Turkey, Pakistan, Saudi Arabia, and India”—countries in which claims of offensiveness can be deployed as a tool of oppression.

[Photo caption: Dave Willner (far right) consults with some of his troops at Facebook HQ. When necessary, the team blocks local access to content that runs afoul of foreign laws, while keeping it available to the rest of the world.]

A recent book, The Harm in Hate Speech, vividly confirms the Deciders’ fears. It was written by Jeremy Waldron, a New York University and Oxford professor who is a vocal champion of the European approach and its most prominent defender for American audiences. Waldron is best known for his longstanding opposition to judicial review: He believes that legislatures, rather than courts, should take the lead in formulating public policy. But this faith in the power of legislation to protect fundamental rights makes him naively optimistic about the capacity of legislatures (rather than Deciders) to balance the competing values of dignity, privacy, and free speech. He notes, accurately, that the U.S. is a global outlier in not regulating group libel and sympathetically invokes laws in countries like the United Kingdom, Germany, and France that prohibit expressions of racial and religious hatred even when there’s no immediate prospect that they will provoke violence. He maintains that hate speech creates what he calls “an environmental threat to social peace.”

Waldron’s argument has a remarkable blind spot: It virtually ignores the Internet. He begins his book by imagining a Muslim man walking with his two young daughters on a city street in New Jersey, where they are confronted with an anti-Muslim sign. Waldron believes that allowing these posters on street corners will convince members of vulnerable minorities “that they are not accepted as ordinary good-faith participants in social life.” But like the European regulators who share his views, Waldron seems unaware that the most significant free-speech debates today don’t take place on street corners, or lampposts, or sandwich boards. They take place online, where a person’s social networks and RSS feeds can filter out many unwelcome views—but where the risks that overregulation will open the door to suppression of political expression are exponentially higher than in the offline world. The secret police can’t eavesdrop on every whisper of revolution. Armed with a Great Firewall, on the other hand, repressive governments can block entire categories of information.

And they’re determined to do so. At a December meeting in Dubai, for example, a majority of the 193 countries that make up the U.N.’s International Telecommunication Union approved a proposal by China, Russia, Tajikistan, and Uzbekistan to create ominous “international norms and rules standardizing behavior of countries concerning information and cyberspace,” as a description of the measure provided by the Chinese government puts it. Waldron, who endorses an earlier U.N. resolution condemning religious defamation while emphasizing the need to protect ideological dissent, would of course never go that far. But the thing about slippery slopes is that, in practice, they can prove hard to avoid. The Dubai meeting highlights the danger of addressing hate speech on the borderless Internet by expanding international regulation: It may be authoritarian dictatorships, not enlightened democracies, who end up writing the new rules.

Waldron offers a defense of free-speech regulation for the nineteenth or early twentieth centuries that threatens the openness of the Internet in the twenty-first. He can’t clearly tell us, for example, whether his definition of hate speech would permit or ban the anti-Muhammad cartoons that Facebook refused to take down after they were first published by a Danish newspaper in 2005. Here is his tortuous analysis: “In and of themselves, the cartoons can be regarded as a critique of Islam rather than a libel on Muslims; they contribute, in their twisted way, to a debate about the connection between the prophet’s teaching and the more violent aspects of modern jihadism.” But, he adds, “They would come close to a libel on Muslims if they were calculated to suggest that most followers of Islam support political and religious violence.” He then offers this hedging conclusion: “So it might be a question of judgment whether this was an attack on Danish Muslims as well as an attack on Muhammad. But it was probably appropriate for Denmark’s Director of Public Prosecutions not to initiate legal action against the newspaper.” That byzantine verdict, offered after the fact, is all very well for Denmark’s Director of Public Prosecutions, but Waldron’s opaque standard would be impossible for an Internet first responder to apply in a matter of seconds. And Web companies have another, better reason for rejecting European-style prohibitions on group libel, with their complicated calculations about the social consequences of hate speech: Even if they could be applied by Internet screeners, they would open the door to vast subjectivity and to a less open world.

The Deciders, of course, have blind spots of their own. Their hate-speech policies tend to reflect a bias toward the civility norms of U.S. workplaces; they identify speech that might get you fired if you said it at your job, but which would be legal if shouted at a rally, and try to banish that expression from the entire Internet. But given their tremendous size and importance as platforms for free speech, companies like Facebook, Google, Yahoo, and Twitter shouldn’t try to be guardians of what Waldron calls a “well-ordered society”; instead, they should consider themselves the modern version of Oliver Wendell Holmes’s fractious marketplace of ideas—democratic spaces where all values, including civility norms, are always open for debate.

Some of the Deciders understand this. At a hate-speech panel in Houston in November, Jud Hoffman, Facebook’s global policy manager, told the audience that his company was tightening its policies, introducing a new system for identifying speech likely to provoke violence. Rather than examining the context in which speech arises, Hoffman said the company now looks for evidence of four objective standards to determine whether a threat is credible: time, place, method, and target. If three of the four criteria are satisfied, the company removes the post or video. This refined approach, Hoffman stressed, helps to protect users against the heckler’s veto, preventing speech from being removed based on the predicted reaction of the audience. It also avoids Waldron’s murky inquiries into the effect of speech on a group’s social status.
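A minimal sketch of how such a test could be mechanized, assuming hypothetical field names: only the four criteria and the three-of-four threshold come from Hoffman’s description; everything else below is invented for illustration.

```python
# Hypothetical sketch of the "three of four criteria" credibility test
# described above. The data model and field names are illustrative
# assumptions, not Facebook's actual system.

from dataclasses import dataclass
from typing import Optional

@dataclass
class ReportedThreat:
    """Objective attributes a reviewer can pull out of a reported post."""
    time: Optional[str] = None    # e.g., "tomorrow at noon"
    place: Optional[str] = None   # e.g., "the stadium parking lot"
    method: Optional[str] = None  # e.g., "with a gun"
    target: Optional[str] = None  # e.g., a named person or group

def is_credible(threat: ReportedThreat) -> bool:
    """Treat a threat as credible, and remove the post, when at least
    three of the four criteria (time, place, method, target) appear."""
    criteria = [threat.time, threat.place, threat.method, threat.target]
    return sum(c is not None for c in criteria) >= 3

# Place, method, and target with no stated time still meets three of four.
assert is_credible(ReportedThreat(place="the stadium parking lot",
                                  method="with a gun",
                                  target="a named person"))
# A vague outburst naming only a target does not.
assert not is_credible(ReportedThreat(target="a named person"))
```

Note that the test never consults the audience’s predicted reaction, which is what insulates it from the heckler’s veto.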

The company that has moved the furthest toward the American free-speech ideal is Twitter, which has explicitly concluded that it wants to be a platform for democracy rather than civility. Unlike Google and Facebook, it doesn’t ban hate speech at all; instead, it prohibits only “direct, specific threats of violence against others.” Last year, after the French government objected to the hash tag “#unbonjuif”—intended to inspire hateful riffs on the theme “a good Jew …”—Twitter blocked a handful of the resulting tweets in France, but only because they violated French law. Within days, the bulk of the tweets carrying the hash tag had turned from anti-Semitic to denunciations of anti-Semitism, confirming that the Twittersphere is perfectly capable of dealing with hate speech on its own, without heavy-handed intervention.

As corporate rather than government actors, the Deciders aren’t formally bound by the First Amendment. But to protect the best qualities of the Internet, they need to summon the First Amendment principle that the only speech that can be banned is that which threatens to provoke imminent violence, an ideal articulated by Justice Louis Brandeis in 1927. It’s time, in other words, for some American free-speech imperialism if the Web is to remain open and free in the twenty-first century.

As it happens, the big Internet companies have a commercial incentive to pursue precisely that mission. Unless Google, Facebook, Twitter, and other Internet giants draw a hard line on free speech, they will find it more difficult to resist European efforts to transform them from neutral platforms to censors-in-chief for the entire globe. Along with tougher rules on hate speech, the European regulators are weighing a sweeping new privacy right called “the right to be forgotten.” If adopted, it would allow users to demand the deletion from the Internet of photos they’ve posted themselves but come to regret—as well as photos of them that have been widely shared by others and even truthful but embarrassing blog comments others have posted about them. The onus would be on Google or Facebook or Yahoo or Twitter to take down the material as soon as a user makes the request, or to bet that a European privacy commissioner—to whom requests could be appealed—would determine that keeping the material online serves the public interest or provides journalistic, literary, or scientific value. If the companies guess wrong, they could be liable in each case for up to 2 percent of their annual incomes. A European Commission press officer stresses that each member country would choose how to implement the penalties, but for Google, the fines could hit $1 billion per incident. (Two percent of Google’s roughly $50 billion in 2012 revenue comes to about $1 billion.)

Invoking a version of the right to be forgotten, an Argentinian judge in 2009 ordered Yahoo to remove racy pictures of Argentinian pop star Virginia da Cunha that were leading users to pornographic sites when they searched for her name. Claiming it was too technologically difficult to remove only the photos, Yahoo removed all references to her on its Argentine servers, so that, if you plug “da Cunha” into the Yahoo Argentina search engine now, you get a blank page and a judicial order. While Yahoo eventually won on appeal, the big Internet companies don’t want to host blank pages—their business models depend on their ability to ease the free exchange of information. But the right to be forgotten, if put in place, could turn them into the equivalent of TV stations with weak signals, resulting in shows that forever flicker in and out. The Deciders would bolster their position in the fight if their own guidelines more strictly limited the kind of speech they will voluntarily delete.

When I spoke with Nicole Wong at Google five years ago, she seemed a little uneasy with the magnitude of the responsibility she had taken on. “I think the Decider model is inconsistent,” she said. “The Internet is big, and Google isn’t the only one making these decisions.” The recent meetings, though not intended to produce a single hate-speech standard, seem to have bolstered the Deciders’ belief in the necessity of embracing the challenges of their unique positions, and, perhaps in some cases, revealed how much they relish the work. “I think this is probably what a lot of people who go to law school want to do,” Willner told me. “And I ended up doing it by accident.”

Meanwhile, the quest for the perfect screening system continues. Some of the Internet companies are exploring the possibility of deploying an algorithm that could predict whether a given piece of content is likely to cause violence in a particular region, based on patterns of violence in the past. But hoping that the machines will one day police themselves amounts to wishful thinking. It may be that U.S. constitutional standards, applied by fickle humans, are the best way of preserving an open Internet.

Human Events – Consistent Analysis of Islam?

Why Aren’t Liberals More Critical of Islam?

BY BENJAMIN WIKER — Human Events — April 25, 2013, 10:09 a.m.

We are now—like it or not—immersed in a real debate about the nature of Islam. The background of deceased Boston bomber Tamerlan Tsarnaev is forcing us into it. There is no doubt that Tamerlan, the elder of the two perpetrators, was transformed by his relatively recent embrace of radical Islam.

And so, we have the very difficult question facing us in regard to Islam: Is the propensity to terror and jihad radical in the deepest sense of the word’s origin in Latin, radix, “root”? Is there something at the root of the Quran itself and the essential history of Islam that all too frequently creates the Tsarnaev brothers, Al-Qaeda, Osama bin Laden, Mohamed Atta, the Muslim Brotherhood, and Hamas, or is there some other source, quite accidental to Islam?

That question must be taken seriously, very seriously.

I am not going to answer that question, but rather pose another: Why do liberals have so much difficulty even allowing that very serious question to be raised?

The answer to this second question is important for the obvious reason that, if liberals won’t allow the first question to be asked, then it surely can’t be answered.

A lot hangs on not answering it, in pretending it is not a legitimate question to raise. If Islam has a significant tendency to breed domestic Islamism—not everywhere, not in every case, but in a significant number of cases—then the current administration’s obsession with, say, Tea Party terror cells is woefully misplaced.

So what is it about liberalism that makes it so difficult for it to take a clear, critical look at Islam, even while liberals have no problem excoriating Christians for every imaginable historical evil?

I believe I can give at least a partial answer, if we take a big step back from the present scene and view the history of Western liberalism on a larger scale.

Liberalism is an essentially secular movement that began within Christian culture. (In Worshipping the State, I trace it all the way back to Machiavelli in the early 1500s.) Note the two italicized aspects: secular and within.

As secular, liberalism understood itself as embracing this world as the highest good, advocating a self-conscious return to ancient pagan this-worldliness. But this embrace took place within a Christianized culture. Consequently liberalism tended to define itself directly against that which it was (in its own particular historical context) rejecting.

Modern liberalism thereby developed with a deep antagonism toward Christianity, rather than religion in general. It was culturally powerful Christianity that stood in the way of liberal secular progress in the West—not Islam, Buddhism, Hinduism, Shintoism, Druidism, etc.

And so, radical Enlightenment thinkers like Voltaire rallied their fellow secular soldiers with what would become the battle cry of the eighteenth-century Enlightenment: écrasez l’infâme, “destroy the infamous thing.” It was a cry directed, not against religion in general, but (as historian Peter Gay rightly notes) “against Christianity itself, against Christian dogma in all its forms, Christian institutions, Christian ethics, and the Christian view of man.”

Liberals therefore tended to approve of anything but Christianity. Deism was fine, or even pantheism. The eminent liberal Rousseau praised Islam and declared Christianity incompatible with good government. Hinduism and Buddhism were exotic and tantalizing among the cutting-edge intelligentsia of the 19th century. Christianity, by contrast, was the religion against which actual liberal progress had to be made.

So, other religions were whitewashed even while Christianity was continually tarred. The tarring was part of the liberal strategy aimed at unseating Christianity from its privileged cultural-legal-moral position in the West. The whitewashing of other religions was part of the strategy too, since elevating them helped deflate the privileged status of Christianity.

And so, for liberalism, nothing could be as bad as Christianity. If something goes wrong, blame Christianity first and all of Western culture that is based upon it.

This view remains integral to liberalism today, and it affects how liberals treat Islam.

That’s why liberals are disposed to interpret the Crusades as the result of Christian aggression, rather than, as they actually were, a response to Islamic aggression. That’s why Christian organizations are regularly maltreated on our liberal college campuses while Islamic student organizations and needs are graciously met. And the liberal media—ever wonder why you didn’t hear last February of the imam of the Arlington, VA mosque calling for Muslims to wage war against the enemies of Allah? Nor should we wonder why, for liberals, contemporary jihadist movements in Islam must be seen as justified reactions to Western policies—chickens coming home to roost. And that’s why, when a bomb goes off, a liberal must hope that it was perpetrated by some fundamentalist patriotic Christian group.

What liberals do not want to do is take a deep, critical look at Islam. To do so just might question some of their most basic assumptions.

Author and speaker Benjamin Wiker, Ph.D., has published eleven books, his newest being Worshipping the State: How Liberalism Became Our State Religion. His website is www.benjaminwiker.com.