Footnote 24 — The Archbishop of Atheism

Footnote 24 — Isaac Chotiner, “The Archbishop of Atheism,” New Republic (November 11, 2013), p. 27.

Interesting comments from the New Republic interview with Richard Dawkins by Isaac Chotiner – loath as I am to give Dawkins more publicity, you can read the full interview at

 http://www.newrepublic.com/article/115339/richard-dawkins-interview-archbishop-atheism

IC: People talk about “new atheism.”8 Is there something new about it?

RD: No, there isn’t. Nothing that wasn’t in Bertrand Russell or probably Robert Ingersoll. But I suppose it is more of a political effect, in that all these books happened to come out at the same time. I like to think that we have some influence.

IC: Sometimes when I read the so-called new atheists, there’s almost a certain intellectual respect for the fundamentalist thinkers. For being more intellectually coherent.

RD: I’m interested you noticed that. There’s an element of paradox there—that at least you know where you stand with the fundamentalists. I mean, they’re absolutely clear in their error and their stupidity, and so you can really go after them. But the so-called sophisticated theologians, especially ones who are very nice, like Rowan Williams and Jonathan Sacks, you sometimes don’t quite know where you are with them. You feel that when you attack them, you’re attacking a wet sponge.

8  This term is generally applied to the work of Dawkins, Sam Harris, Daniel Dennett, and Christopher Hitchens, but it does not have a meaning that is substantively distinct from earlier forms of atheism.

From “The Archbishop of Atheism,” New Republic, November 11, 2013 (p. 27 of the print edition).

The Delete Squad: Free Speech on the Internet (New Republic)

The Delete Squad: Google, Twitter, Facebook and the new global battle over the future of free speech

BY JEFFREY ROSEN  — New Republic — April 29, 2013

A year ago this month, Stanford Law School hosted a little-noticed meeting that may help decide the future of free speech online. It took place in the faculty lounge, where participants were sustained in their deliberations by bagels and fruit platters. Among the roughly two dozen attendees, the most important were a group of fresh-faced tech executives, some of them in t-shirts and unusual footwear, who are in charge of their companies’ content policies. Their positions give these young people more power over who gets heard around the globe than any politician or bureaucrat—more power, in fact, than any president or judge.

Collectively, the tech leaders assembled that day in Palo Alto might be called “the Deciders,” in a tribute to Nicole Wong, the legal director of Twitter, whose former colleagues affectionately bestowed on her the singular version of that nickname while she was deputy general counsel at Google. At the dawn of the Internet age, some of the nascent industry’s biggest players staked out an ardently hands-off position on hate speech; Wong was part of the generation that discovered firsthand how untenable this extreme libertarian position was. In one representative incident, she clashed with the Turkish government over its demands that YouTube take down videos posted by Greek soccer fans claiming that Kemal Ataturk was gay. Wong and her colleagues at Google agreed to block access to the clips in Turkey, where insulting the country’s founder is illegal, but Turkish authorities—who insisted on a worldwide ban—responded by denying their citizens access to the whole site for two years. “I’m taking my best guess at what will allow our products to move forward in a country,” she told me in 2008. The other Deciders, who don’t always have Wong’s legal training, have had to make their own guesses, each with ramifications for their company’s bottom line.

The session at Stanford concluded with the attendees passing a resolution for the formation of an “Anti-Cyberhate Working Group,” then heading over to Facebook’s headquarters to drink white wine out of plastic cups at a festive reception. But despite the generally laid-back vibe, the meeting, part of a series of discussions dating back more than a year, had a serious agenda. Because of my work on the First Amendment, I was asked to join the conversations, along with other academics, civil libertarians, and policymakers from the United States and abroad. Although I can’t identify all the participants by name, I am at liberty, according to the ground rules of our meetings, to describe the general thrust of the discussions, which are bringing together the Deciders at a pivotal time.

As online communication proliferates—and the ethical and financial costs of misjudgments rise—the Internet giants are grappling with the challenge of enforcing their community guidelines for free speech. Some Deciders see a solution in limiting the nuance involved in their protocols, so that only truly dangerous content is removed from circulation. But other parties have very different ideas about what’s best for the Web. Increasingly, some of the Deciders have become convinced that the greatest threats to free speech during the next decade will come not just from authoritarian countries like China, Russia, and Iran, which practice political censorship and have been pushing the United Nations to empower more of it, but also from a less obvious place: European democracies contemplating broad new laws that would require Internet companies to remove posts that offend the dignity of an individual, group, or religion. The Deciders are right to be concerned about the balkanization of the Internet. There is, moreover, a bold way to respond to that threat. The urgent question is whether the Deciders will embrace it.

At Facebook, the Deciders are led by Dave Willner, the head of the company’s content policy team. His career provides a kind of case study in how the Deciders’ thinking has evolved. Now 28, Willner joined Facebook five years ago, working night shifts in the help center, where he answered e-mails from users about how to use the photo uploader. Within a year, he had been promoted to work on content policy. Today, he manages a crew of six employees who work around shared desks at Facebook’s headquarters in Menlo Park; rather than a global hub for content control, their space, festooned with colorful posters, more closely resembles a neater-than-usual college dorm. Toiling under Willner’s team are a few hundred “first responders” who review complaints about nudity, porn, violence, and hate speech from offices in Menlo Park, Austin, Dublin, and Hyderabad, India. (Willner is also married to a fellow Facebook employee who now leads the User Safety team, responsible, among other things, for child protection and suicide prevention; one imagines rather heady dinner chatter.) Facebook had only 100 million users when Willner was hired, compared with the billion-plus it has now. Each day, its users upload more than 300 million photos alone; every week, Facebook receives more than two million requests to remove material. (The New Republic’s owner was a Facebook co-founder.)

When I first met Willner at the Stanford meeting, he wore an orange T-shirt, a gray striped sweater, blue corduroy trousers, round glasses, and a bookish beard—looking very much like the former anthropology and archeology major that he was before starting at Facebook. He took a class about Islam in his senior year, which he says comes in handy in his current job. At the time Willner joined Facebook’s content policy team, the company had no rules on the books for what speech violated its terms of service. So Willner decided to write them himself. He chose as his model university anti-harassment codes, since he himself had just graduated from college. But he soon found that vague standards prohibiting speech that creates a “hostile environment” weren’t practical. The Facebook screeners scattered across three continents brought vastly different cultural backgrounds to their roles and had to rule on thousands of pieces of content daily. The sheer range and complexity of the judgment calls that had to be made compounded the challenge: Is this person naked? Is a photo of Hitler racism, or political commentary? Is it bullying to post a photo of someone distorted through Photoshop? Is posting a photo of a gun a credible threat of violence? What if the gun is from the cover of a rap album?

Willner had read John Stuart Mill in college and understood the crowning achievement of the American First Amendment tradition, which allows speech to be banned only when it is intended—and likely—to incite imminent violence or lawless action. By contrast, as Willner was learning, European law draws a tighter line, prohibiting so-called group libel, or speech that offends the dignity of members of a protected class and lowers their standing in society. Willner decided that neither method would do: Both the U.S. focus on the speaker’s intent and the European focus on the social consequences of speech would be too subjective for a 22-year-old content reviewer in Dublin or Hyderabad to apply in 20 seconds. What Facebook needed, he came to believe, was a hate-speech policy that focused on concrete, easily categorized actions, so that the decision to remove controversial content, or to escalate the dispute to Willner and his colleagues in Silicon Valley, could be based on nothing more than the information contained within the form that Facebook users file to complain about offensive posts and applied like an algorithm. He sought an engineer’s response to a thorny historical and legal problem—a very Silicon Valley approach.

At first, it didn’t go well. To try to spell out what qualified as a hateful post, Facebook hired an outside firm to write an “Operations Manual for Live Content Moderators,” which was subsequently leaked. Some of the distinctions made by the document were ridiculed by the blogosphere for being jesuitical: “Blatant (obvious) depictions of camel toes and moose knuckles” were banned in the “sex and nudity category,” while the graphic content category held that “bodily fluids (except semen) are ok to show unless a human being is captured in the process.” Furthermore, the draft standards seemed to ban all “Holocaust denial which focuses on hate speech” and “all attacks on Ataturk (visual and text)” around the world, even though Holocaust denial is illegal only in certain countries, including France and Germany, and attacking Ataturk is outlawed only in Turkey. In response to the uproar, Facebook fired the consulting company, and Willner redoubled his efforts to minimize the opportunities for subjective verdicts by his first responders.

Eventually, the project led to Facebook’s most important free-speech decision: to ban attacks on groups, but not on institutions. The current community standards declare: “We do not permit individuals … to attack others based on their race, ethnicity, national origin, religion, sex, gender, sexual orientation, disability or medical condition.” But Facebook allows caricatures that depict members of protected groups doing unflattering things, as well as attacks on their faith or leaders. It’s only when a user categorically reviles a protected group that he crosses the line: “I hate Islam” or “I hate the Pope” is fine; “I hate Muslims” or “I hate Catholics” is not. The distinctions might be seen as a triumph of reductionism. But they have empowered the company to resist growing calls for the wholesale deletion of speech that foreign governments and their citizens consider blasphemous.
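Part of the rule’s appeal is that it can be stated almost mechanically: an attack crosses the line only when its object is a protected category of people rather than a belief, an institution, or a leader. Here is a minimal sketch of that distinction, purely for illustration; the word lists and the naive “I hate X” matching are hypothetical simplifications, not Facebook’s actual classifier:

```python
# Toy illustration of the group-vs-institution rule described above.
# Real moderation decisions involve far more context; the category
# lists and matching logic here are hypothetical simplifications.

PROTECTED_GROUPS = {"muslims", "catholics", "jews"}   # categories of people
INSTITUTIONS = {"islam", "catholicism", "the pope"}   # allowed targets, for contrast

def crosses_the_line(statement: str) -> bool:
    """Flag categorical attacks on protected groups of people,
    while allowing attacks on institutions, faiths, or leaders."""
    text = statement.lower().rstrip(".!")
    prefix = "i hate "
    if not text.startswith(prefix):
        return False  # this toy rule only handles "I hate X" statements
    return text[len(prefix):] in PROTECTED_GROUPS

print(crosses_the_line("I hate Islam"))    # False: institution, allowed
print(crosses_the_line("I hate Muslims"))  # True: protected group, removed
```

The design goal, as the article describes it, is that a reviewer (or a script) needs nothing beyond the complaint itself to apply the rule.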

Facebook’s new policy was dramatically tested last September, when the company refused to remove Innocence of Muslims, the anti-Muhammad video that was initially blamed for causing the Benghazi riots that led to the death of the American ambassador to Libya. After watching the video, Willner and his colleagues concluded that, because no one in it said anything explicitly denigrating about Muslims, there was nothing to ban.

As the world watched footage of the body of Christopher Stevens being dragged through the streets, YouTube reached a similar decision. Despite allegations that the riots had been caused by an Arabic-language version of the video posted on the site, it turned out that an English version of Innocence of Muslims had been in circulation since July. YouTube had determined that the clip didn’t violate its terms of service, which by then were similar to Facebook’s: “Sometimes there is a fine line between what is and what is not considered hate speech. For instance, it is generally okay to criticize a nation, but not okay to make insulting generalizations about people of a particular nationality.” As the violence spread from Libya to Egypt, YouTube temporarily restricted access in those two nations, because of the confusion on the ground. But the company refused to delete the video around the world, even as Egyptian leader Mohamed Morsi, speaking at the United Nations, called on YouTube to do just that.

In a separate U.N. speech, invoking the American free-speech tradition, President Barack Obama rejected Morsi’s idea that the video could be banned simply because it was blasphemous: The First Amendment, he suggested, prohibits the government from taking sides in religious disputes. Instead, in the name of protecting U.S. foreign policy interests, the Obama administration asked YouTube to reconsider its conclusion that the video didn’t violate the company’s terms of service. By exerting this subtle pressure, Obama came close to endorsing a version of the heckler’s veto, urging the film’s removal because of its potential to provoke riots. U.S. courts, by contrast, discourage the government from suppressing speech because of its likely effect on an angry mob; judges generally require the authorities to control the audience, not muzzle the speaker. In this case, of course, the mobs fell well outside of U.S. jurisdiction, and the link between the video and potential violence also wasn’t clear. In fact, subsequent investigation called into question the claims of causality that had seemed obvious early on.

Like Facebook, Google and YouTube were right to focus on the content of the film, and right to conclude that, unless the incitement to violence was obvious—say, in the form of a tagline reading, “RISE UP IN VIOLENCE AGAINST MUSLIMS”—the Innocence video should remain as widely available as possible. Had YouTube made a different decision, links to the video from the many news stories that mentioned it would have been disabled, denying millions of viewers across the globe access to a newsworthy story and the chance to form their own opinions. In the heat of the moment, both the White House and the content teams at Facebook and YouTube had to make judgments about the same inflammatory material. From a free-speech perspective, the young Deciders made better decisions than the president of the United States.

The meetings that the Deciders have been holding at Stanford and elsewhere trace their origins to an earlier gathering half a world away. It was convened in 2011 by the Task Force on Internet Hate of the Inter-parliamentary Coalition for Combating Antisemitism, an initiative with an unwieldy name but a crucial mission: to try to get European parliamentarians and law-enforcement officials to work together with American civil libertarians, the Anti-Defamation League, and the leading Internet companies in shaping standards for online expression. The venue was the Houses of Parliament in London, in a paneled room near the top of the Big Ben clock tower.

After some spirited discussion, the group trooped down a winding stone staircase to the visitors’ gallery overlooking the House of Commons, from which the task force watched our chairman, Member of Parliament John Mann, deliver a blistering summary of his position on the regulation of online speech. “Freedom of expression is not always a good thing,” he told his colleagues in the House. “The Internet is now the place where anti-Semitic filth is spread.”

Because of its historical experience with fascism and communism, Europe sees the suppression of hate speech as a way of promoting democracy. Paradoxically, it has increasingly begun to pursue this goal by legislative and judicial fiat. More than 20 European countries have signed a protocol on cyber-crime that calls on member nations to expand the existing criminal penalties for “acts of a racist and xenophobic nature committed through computer systems.” The Council of Europe has also pushed for increased hate-speech regulation. It’s because of moves like those that some Deciders are worried, as one of them put it, that “we may end up in a situation where Europe slides into a situation currently occupied by Turkey, Pakistan, Saudi Arabia, and India”—countries in which claims of offensiveness can be deployed as a tool of oppression.

[Photo caption] Dave Willner (far right) consults with some of his troops at Facebook HQ. When necessary, the team blocks local access to content that runs afoul of foreign laws, while keeping it available to the rest of the world.

A recent book, The Harm in Hate Speech, vividly confirms the Deciders’ fears. It was written by Jeremy Waldron, a New York University and Oxford professor who is a vocal champion of the European approach and its most prominent defender for American audiences. Waldron is best known for his longstanding opposition to judicial review: He believes that legislatures, rather than courts, should take the lead in formulating public policy. But this faith in the power of legislation to protect fundamental rights makes him naively optimistic about the capacity of legislatures (rather than Deciders) to balance the competing values of dignity, privacy, and free speech. He notes, accurately, that the U.S. is a global outlier in not regulating group libel and sympathetically invokes laws in countries like the United Kingdom, Germany, and France that prohibit expressions of racial and religious hatred even when there’s no immediate prospect that they will provoke violence. He maintains that hate speech creates what he calls “an environmental threat to social peace.”

Waldron’s argument has a remarkable blind spot: It virtually ignores the Internet. He begins his book by imagining a Muslim man walking with his two young daughters on a city street in New Jersey, where they are confronted with an anti-Muslim sign. Waldron believes that allowing these posters on street corners will convince members of vulnerable minorities “that they are not accepted as ordinary good-faith participants in social life.” But like the European regulators who share his views, Waldron seems unaware that the most significant free-speech debates today don’t take place on street corners, or lampposts, or sandwich boards. They take place online, where a person’s social networks and RSS feeds can filter out many unwelcome views—but where the risks that overregulation will open the door to suppression of political expression are exponentially higher than in the offline world. The secret police can’t eavesdrop on every whisper of revolution. Armed with a Great Firewall, on the other hand, repressive governments can block entire categories of information.

And they’re determined to do so. At a December meeting in Dubai, for example, a majority of the 193 countries that make up the U.N.’s International Telecommunication Union approved a proposal by China, Russia, Tajikistan, and Uzbekistan to create ominous “international norms and rules standardizing behavior of countries concerning information and cyberspace,” as a description of the measure provided by the Chinese government puts it. Waldron, who endorses an earlier U.N. resolution condemning religious defamation while emphasizing the need to protect ideological dissent, would of course never go that far. But the thing about slippery slopes is that, in practice, they can prove hard to avoid. The Dubai meeting highlights the danger of addressing hate speech on the borderless Internet by expanding international regulation: It may be authoritarian dictatorships, not enlightened democracies, who end up writing the new rules.

Waldron offers a defense of free-speech regulation for the nineteenth or early twentieth centuries that threatens the openness of the Internet in the twenty-first. He can’t clearly tell us, for example, whether his definition of hate speech would permit or ban the anti-Muhammad cartoons that Facebook refused to take down after they were first published by a Danish newspaper in 2005. Here is his tortuous analysis: “In and of themselves, the cartoons can be regarded as a critique of Islam rather than a libel on Muslims; they contribute, in their twisted way, to a debate about the connection between the prophet’s teaching and the more violent aspects of modern jihadism.” But, he adds, “They would come close to a libel on Muslims if they were calculated to suggest that most followers of Islam support political and religious violence.” He then offers this hedging conclusion: “So it might be a question of judgment whether this was an attack on Danish Muslims as well as an attack on Muhammad. But it was probably appropriate for Denmark’s Director of Public Prosecutions not to initiate legal action against the newspaper.” That byzantine verdict, offered after the fact, is all very well for Denmark’s Director of Public Prosecutions, but Waldron’s opaque standard would be impossible for an Internet first responder to apply in a matter of seconds. And Web companies have another, better reason for rejecting European-style prohibitions on group libel, with their complicated calculations about the social consequences of hate speech: Even if they could be applied by Internet screeners, they would open the door to vast subjectivity and to a less open world.

The Deciders, of course, have blind spots of their own. Their hate-speech policies tend to reflect a bias toward the civility norms of U.S. workplaces; they identify speech that might get you fired if you said it at your job, but which would be legal if shouted at a rally, and try to banish that expression from the entire Internet. But given their tremendous size and importance as platforms for free speech, companies like Facebook, Google, Yahoo, and Twitter shouldn’t try to be guardians of what Waldron calls a “well-ordered society”; instead, they should consider themselves the modern version of Oliver Wendell Holmes’s fractious marketplace of ideas—democratic spaces where all values, including civility norms, are always open for debate.

Some of the Deciders understand this. At a hate-speech panel in Houston in November, Jud Hoffman, Facebook’s global policy manager, told the audience that his company was tightening its policies, introducing a new system for identifying speech likely to provoke violence. Rather than examining the context in which speech arises, Hoffman said, the company now looks for evidence of four objective criteria to determine whether a threat is credible: time, place, method, and target. If three of the four criteria are satisfied, the company removes the post or video. This refined approach, Hoffman stressed, helps to protect users against the heckler’s veto, preventing speech from being banned based on the predicted reaction of the audience. It also avoids Waldron’s murky inquiries into the effect of speech on a group’s social status.
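Hoffman’s test reduces to a simple counting rule over four flags, which is what makes it tractable for a first responder. A minimal sketch, assuming a complaint form that records whether each criterion is specified (the function and data structure are hypothetical, not Facebook’s actual implementation):

```python
# Illustrative sketch of the "three of four" credibility test described
# above: a threat is treated as credible if it specifies at least three
# of time, place, method, and target. The code itself is hypothetical.

CRITERIA = ("time", "place", "method", "target")

def is_credible_threat(flags: dict) -> bool:
    """Return True if at least three of the four objective criteria
    are present in a flagged post."""
    return sum(1 for c in CRITERIA if flags.get(c)) >= 3

# A post naming a place, a method, and a target, but no time, still
# meets the three-of-four bar and would be removed under this rule.
print(is_credible_threat({"place": True, "method": True, "target": True}))  # True
```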

The company that has moved the furthest toward the American free-speech ideal is Twitter, which has explicitly concluded that it wants to be a platform for democracy rather than civility. Unlike Google and Facebook, it doesn’t ban hate speech at all; instead, it prohibits only “direct, specific threats of violence against others.” Last year, after the French government objected to the hash tag “#unbonjuif”—intended to inspire hateful riffs on the theme “a good Jew …”—Twitter blocked a handful of the resulting tweets in France, but only because they violated French law. Within days, the bulk of the tweets carrying the hash tag had turned from anti-Semitic taunts to denunciations of anti-Semitism, confirming that the Twittersphere is perfectly capable of dealing with hate speech on its own, without heavy-handed intervention.

As corporate rather than government actors, the Deciders aren’t formally bound by the First Amendment. But to protect the best qualities of the Internet, they need to summon the First Amendment principle that the only speech that can be banned is that which threatens to provoke imminent violence, an ideal articulated by Justice Louis Brandeis in 1927. It’s time, in other words, for some American free-speech imperialism if the Web is to remain open and free in the twenty-first century.

As it happens, the big Internet companies have a commercial incentive to pursue precisely that mission. Unless Google, Facebook, Twitter, and other Internet giants draw a hard line on free speech, they will find it more difficult to resist European efforts to transform them from neutral platforms to censors-in-chief for the entire globe. Along with tougher rules on hate speech, the European regulators are weighing a sweeping new privacy right called “the right to be forgotten.” If adopted, it would allow users to demand the deletion from the Internet of photos they’ve posted themselves but come to regret—as well as photos of them that have been widely shared by others and even truthful but embarrassing blog comments others have posted about them. The onus would be on Google or Facebook or Yahoo or Twitter to take down the material as soon as a user makes the request or make the bet that a European privacy commissioner—to whom requests could be appealed—would determine that keeping the material online serves the public interest or provides journalistic, literary, or scientific value. If the companies guess wrong, they could be liable in each case for up to 2 percent of their annual incomes. A European Commission press officer stresses that each member country would choose how to implement the penalties, but for Google, the fines could hit $1 billion per incident.
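(The arithmetic behind that figure, assuming Google’s 2012 revenue of roughly $50 billion: 2 percent of $50 billion comes to about $1 billion per incident.)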

Invoking a version of the right to be forgotten, an Argentinian judge in 2009 ordered Yahoo to remove racy pictures of Argentinian pop star Virginia da Cunha that were leading users to pornographic sites when they searched for her name. Claiming it was too technologically difficult to remove only the photos, Yahoo removed all references to her on its Argentine servers, so that, if you plug “da Cunha” into the Yahoo Argentina search engine now, you get a blank page and a judicial order. While Yahoo eventually won on appeal, the big Internet companies don’t want to host blank pages—their business models depend on their ability to ease the free exchange of information. But the right to be forgotten, if put in place, could turn them into the equivalent of TV stations with weak signals, resulting in shows that forever flicker in and out. The Deciders would bolster their position in the fight if their own guidelines more strictly limited the kind of speech they will voluntarily delete.

When I spoke with Nicole Wong at Google five years ago, she seemed a little uneasy with the magnitude of the responsibility she had taken on. “I think the Decider model is inconsistent,” she said. “The Internet is big, and Google isn’t the only one making these decisions.” The recent meetings, though not intended to produce a single hate-speech standard, seem to have bolstered the Deciders’ belief in the necessity of embracing the challenges of their unique positions and, perhaps in some cases, how much they relish the work. “I think this is probably what a lot of people who go to law school want to do,” Willner told me. “And I ended up doing it by accident.”

Meanwhile, the quest for the perfect screening system continues. Some of the Internet companies are exploring the possibility of deploying an algorithm that could predict whether a given piece of content is likely to cause violence in a particular region, based on patterns of violence in the past. But hoping that the machines will one day police themselves amounts to wishful thinking. It may be that U.S. constitutional standards, applied by fickle humans, are the best way of preserving an open Internet.

Death of Blogs? Maybe Not!

A List of Worthwhile Blogs Occasioned by NYT-Reported Demise of Blogging

There’s been a lot of chatter about The Death of Blogs the last few days, among media both mainstream and conservative, prompted in part by the New York Times‘ decision to shutter a few of its own. Marc Tracy, writing at the New Republic, bemoans the replacement of thoughtful blogging by an “endless stream of isolated dollops of news”:

Smaller brands within brands, be they rubrics like “Media Decoder” or personalities like “Ben Smith,” make increasingly little sense in a landscape where writers can cultivate their own, highly discriminating followings via social media like Facebook, Reddit, and Twitter, while readers can curate their own, highly discriminating feeds. In this world, there is no place for the blog, because to do anything other than put “All Media News In One Place” is incredibly inefficient.

Andrew Sullivan and Ann Althouse are skeptical. The cover story in the Columbia Journalism Review touches on similar themes, but with conclusions that it seems to me bloggers should be somewhat heartened by, especially the idea that “many young consumers prefer to have their news filtered by an individual or a publication with a personality rather than by a traffic-seeking robot or algorithm.”

Truth be told, I don’t have much time for the conservative blogosphere for the simple reason that there isn’t much personality. So much of it is just repetitive outrage about Obama appointees or Brett Kimberlin’s criminal record that it’s not really a useful way to keep yourself informed. I usually stick to Ace of Spades, Outside the Beltway, RedState, and the Gateway Pundit for up-to-date right-of-center news. The conservative blogosphere’s alleged decline strikes me as a mixed blessing at worst, if it’s even true, since the best blogs, the above included, will keep their readers and even gain more as the lower-quality ones drop off. Regardless, there are underappreciated gems and they deserve to be encouraged, so in the interest of doing so, here are a few that have kept me coming back. They range widely in ideological orientation, posting frequency, popularity, and in pretty much every other way, but I’ve tried to stick to ones you might not have heard of:

Against Crony Capitalism — What it sounds like
Booker Rising — A blogospheric home for black moderates and conservatives
A Chequer-Board of Nights and Days (Pejman Yousefzadeh) — Foreign policy and politics
Garvey’s Ghost — A Garveyite’s perspective on politics
Gucci Little Piggy — Social science commentary
The Hipster Conservative — Religion, politics, and philosophy for conservative hipsters
Iowahawk (David Burge) — Some of the best political humor on the web
Jesus Radicals — Theology from the radical left
L’Hote (Freddie DeBoer) — Left-wing contrarianism
Naked DC — Insider-y political commentary
Notes on Liberty — A solid libertarian group blog
Pinstripe Pulpit (Alan Cornett) — Religious and sartorial matters from a former assistant of the late Russell Kirk
Prez16 (Christian Heinze) — Clever, digestible political commentary
The Rancid Honeytrap — Commentary from the left
Ribbon Farm (Venkatesh Rao) — Economics and social commentary
Rorate Caeli — Traditional Catholicism
Slouching Towards Columbia (Dan Trombly) — Liberal realist foreign policy
The Trad — Culture and style for trads
Turnabout — Jim Kalb’s commentary
United Liberty — Another solid libertarian group blog, frequently updated, and great for breaking news

In the future, I’ll try to do a better job engaging with some of these folks. And if you have more to recommend, leave ‘em in the comments.

New Republic: Affirmative Action

Race-Based Affirmative Action Makes Things Worse, Not Better

The Supreme Court made clear last month that it would keep affirmative action racial preferences on the front burner of the national conversation for at least the next year. This autumn, the Court will review a federal appeals court’s 8-7 decision striking down a 2006 Michigan voter initiative that banned racial preferences in state university admissions. Meanwhile, the justices are drafting their opinions in a reverse-discrimination lawsuit by a disappointed white applicant to the University of Texas that was argued last October. That decision, the first major constitutional challenge to racial preferences since 2003, is expected by June 28.

With four ardent conservative opponents of racial preferences likely to be joined by Justice Anthony Kennedy—who has never upheld a racial preference—the Court seems likely to strike down the Texas program but not likely to outlaw racial preferences entirely. The Court also seems likely to reverse the federal appeals court decision in Michigan and uphold the state’s 2006 initiative banning racial preferences in state programs. (The issue there is not whether it’s unconstitutional for universities to use racial preferences excessively, but whether it’s unconstitutional for voters to prohibit them entirely.)

The big question, however, is whether the Court will rule so narrowly that its decisions will have little impact outside Texas and Michigan, or will, for the first time, impose serious restrictions on the very large racial preferences that are routine at almost all of the nation’s selective universities. Will these cases mean a dramatic overhaul, and shrinkage, of race-based affirmative action as we know it?

The question has never been more important or more complicated. A rapidly growing body of social science evidence shows that admissions preferences cause great harm to many of the supposed beneficiaries, and that such racial preferences make socio-economic inequality worse, not better. Racial preferences typically produce freshman classes with big SAT and GPA gaps among black, Hispanic, white, and Asian students. At the University of Texas, for example, the black-Asian mean SAT gaps have run above 450 points out of a total possible score of 2400. And studies suggest that many colleges systematically discriminate against high-achieving Asians, as they once did to Jews, to hold down their admission numbers.

The two pending cases, and others, have focused on universities’ discrimination against whites and Asians, but the justices must be aware of recent research that casts doubt on the traditional presumption that racial preferences benefit recipients. For example, studies have shown that disproportionate percentages of preferentially admitted black freshmen who aspire to major in science and other tough subjects are forced by bad grades to move to softer majors—and that they would be more likely to achieve their ambitions had they gone to less elite schools for which they were better qualified.1

As for the benefits to white students, I don’t doubt that exposure to people of different races improves everyone’s education if it occurs naturally. But engineering diversity through racial preferences aggravates racial stereotypes and resentments and often leads to self-segregation and social isolation, as detailed in Russell Nieli’s powerful 2012 book, Wounds That Will Not Heal. Another study by Peter Arcidiacono and colleagues shows that students are much more likely to form friendships in college with other students whose level of academic preparation is similar to their own.

Social science evidence now shows that while passed-over whites and Asians suffer (modestly and temporarily, in my view) from race-based affirmative action, the more seriously damaged victims of large racial preferences are the many good black and Hispanic students who are doomed to academic struggle, and damaged self-confidence, when put in direct competition with academically much-better-qualified students. Universities misleadingly assure these students that they will do well, while ignoring and seeking to suppress evidence showing the enormous size of their preferences and poor academic results. No university of which I am aware, for example, tells its racial-preference recruits that more than half of black students end up in the bottom twenty percent of their college classes and the bottom ten percent of their law school classes.2 Racial preferences as used today pervert a once-egalitarian cause by pushing many fairly affluent black and Hispanic students ahead of working-class and poor Asians and whites. So addicted are the universities to racial preferences, and so fearful are most politicians of being trashed as racists, that the Supreme Court may be the only institution that could restore the original ideals of affirmative action.

I hope that in the Texas case, or perhaps in a future case, the justices will order two modest reforms: require schools to disclose data showing the size, operation, and effects on academic performance of their racial preferences; and mandate that universities stop preferring blacks and Hispanics over better-qualified Asians and whites who are also less well-off.3 The first reform would equip admitted applicants and policymakers alike to make better-informed decisions. The second would provide healthy incentives for selective schools both to enroll more outstanding working-class and poor students and to reduce the mismatch problem.

It goes without saying that educational gaps are the biggest reason for the racial and socioeconomic inequality that causes such deep wounds in our social fabric. But the evidence shows that racial preferences make things worse, not better, by setting up many of our best black and Hispanic students for academic frustration, by neglecting our most promising working-class and low-income students, and by papering over the real problem.

The real problem is the huge racial gap in early academic achievement symbolized by the undisputed fact that the average black twelfth grader has acquired no more academic learning than the average white eighth grader. The real solution is to improve the education received by these children from birth through high school. Every bit of energy that is now being spent on sustaining a failed system of racial admissions preferences would be far better invested in teaching kids enough to make them academically competitive when they arrive at college.

Stuart Taylor, Jr., a Washington writer, is the coauthor, with Richard Sander, a UCLA law professor, of Mismatch: How Affirmative Action Hurts Students It’s Intended to Help, and Why Universities Won’t Admit It. They also filed a brief in the University of Texas case.